Fine-Grained Knowledge Selection and Restoration for Non-exemplar Class Incremental Learning

Jiang-Tian Zhai 1, Xialei Liu 1,*, Lu Yu 2, Ming-Ming Cheng 1
1 VCIP, CS, Nankai University  2 Tianjin University of Technology

Abstract

Non-exemplar class incremental learning aims to learn both new and old tasks without access to any training data from the past. This strict restriction makes it harder to alleviate catastrophic forgetting, since all techniques can only be applied to current-task data. Facing this challenge, we propose a novel framework of fine-grained knowledge selection and restoration. Conventional knowledge distillation-based methods place overly strict constraints on the network parameters and features to prevent forgetting, which limits the training of new tasks. To loosen this constraint, we propose a novel fine-grained selective patch-level distillation that adaptively balances plasticity and stability: task-agnostic patches can be used to preserve the decision boundary of old tasks, while patches containing the important foreground are favorable for learning the new task. Moreover, for fine-grained knowledge restoration, we employ a task-agnostic mechanism that uses current-task samples to generate more realistic prototypes of old tasks, reducing classifier bias. Extensive experiments on CIFAR-100, TinyImageNet, and ImageNet-Subset demonstrate the effectiveness of our method. Code is available at https://github.com/scok30/vit-cil.

Introduction

Deep neural networks have achieved superior performance in many computer vision tasks. Since the real world is open and dynamic, it is important for deployed networks to be capable of learning new knowledge. Incremental learning aims at learning new tasks without forgetting previous ones, a crucial property for deep neural networks applied in real-world scenarios.
For example, a face recognition system may in recent years encounter faces wearing masks, and it is essential to adapt to such new circumstances. However, simply fine-tuning deep neural networks on the new task causes severe catastrophic forgetting (Robins 1995), since the network adjusts its parameters almost completely to the new task (Goodfellow et al. 2013; McCloskey and Cohen 1989). To address this problem, many recent works (Castro et al. 2018; Douillard et al. 2020; Hou et al. 2019; Liu, Schiele, and Sun 2021; Rebuffi et al. 2017; Yan, Xie, and He 2021) have been proposed to alleviate catastrophic forgetting.

*Corresponding author (xialei@nankai.edu.cn). Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Comparison between conventional knowledge distillation (KD) and our patch-level fine-grained knowledge selection method based on the vision transformer architecture. Conventional KD treats the image as a whole, while patch embeddings enable us to strike a better trade-off between stability and plasticity for different local regions. Moreover, a task-agnostic fine-grained prototype restoration is proposed to better replay the old knowledge.

In this paper, we consider a challenging class incremental learning setting termed non-exemplar class incremental learning (NECIL) (Gao et al. 2022; Zhu et al. 2021a, 2022), which forbids the model from preserving any old-task samples while learning sequential tasks with samples from disjoint classes. Compared with standard settings, non-exemplar class incremental learning accounts for data privacy and storage burden.
To address catastrophic forgetting in this setting, researchers have introduced various approaches for preserving acquired knowledge without the need for past data. Early work such as LwF (Li and Hoiem 2017) introduces vanilla knowledge distillation between the old and current models, but does not perform well under this challenging setting. Some replay-based methods narrow the gap by generating synthetic images, although their effectiveness depends on the quality of the generated images. Recently, DeepInversion (Yin et al. 2020) proposed inverting trained networks from random noise to generate images as pseudo exemplars, which are trained together with current data. R-DFCIL (Gao et al. 2022) introduces relation-guided supervision to overcome the severe domain gap between synthetic and real data and generate more realistic old samples for replay. In comparison, PASS (Zhu et al. 2021b) considers the problem of classifier bias and proposes to maintain the decision boundary of old tasks by replaying augmented prototypes. SSRE (Zhu et al. 2022) follows this design and introduces a dynamically expanded model structure to adapt to new-task knowledge while reducing forgetting of old tasks. However, most of these methods depend on a common basic module for alleviating forgetting: knowledge distillation (Hinton, Vinyals, and Dean 2015). When applied to NECIL, knowledge distillation takes current-task samples and reduces the representation distance between the old and new models, aiming to enhance stability and reduce forgetting of old knowledge while learning new tasks. However, this operation partially contradicts learning the current task, especially in NECIL settings, since we expect the model to remain plastic on new data.
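To make the criticized baseline concrete, here is a minimal NumPy sketch of sample-level feature distillation (our own illustrative code, not the authors' implementation): the current model's whole-image representation is pulled toward the frozen old model's, regardless of which regions matter for the new task.

```python
import numpy as np

def vanilla_kd_loss(feat_new, feat_old):
    # Sample-level distillation: match the whole-image features of the
    # current model to those of the frozen old model (MSE over the batch).
    return float(np.mean((feat_new - feat_old) ** 2))

rng = np.random.default_rng(0)
feat_old = rng.normal(size=(4, 384))                    # frozen task t-1 model
feat_new = feat_old + 0.1 * rng.normal(size=(4, 384))   # current model drifts
loss = vanilla_kd_loss(feat_new, feat_old)              # small non-zero penalty
```

Because the penalty is uniform over the whole feature, it constrains foreground and background regions equally, which is exactly the rigidity the patch-level method below relaxes.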
The intrinsic reason is that previous methods apply this distillation at the sample level, which is less informative since only part of the image, namely the foreground, is highly related to the current classification task. If we can apply different strategies to task-relevant and task-agnostic regions separately, we can achieve a much more fine-grained trade-off between plasticity and stability. In other words, the model is taught to remember representations of common background regions while learning discriminative, task-relevant knowledge from the foreground. A conceptual illustration is shown in Figure 1. Another challenge of NECIL is the classifier bias across tasks described in PASS (Zhu et al. 2021b). The prototype is a one-dimensional embedding of the input image used for classification, computed by the network (e.g., the result of average pooling on the CNN feature map, or the [CLS] embedding of a vision transformer). Prototype replay in PASS and SSRE augments the class center (the average over all sample prototypes of a class) to synthesize sample-level prototypes of old tasks, which are used to maintain the old decision boundary. However, as noted in (Ma et al. 2023), sample prototypes are not necessarily normally distributed, because DNNs tend to overfit the training samples of the new task. These biased synthesized prototypes used for classifier replay may cause the classifier to memorize wrong decision boundaries instead of preserving the original ones, leading to more severe forgetting. To solve the two problems described above, we propose a novel NECIL framework that exploits a natural advantage of vision transformers: their patch representations. Our method consists of patch-level knowledge selection and prototype restoration. On the one hand, the vision transformer computes patch-level representations for each input image.
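As background for the discussion above, the class-center prototype and the Gaussian augmentation used by PASS-style replay can be sketched as follows (a simplified NumPy illustration under our own naming; in PASS the augmentation radius is derived from feature statistics rather than fixed):

```python
import numpy as np

def class_centers(features, labels):
    # Prototype of a class = mean embedding over all its samples.
    return {int(c): features[labels == c].mean(axis=0) for c in np.unique(labels)}

def gaussian_augment(center, radius, n, rng):
    # PASS-style replay: perturb the stored class center with Gaussian
    # noise, implicitly assuming prototypes are normally distributed.
    return center + radius * rng.normal(size=(n, center.shape[0]))

rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 8))        # toy [CLS] embeddings
labels = np.repeat(np.arange(4), 5)     # 4 classes, 5 samples each
centers = class_centers(feats, labels)
replayed = gaussian_augment(centers[0], radius=0.1, n=6, rng=rng)
```

The normality assumption baked into `gaussian_augment` is precisely what the prototype restoration module replaces with offsets measured from real current-task samples.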
According to the similarity between each patch embedding and the [CLS] token embedding, the model estimates the relevance of each patch to the task and applies weighted knowledge distillation. For foreground patches, the model reduces the intensity of regularization and is encouraged to stay plastic; background patches, which typically have little relevance to the task, are instead used to preserve consistent representations across historical versions of the model. On the other hand, revisiting the hypothesis in PASS that prototypes follow a shared (normal) distribution, we expand prototype replay into two steps. First, we explicitly compute the offset of each sample to its class center (i.e., its prototype) and regularize these offsets to follow a task-agnostic distribution. Then we use this property to restore prototypes of old tasks from current-task prototypes and the stored old class centers. These operations mitigate the inaccurate restored prototypes caused by the Gaussian assumption in PASS and yield more realistic prototypes for classifier replay, thus relieving classifier bias and forgetting. Our main contributions can be summarized as follows: (i) We propose a novel vision transformer framework for NECIL, in which patch-level knowledge selection can be naturally applied to achieve a better trade-off between network plasticity and stability. (ii) We adopt a novel prototype restoration strategy that generates more realistic synthesized prototypes to alleviate forgetting in the classifier. (iii) Extensive experiments on CIFAR-100, TinyImageNet, and ImageNet-Subset demonstrate the effectiveness and superiority of our framework; each component of our method can be easily applied to related works with remarkable performance gains.

Related Work

Incremental Learning.
Incremental learning involves learning sequential tasks with different knowledge, and has drawn much attention in recent years, producing a variety of methods (Belouadah, Popescu, and Kanellos 2021; Delange et al. 2021; Liu et al. 2018, 2022; Zhou et al. 2023). To overcome catastrophic forgetting caused by insufficient access to old-task data, iCaRL (Rebuffi et al. 2017) uses knowledge distillation between the class logits of the current and old models. PODNet (Douillard et al. 2020) further applies distillation to each intermediate feature map in the backbone. Recently, the vision transformer has become popular in image classification and derived tasks such as class incremental learning (CIL). DyTox (Douillard et al. 2022) replaces the CNN backbone with a vision transformer and introduces task-relevant embeddings for adapting the model to incremental tasks. L2P and DualPrompt (Wang et al. 2022b,a) guide pre-trained transformer models through dynamic prompts, enabling them to learn tasks sequentially. Besides, (Zhai et al. 2023) introduces a masked autoencoder to expand the replay buffer for class incremental learning. In this paper, we rethink the characteristics of the vision transformer and bring a new insight to NECIL for alleviating catastrophic forgetting: fine-grained patch-level knowledge selection.

Non-exemplar Class Incremental Learning. Non-exemplar class incremental learning (NECIL) is preferred

Figure 2: Illustration of the fine-grained knowledge selection and restoration framework of our method.
The patch embeddings are regularized with different weights. We first train the current network to preserve a prototype distribution similar to the old one, and then restore old prototypes from the stored old class centers and current-task prototypes. Both kinds of prototypes are fed to the classifier to reduce task bias.

in scenarios where training data is sensitive and cannot be stored long-term. DAFL employs a GAN to generate synthetic samples for past tasks, avoiding the need to store actual data (Chen et al. 2019). DeepInversion, another NECIL method, generates images by inverting trained networks from random noise (Yin et al. 2020). SDC addresses the semantic drift that training on new tasks induces on old samples by estimating and exploiting prototype drift (Yu et al. 2020). Methods like PASS and IL2A (Zhu et al. 2021b,a) offer efficient NECIL by generating prototypes of old classes without retaining original images. SSRE introduces a reparameterization method balancing old and new knowledge, and self-training leverages external data as an alternative for NECIL (Zhu et al. 2022; Hou, Han, and Cheng 2021; Yu, Liu, and Van de Weijer 2022). Our method splits prototypical replay in NECIL into a task-relevant prototype center and a task-agnostic prototype offset: we first supervise the model to produce task-agnostic offsets, then use them to restore old class prototypes. This improves replay quality over the standard scheme in PASS (Zhu et al. 2021b).

Method

Preliminaries

Problem Definition and Analysis. Class-incremental learning sequentially learns different tasks, none of which shares classes with previous ones. Let t ∈ {1, 2, ..., T} denote the incremental learning tasks, where T is the total number of tasks. The training data D_t contains classes C_t with N_t training samples {(x_t^i, y_t^i)}_{i=1}^{N_t}, where x_t^i denotes an image and y_t^i ∈ C_t its class label.
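For instance, the disjoint-class task stream can be built by partitioning the label set. The sketch below (our own helper, not from the paper) follows the F + C × T protocol used later in the experiments, e.g. 50 + 5 × 10 on CIFAR-100:

```python
def incremental_split(num_classes, base, per_task):
    # Split class ids into a base task plus equal-sized incremental tasks,
    # so that no two tasks share a class (the F + C x T protocol).
    classes = list(range(num_classes))
    tasks = [classes[:base]]
    for start in range(base, num_classes, per_task):
        tasks.append(classes[start:start + per_task])
    return tasks

# 50 base classes + 10 incremental tasks of 5 classes each (CIFAR-100).
tasks = incremental_split(100, base=50, per_task=5)
```

At step t the model sees only the samples whose labels fall in `tasks[t]`, never those of earlier tasks.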
Most class-incremental learning networks can be split into two components: a feature extractor F_θ and a classifier G_φ, which grows with each new task t+1 to include the classes C_{t+1}. The input x is converted to a deep feature vector z = F_θ(x) ∈ R^d by the feature extractor, and the unified classifier G_φ(z) ∈ R^{|C_t|} learns a probability distribution over the classes C_t to predict the label of x. Class-incremental learning requires the model to classify samples from all previously learned tasks at any point of training: while performing task t, the model should retain its ability to classify samples from classes belonging to tasks t' < t. On top of these requirements, non-exemplar class-incremental learning (NECIL) imposes the additional constraint that the model must learn each new task without using any samples from previous tasks. Most related methods are supervised with a fundamental objective that minimizes a loss L_t^{CIL} defined on the current training data D_t:

L_t^{CIL}(x, y) = L_t(G_{φ_t}(F_{θ_t}(x)), y).  (1)

Vision Transformer Architecture. DyTox (Douillard et al. 2022) has shown that the vision transformer is effective for CIL thanks to its dynamic task-relevant token, which can be readily adapted across tasks. In this paper, we identify an essential characteristic of the vision transformer that benefits CIL and adaptively alleviates forgetting on new tasks: its patch-level representation of images. We briefly review the vision transformer as follows. It first crops the input image x into K × K non-overlapping patches; we denote the number of patches in the full image by N. The patches are then mapped to visual embeddings of dimension d by an MLP layer. After concatenating a class token [CLS] of shape R^d, the result is a tensor of size R^{(N+1)×d}. After positional encoding of the original patch locations, the input is passed to the transformer encoder.
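The patchification step can be illustrated with a small NumPy sketch (16 × 16 patches assumed for concreteness; the linear projection to dimension d and the prepended [CLS] token are omitted):

```python
import numpy as np

def patchify(img, patch):
    # Crop an HxWxC image into non-overlapping (patch x patch) squares,
    # each flattened to a vector; N = (H // patch) * (W // patch).
    h, w, c = img.shape
    blocks = img.reshape(h // patch, patch, w // patch, patch, c)
    return blocks.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
tokens = patchify(img, patch=16)   # 4 patch vectors of dimension 768
```

Each row of `tokens` would then pass through the embedding layer, giving the (N+1) × d input (with [CLS]) described above.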
Each transformer encoder block has two sequential components, a self-attention layer and a feed-forward layer, with LayerNorm applied before each of them. These operations keep the embedding shape R^{(N+1)×d} and yield patch-level representations for each region. The class token embedding of the output is used for classification with a cross-entropy loss L_t^{CIL}. We adopt a linear classifier with softmax to predict the probability of each learned class.

Patch-Level Knowledge Selection (PKS)

When learning task t, only D_t is available to the model. Vanilla knowledge distillation in previous NECIL methods such as PASS and SSRE directly maintains inter-task stability with current data. This operation ignores the semantic gap between current-task samples and old ones, leading to suboptimal alleviation of forgetting on old tasks. To overcome this problem, we rethink the use of knowledge distillation on patchified images. A natural idea is to assign a different distillation weight to each patch according to its importance to the classification task: patches in foreground regions often contain more task-relevant context, while background patches mostly carry task-agnostic pixels with random information. To realize this bilateral strategy for balancing plasticity and stability on D_t, we implement it in two steps: (a) define a metric that evaluates the relevance of each patch to the current task t, and (b) apply patch-level knowledge selection to each patch with these patch-specific weights between the current and old models. For simplicity and without loss of generality, we adopt the L2 distance between the embedding P_{t,cls} of the [CLS] token and each image patch embedding P_{t,i} to measure importance to task t:

W_i = 1 / (||P_{t,cls} − P_{t,i}||_2 + ε).
(2)

We assign a larger W_i to patches that are closer to the [CLS] token, and divide each W_i by the maximum value for normalization, obtaining w_i. The constant ε is set to 1e−8 to avoid a zero denominator. The patch-level knowledge selection loss L_t^{pks} is defined as

L_t^{pks} = Σ_{i=1}^{N} w_i ||P_{t,i} − P_{t−1,i}||_2 + ||P_{t,cls} − P_{t−1,cls}||_2,  (3)

where P_{t,i} denotes the embedding computed for patch i by the model of task t, and we measure the L2 distance between the embeddings of the current model F_{θ_t} and the old model F_{θ_{t−1}}.

Prototype Restoration (PR)

We first define the prototype offset and the class center. The feature extractor F_θ computes a representation z ∈ R^d from an input image of task t, which is used to predict the class label with the classifier G_φ. For the vision transformer, we adopt the [CLS] token as the image representation for classification. Let N_{t,k}, X_{t,k}, and μ_{t,k} denote the number of samples, the image set, and the class center of class k in task t, with μ_{t,k} = (1/N_{t,k}) Σ_{i=1}^{N_{t,k}} F_θ(X_{t,k}^i), i.e., the average over all samples of the class. To introduce more realistic, forgetting-free sample-level prototypes for alleviating classifier bias, we restore the prototypes of old tasks from their class centers and current samples.

Algorithm 1: Pseudocode of Training Process
Input: The number of tasks T; training samples D_t = {(x^i, y^i)}_{i=1}^{N} of task t; class prototypes μ_{t,k} of class k in task t (maintained during training); initial parameters Θ_0 = {θ_0, φ_0} of a vision transformer feature extractor F_{θ_t} and a classifier G_{φ_t}. CE denotes the cross-entropy loss.
Output: Model Θ_T
1: for t ∈ {1, 2, ..., T} do
2:   Θ_t ← Θ_{t−1}
3:   while not converged do
4:     Sample (x, y) from D_t
5:     P_{t,i}, P_{t−1,i} ← F_{θ_t}(x), F_{θ_{t−1}}(x)
6:     L_t^{pks} ← Compute(P_{t,i}, P_{t−1,i}) with Eq. (3)
7:     O_t, O_{t−1} ← P_{t,y} − μ_{t,y}, P_{t−1,y} − μ_{t,y}
8:     L_t^{pr} ← L_{mse}(O_t, O_{t−1}) with Eq. (4)
9:     F_{t_old,y_old} ← O_t + μ_{t_old,y_old} with Eq. (5)
10:    L_t^{CIL} ← L_t^{CE}(G_{φ_t}(F_{t,y}, F_{t_old,y_old}), y, y_old)
11:    update Θ_t by minimizing L_t^{all} from Eq. (7)
12:  end while
13: end for

We divide prototype replay into two steps: 1) introduce supervision that makes the prototype offsets task-agnostic, and 2) use this property to restore old sample-level prototypes.

Supervision for Task-agnostic Prototype Offset. For the first step, we consider the models of the current and last task, F_{θ_t} and F_{θ_{t−1}}, and apply an offset regularization. Let bs denote the batch size. We take the training samples in a batch (x_i^t, y_i^t), i = 1, 2, ..., bs, and randomly split them into two subsets S_1 and S_2 of equal size ⌊bs/2⌋. We then compute the prototype offsets of the samples in the two subsets: {O_{t,i} = F_{θ_t}(x_i^t) − μ_{t,y_i^t} | i ∈ S_1} and {O_{t−1,i} = F_{θ_{t−1}}(x_i^t) − μ_{t,y_i^t} | i ∈ S_2}. We use the old model F_{θ_{t−1}}, which contains previous knowledge of the offset distribution, to compute the offsets of subset S_2. We randomly sample ⌊bs/2⌋ pairs of prototype offsets from the two sets and minimize the mean squared error between them:

L_t^{pr} = (1/sz) Σ_{(i_k, j_k) ∈ Idx} L_{mse}(O_{t,i_k}, O_{t−1,j_k}),  (4)

where sz = ⌊bs/2⌋, Idx = {(i_1, j_1), ..., (i_{sz}, j_{sz})} with i_k ∈ S_1 and j_k ∈ S_2, O_{t,i_k} denotes the i_k-th prototype offset from S_1, and O_{t−1,j_k} is defined analogously.

Restoration of Old Task Prototypes. We use the prototype offset of a current sample x_i^t to restore a prototype of an old task:

F_{t_old,k_old} = μ_{t_old,k_old} + (F_{θ_t}(x_i^t) − μ_{t,y_i^t}).  (5)

The second term in Eq. (5) is the offset computed from the current sample, where (x_i^t, y_i^t) is randomly selected from the current batch. It is fused into Eq.
(6) as follows:

L_t^{CIL} = L_t^{CE}(G_{φ_t}(F_{t,y}, F_{t_old,y_old}), y, y_old),  (6)

where L_t^{CE} is the cross-entropy (CE) loss. The overall training procedure is summarized in Algorithm 1.

Table 1: Average and last accuracy on CIFAR100, TinyImageNet, and ImageNet-Subset under different numbers of tasks (each cell: Avg / Last top-1 accuracy in %). Replay-based methods storing 20 exemplars from each previous class are denoted by †. The best overall results are in bold.

Method       Para.(M)  C100-5t      C100-10t     C100-20t     Tiny-5t      Tiny-10t     Tiny-20t     INSub-10t
E=20:
iCaRL-CNN†   11.2      51.07/40.12  48.66/39.65  44.43/35.47  34.64/22.31  31.15/21.10  27.90/20.46  50.53/41.08
iCaRL-NCM†   11.2      58.56/49.74  54.19/45.13  50.51/40.68  45.86/33.45  43.29/33.75  38.04/28.89  60.79/51.90
LUCIR†       11.2      63.78/55.06  62.39/50.14  59.07/48.78  49.15/37.09  48.52/36.80  42.83/32.55  66.16/56.21
EEIL†        11.2      60.37/52.35  56.05/47.67  52.34/41.59  47.12/34.24  45.01/34.26  40.50/30.14  63.34/54.19
RRR†         11.2      66.43/57.22  65.78/55.74  62.43/51.35  51.20/42.23  49.54/40.12  47.46/35.54  67.05/58.22
E=0:
LwF-MC       14.5      45.93/36.17  27.43/50.47  20.07/15.88  29.12/17.12  23.10/12.33  17.43/8.75   31.18/20.01
EWC          14.5      16.04/9.32   14.70/8.47   14.12/8.23   18.80/12.71  15.77/10.12  12.39/8.42   –
MUC          14.5      49.42/38.45  30.19/19.57  21.27/15.65  32.58/17.98  26.61/14.54  21.95/12.70  35.07/22.65
IL2A         14.5      63.22/55.13  57.65/45.32  54.90/45.24  48.17/36.14  42.10/35.23  36.79/28.74  –
PASS         14.5      63.47/55.67  61.84/49.03  58.09/48.48  49.55/41.58  47.29/39.28  42.07/32.78  61.80/50.44
SSRE         19.4      65.88/56.33  65.04/55.01  61.70/50.47  50.39/41.67  48.93/39.89  48.17/39.76  67.69/57.51
Ours         9.3       68.17/59.02  70.13/57.90  66.86/54.25  54.88/44.97  52.72/43.35  51.68/41.94  70.18/61.42

Learning Objective

The overall learning objective combines the classification loss, the sample prototype consistency loss, and patch-level knowledge selection:

L_t^{all} = L_t^{CIL} + λ_pks L_t^{pks} + λ_pr L_t^{pr}.
(7)

Experiments

Datasets. We conduct experiments on three datasets commonly used in previous works: CIFAR-100, TinyImageNet, and ImageNet-Subset. For each experiment, we first select a subset of classes as the base task and evenly split the remaining classes across the sequential tasks. This protocol is denoted F + C × T, where F, C, and T are the number of base-task classes, the number of classes per incremental task, and the number of incremental tasks, respectively. For CIFAR-100 and ImageNet-Subset we adopt three configurations: 50 + 5 × 10, 50 + 10 × 5, and 40 + 20 × 3. For TinyImageNet the settings are 100 + 5 × 20, 100 + 10 × 10, and 100 + 20 × 5.

Comparison Methods. We compare our method with other non-exemplar class incremental learning methods: SSRE (Zhu et al. 2022), PASS (Zhu et al. 2021b), IL2A (Zhu et al. 2021a), EWC (Kirkpatrick et al. 2017), LwF-MC (Rebuffi et al. 2017), and MUC (Liu et al. 2020). We also compare with several exemplar-based methods such as iCaRL (both nearest-mean and CNN versions) (Rebuffi et al. 2017), EEIL (Castro et al. 2018), and LUCIR (Hou et al. 2019).

Implementation Details. For the vision transformer, we use 5 transformer blocks for the encoder and 1 for the decoder, which is much more lightweight than the original ViT-Base. All transformer blocks have an embedding dimension of 384 and 12 self-attention heads. We train each task for 400 epochs. After task t, we save one averaged prototype (class center) per class.

Table 2: Comparisons of the average forgetting (%) with other methods on TinyImageNet (5, 10, and 20 tasks) and ImageNet-Subset (10 tasks).

Method   Tiny-5t  Tiny-10t  Tiny-20t  INSub-10t
LwF-MC   54.26    54.37     63.54     56.07
EWC      67.55    70.23     75.54     71.97
MUC      51.46    50.21     58.00     53.85
IL2A     25.43    28.32     35.46     32.43
PASS     18.04    23.11     30.55     26.73
SSRE     9.17     14.06     14.20     23.22
Ours     11.45    12.21     12.82     18.39

We set λ_pks and λ_pr to 10 in all experiments.
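Putting the pieces together, the loss terms can be sketched in NumPy as below (our own simplified rendering of Eqs. 2–4 and 7, with plain L2 norms, random toy tensors, a stand-in classification term, and λ_pks = λ_pr = 10 as stated above; the real method operates on ViT embeddings during training):

```python
import numpy as np

def patch_weights(p_cls, patches, eps=1e-8):
    # Eq. 2: inverse distance to [CLS], normalized by the maximum (w_i <= 1).
    w = 1.0 / (np.linalg.norm(patches - p_cls, axis=1) + eps)
    return w / w.max()

def pks_loss(cls_t, pat_t, cls_old, pat_old):
    # Eq. 3: weighted patch-level distillation plus [CLS] distillation.
    w = patch_weights(cls_t, pat_t)
    return float((w * np.linalg.norm(pat_t - pat_old, axis=1)).sum()
                 + np.linalg.norm(cls_t - cls_old))

def pr_loss(off_t, off_old):
    # Eq. 4: MSE between prototype offsets of the current and old models.
    return float(np.mean((off_t - off_old) ** 2))

rng = np.random.default_rng(0)
cls_t, cls_old = rng.normal(size=8), rng.normal(size=8)
pat_t, pat_old = rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
l_cil = 1.23  # stand-in for the cross-entropy term of Eq. 6
l_all = (l_cil
         + 10 * pks_loss(cls_t, pat_t, cls_old, pat_old)
         + 10 * pr_loss(rng.normal(size=(4, 8)), rng.normal(size=(4, 8))))
```

Note how patches far from the [CLS] embedding (likely background) get weights close to 1 and are distilled strongly, while near-foreground patches are regularized more lightly.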
We report three common CIL metrics: the average and last top-1 accuracy over all learned tasks after the final task, and the average forgetting over all classes learned up to task t. Let Acc_i denote the accuracy over all learned classes after task i. The average accuracy is Avg = (1/T) Σ_{i=1}^{T} Acc_i, and the last accuracy is Acc_T. Let a_{m,n} denote the accuracy on task n after learning task m. The forgetting f_k^i of task i after learning task k is f_k^i = max_{t ∈ {1, ..., k−1}} (a_{t,i} − a_{k,i}), and the average forgetting is F_k = (1/(k−1)) Σ_{i=1}^{k−1} f_k^i. We perform three runs of all experiments and report the mean performance.

Comparison with the State-of-the-Art

In Table 1 we compare our method with several non-exemplar and exemplar-based methods. In the non-exemplar setting, our method outperforms all previous related methods under every data split (5/10/20 tasks) on all three datasets. Taking 20 tasks as an example, we surpass the best non-exemplar method, SSRE, by 3.31% on the 20-task setting of CIFAR-100 (Last Avg accuracy).
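The accuracy and forgetting metrics defined earlier in this section can be computed as in the following sketch (our own toy numbers; `a[m][n]` holds the accuracy on task n after learning task m, and the maximum is taken only over models that have already learned task i):

```python
def average_accuracy(acc_per_task):
    # Avg = (1/T) * sum of Acc_i over all T steps; Last = Acc_T.
    return sum(acc_per_task) / len(acc_per_task), acc_per_task[-1]

def average_forgetting(a):
    # a[m][n]: accuracy on task n after learning task m (0-indexed,
    # lower-triangular). f_i = max_t a[t][i] - a[k-1][i], over t >= i.
    k = len(a)
    f = [max(a[m][i] for m in range(i, k - 1)) - a[k - 1][i]
         for i in range(k - 1)]
    return sum(f) / (k - 1)

avg, last = average_accuracy([70.0, 65.0, 60.0])
forgetting = average_forgetting([[80.0], [75.0, 70.0], [72.0, 66.0, 64.0]])
```

In the toy run, task 0 peaked at 80.0 and ends at 72.0 (forgetting 8.0) and task 1 drops from 70.0 to 66.0 (forgetting 4.0), so the average forgetting is 6.0.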
In addition, our method even achieves higher accuracy than all exemplar-based methods that store samples to alleviate forgetting. This observation also holds on the higher-resolution datasets, TinyImageNet and ImageNet-Subset. For the average forgetting reported in Table 2, our method outperforms most non-exemplar methods; the gap (up to 4.17%) is clearest on ImageNet-Subset. This demonstrates the superior performance of our method from another perspective during incremental training. We also show the accuracy curves in Figure 3: the proposed method (in red) declines at a lower rate throughout all training phases.

Figure 3: Results on TinyImageNet and ImageNet-Subset for different numbers of tasks (5, 10, and 20 phases). Our method outperforms the others.

Table 3: Ablation of each component of our proposed method on CIFAR-100 (top-1 accuracy in %). PKS and PR denote patch-level knowledge selection and prototype restoration, respectively.

Method          PKS  PR   5 tasks  10 tasks  20 tasks
PASS                      55.67    49.03     48.48
Baseline (ViT)            56.44    51.90     51.20
Baseline        ✓         58.27    56.82     52.87
Baseline             ✓    57.78    54.61     51.51
Baseline        ✓    ✓    59.02    57.90     54.25

Ablation Study

Each Component. Our method consists of two components: patch-level knowledge selection and prototype restoration. We analyze the effect of each in Table 3. One baseline is trained with vanilla knowledge distillation and prototype augmentation as in PASS; we consider vision transformer (ViT) training as another baseline. We observe that: (a) patch-level knowledge selection significantly improves performance, by 3.71%; (b) prototype restoration also yields gains by providing more realistic prototype replay; (c) the two components complement each other and achieve even higher performance together. This validates the significance of both patch-level knowledge selection and prototype restoration for the non-exemplar class incremental learning setting. To conduct experiments with the ViT backbone, we adopt two modules from PASS, prototype augmentation and self-supervision, as listed in the second row of Table 3. Our framework achieves similar or slightly higher performance than the original PASS network (ResNet-18; compare the first two rows), showing that the vision transformer can serve as a new baseline for further study. This also indicates that the gains come from our two proposed modules rather than from a stronger baseline. Since the Dynamic Structure Reorganization (DSR) in SSRE is designed for convolutional layers, we do not conduct experiments on it.

Table 4: Results of applying our method to DyTox on CIFAR-100: average accuracy (%), last accuracy (%), and forgetting F (%) in the 10- and 20-task scenarios.

Setting   N=10 Avg↑  Last↑  F↓     N=20 Avg↑  Last↑  F↓
DyTox     75.47      62.10  15.43  75.10      59.41  21.60
Ours      78.35      66.47  13.12  77.63      63.90  15.76

More Study of Patch-level Knowledge Selection.
Since our patch-level knowledge selection is a simple but effective extension of vanilla knowledge distillation for vision transformers, we apply it to DyTox and to exemplar-based settings to verify its general applicability across transformer methods and problem settings.

Figure 4: Visualization of our patch-level knowledge selection compared with vanilla knowledge distillation. Patches in white receive larger distillation weights for preserving stability.

Table 5: Weighting strategies on CIFAR-100 (top-1 accuracy in %). The first row sets all W_i to 1; the second row makes W_i proportional to the patch embedding's distance to the [CLS] embedding.

W_i                          5 tasks  10 tasks  20 tasks
1                            56.18    51.99     50.53
||P_{t,cls} − P_{t,i}||_2    53.81    49.78     48.31
Eq. 2 (Ours)                 59.02    57.90     54.25

For each experiment in Table 4, we store 20 exemplars of each learned class, following DyTox, for a fair comparison; the vanilla knowledge distillation is replaced with our patch-level knowledge selection. The proposed method outperforms the original DyTox by a large margin in average/last accuracy and forgetting, which further supports our insight on fine-grained patch distillation. Compared with vanilla knowledge distillation, our method preserves knowledge through adaptive per-patch weights, giving the model more flexibility to balance plasticity and stability. Furthermore, since patch-level knowledge selection assigns varying weights to each patch embedding during distillation, we also conduct comparative experiments to evaluate this strategy against two alternative settings: the first sets all distillation weights W_i to 1, and the second computes W_i = ||P_{t,cls} − P_{t,i}||_2 in place of Eq. 2.
According to the first setting in Table 5, applying the same weight to the distillation of all patches performs worse than ours (the third row); we assume this strict restriction preserves more information about previous tasks while hurting the learning ability on the current task. Assigning larger distillation weights to embeddings farther from the [CLS] embedding, the opposite of our method, leads to results inferior even to the baseline. Both settings demonstrate the importance of preserving plasticity on task-relevant patches and stability on task-agnostic patches.

Visualization of Patch-level Knowledge Selection. In Figure 4 we visualize examples of the actual weights applied by our patch-level knowledge selection across different tasks. The images are picked from ImageNet-Subset with the 10-task setting. Instead of using the same weight for every patch as in vanilla knowledge distillation, our method adaptively selects background patches for preserving stability while offering more plasticity on foreground patches to learn task-relevant knowledge.

Conclusions

This paper introduces a novel vision transformer framework for non-exemplar class incremental learning (NECIL) that reduces catastrophic forgetting and classifier bias. It utilizes patch embeddings to achieve a balanced stability-plasticity trade-off across image regions, employing different strategies for task-relevant and task-agnostic areas. A new prototype restoration module preserves the decision boundaries of old tasks without inducing classifier bias. The framework provides a potential baseline for future research.

Acknowledgments

This work is funded by NSFC (No. 62225604, 62206135, 62202331) and the Fundamental Research Funds for the Central Universities (Nankai University, 070-63233085). Computation is supported by the Supercomputing Center of Nankai University.
References

Belouadah, E.; Popescu, A.; and Kanellos, I. 2021. A comprehensive study of class incremental learning algorithms for visual tasks. Neural Networks, 135: 38–54.
Castro, F. M.; Marín-Jiménez, M. J.; Guil, N.; Schmid, C.; and Alahari, K. 2018. End-to-end incremental learning. In ECCV.
Chen, H.; Wang, Y.; Xu, C.; Yang, Z.; Liu, C.; Shi, B.; Xu, C.; Xu, C.; and Tian, Q. 2019. Data-free learning of student networks. In ICCV.
Delange, M.; Aljundi, R.; Masana, M.; Parisot, S.; Jia, X.; Leonardis, A.; Slabaugh, G.; and Tuytelaars, T. 2021. A continual learning survey: Defying forgetting in classification tasks. IEEE TPAMI.
Douillard, A.; Cord, M.; Ollion, C.; Robert, T.; and Valle, E. 2020. PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. In ECCV.
Douillard, A.; Ramé, A.; Couairon, G.; and Cord, M. 2022. DyTox: Transformers for continual learning with dynamic token expansion. In CVPR.
Gao, Q.; Zhao, C.; Ghanem, B.; and Zhang, J. 2022. R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning. In ECCV.
Goodfellow, I. J.; Mirza, M.; Xiao, D.; Courville, A.; and Bengio, Y. 2013. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211.
Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Hou, Q.; Han, L.; and Cheng, M.-M. 2021. Autonomous Learning of Semantic Segmentation from Internet Images (in Chinese). Sci Sin Inform, 51(7): 1084–1099.
Hou, S.; Pan, X.; Loy, C. C.; Wang, Z.; and Lin, D. 2019. Learning a unified classifier incrementally via rebalancing. In CVPR.
Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences.
Li, Z.; and Hoiem, D. 2017. Learning without forgetting. IEEE TPAMI.
Liu, X.; Hu, Y.-S.; Cao, X.-S.; Bagdanov, A. D.; Li, K.; and Cheng, M.-M. 2022. Long-tailed class incremental learning. In ECCV.
Liu, X.; Masana, M.; Herranz, L.; Van de Weijer, J.; Lopez, A. M.; and Bagdanov, A. D. 2018. Rotate your networks: Better weight consolidation and less catastrophic forgetting. In ICPR.
Liu, Y.; Parisot, S.; Slabaugh, G.; Jia, X.; Leonardis, A.; and Tuytelaars, T. 2020. More classifiers, less forgetting: A generic multi-classifier paradigm for incremental learning. In ECCV.
Liu, Y.; Schiele, B.; and Sun, Q. 2021. Adaptive aggregation networks for class-incremental learning. In CVPR.
Ma, C.; Ji, Z.; Huang, Z.; Shen, Y.; Gao, M.; and Xu, J. 2023. Progressive Voronoi Diagram Subdivision Enables Accurate Data-free Class-Incremental Learning. In ICLR.
McCloskey, M.; and Cohen, N. J. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of Learning and Motivation, volume 24, 109–165. Elsevier.
Rebuffi, S.-A.; Kolesnikov, A.; Sperl, G.; and Lampert, C. H. 2017. iCaRL: Incremental classifier and representation learning. In CVPR.
Robins, A. 1995. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science.
Wang, Z.; Zhang, Z.; Ebrahimi, S.; Sun, R.; Zhang, H.; Lee, C.-Y.; Ren, X.; Su, G.; Perot, V.; Dy, J.; et al. 2022a. DualPrompt: Complementary prompting for rehearsal-free continual learning. In ECCV.
Wang, Z.; Zhang, Z.; Lee, C.-Y.; Zhang, H.; Sun, R.; Ren, X.; Su, G.; Perot, V.; Dy, J.; and Pfister, T. 2022b. Learning to prompt for continual learning. In CVPR.
Yan, S.; Xie, J.; and He, X. 2021. DER: Dynamically expandable representation for class incremental learning. In CVPR.
Yin, H.; Molchanov, P.; Alvarez, J. M.; Li, Z.; Mallya, A.; Hoiem, D.; Jha, N. K.; and Kautz, J. 2020. Dreaming to distill: Data-free knowledge transfer via DeepInversion. In CVPR.
Yu, L.; Liu, X.; and Van de Weijer, J. 2022. Self-training for class-incremental semantic segmentation. IEEE TNNLS.
Yu, L.; Twardowski, B.; Liu, X.; Herranz, L.; Wang, K.; Cheng, Y.; Jui, S.; and Van de Weijer, J. 2020. Semantic drift compensation for class-incremental learning. In CVPR.
Zhai, J.-T.; Liu, X.; Bagdanov, A. D.; Li, K.; and Cheng, M.-M. 2023. Masked Autoencoders are Efficient Class Incremental Learners. In ICCV.
Zhou, D.-W.; Wang, Q.-W.; Qi, Z.-H.; Ye, H.-J.; Zhan, D.-C.; and Liu, Z. 2023. Deep class-incremental learning: A survey. arXiv preprint arXiv:2302.03648.
Zhu, F.; Cheng, Z.; Zhang, X.-Y.; and Liu, C.-L. 2021a. Class-Incremental Learning via Dual Augmentation. In NeurIPS.
Zhu, F.; Zhang, X.-Y.; Wang, C.; Yin, F.; and Liu, C.-L. 2021b. Prototype augmentation and self-supervision for incremental learning. In CVPR.
Zhu, K.; Zhai, W.; Cao, Y.; Luo, J.; and Zha, Z.-J. 2022. Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning. In CVPR.
Multi-Prompts Learning with Cross-Modal Alignment for Attribute-Based Person Re-identification

Yajing Zhai1,2*, Yawen Zeng1*, Zhiyong Huang3, Zheng Qin1†, Xin Jin2†, Da Cao1
1College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
2Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China
3National University of Singapore, NUS Research Institute in Chongqing
{yajingzhai9,yawenzeng11}@gmail.com, huangzy@comp.nus.edu.sg, zqin@hnu.edu.cn, jinxin@eitech.edu.cn, caoda0721@gmail.com

Abstract

Fine-grained attribute descriptions can significantly supplement the valuable semantic information of person images, which is vital to the success of the person re-identification (ReID) task. However, current ReID algorithms typically fail to effectively leverage the rich contextual information available, primarily due to their reliance on simplistic and coarse utilization of image attributes. Recent advances in artificial intelligence generated content have made it possible to automatically generate plentiful fine-grained attribute descriptions and make full use of them. This paper therefore explores the potential of using generated multiple person attributes as prompts in ReID tasks with off-the-shelf (large) models for more accurate retrieval results. To this end, we present a new framework called Multi-Prompts ReID (MP-ReID), based on prompt learning and language models, to fully exploit fine-grained attributes to assist the ReID task. Specifically, MP-ReID first learns to hallucinate diverse, informative, and promptable sentences for describing the query images. This procedure includes (i) explicit prompts stating which attributes a person has, and (ii) implicit learnable prompts for adjusting/conditioning the criteria used for person identity matching. Explicit prompts are obtained by ensembling generation models, such as ChatGPT and VQA models.
Moreover, an alignment module is designed to fuse the multi-prompts (i.e., explicit and implicit ones) progressively and mitigate the cross-modal gap. Extensive experiments on existing attribute-involved ReID datasets, namely Market1501 and DukeMTMC-reID, demonstrate the effectiveness and rationality of the proposed MP-ReID solution.

Introduction

Person re-identification (ReID) is a challenging task due to dramatic visual appearance changes from pose, viewpoint, illumination, occlusion, low resolution, background clutter, etc. (Jin et al. 2020a; Ye et al. 2021; Zhang, Zhang, and Liu 2021). Fine-grained person attributes are robust to these variations and are often exploited as efficient supplements with local descriptions that aid in the learning of more discriminative feature representations (Jia et al. 2022; Wang et al. 2022a).

*These authors contributed equally. This work was done when Yajing Zhai was an intern at Eastern Institute of Technology, Ningbo, China.
†Corresponding authors.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

[Figure 1: Comparison of various usages of attributes for person ReID: (a) retrieval results with limited coarse-grained attributes (e.g., "yellow, shirt, black, backpack, dark, shorts"); (b) retrieval results with only implicit attribute prompts ("A photo of a [X][X][X][X] person."); (c) retrieval results with multiple fine-grained attribute prompts from ChatGPT and VQA. Red boxes represent negative images, green boxes indicate positive results, and key attribute words are marked in red. We can see that using multiple fine-grained human attributes as prompts in ReID brings improvements.]
In particular, common attributes include clothing color, shoes, hairstyle, gender, age, and other specific characteristics. They serve as additional information that complements and aligns with images, reducing the impact of the above factors and thereby improving the overall performance of ReID (Yu et al. 2022). Recently, some researchers (Jia, Chen, and Huang 2021; Niu et al. 2022; Specker, Cormier, and Beyerer 2023; Zheng et al. 2022) began to investigate the importance of attributes w.r.t. the ReID task, demonstrating that attributes are indeed an effective piece of information that can enhance retrieval performance in ReID. However, current attribute-based ReID algorithms fail to leverage the full potential of the abundant contextual information available. This is mainly because they rely on a simplistic and naive utilization of coarse-grained attributes, and because past AI technology limited the ability to accurately capture and describe such attributes. As shown in Figure 1a, certain coarse, separate, and ambiguous attributes, such as "yellow", "shorts", and "shirt", are directly used to retrieve pedestrians, which is less effective than the clear, complete, and abundant contextual descriptions presented in Figure 1c. Thus, it is essential, but has not been well investigated, to efficiently take full advantage of fine-grained attribute information for improving ReID accuracy. With the fast development of large models (Fu et al. 2022; Jin et al. 2022; Li et al. 2023), ReID methods gradually become more practical for real-world scenarios and gain superior performance. Besides, prompt learning (Wu et al. 2022; Zhou et al. 2022b), a paradigm of strategies that leverages pre-trained models by incorporating additional textual description information, has achieved improved performance in many complex AI tasks (Zeng 2022; Lüddecke and Ecker 2022; Liu et al. 2022).
Building upon this inspiration, we investigate the feasibility of utilizing prompts to provide fine-grained attribute information for the ReID task. Intuitively, there are two strategies for applying attributes as prompts, explicit attribute prompts and implicit attribute prompts, as shown in Figure 1. (i) Explicit attribute prompts refer to an attribute-based sentence prompt generation method, where the production process utilizes attribute words; ChatGPT and visual question answering (VQA) models (Yu et al. 2019; Wang et al. 2022b), with their stronger interactivity and feedback mechanisms, are often used. (ii) Implicit attribute prompts use a learnable textual prompt generation method, where the process does not involve specific attribute information, as depicted in Figure 1b. We can see that a better retrieval result is obtained via the implicit attribute prompt method, but it is still not accurate enough. In contrast, as shown in Figure 1c, a ReID scheme that learns from multiple attribute prompts significantly improves retrieval performance with more fine-grained information. From this we can infer that utilizing fine-grained attribute information enables the ReID model to learn more auxiliary features and relationships, thereby improving the ultimate accuracy. However, prominent challenges remain to be addressed. Firstly, the lack of a suitable prompt-related ReID dataset for the large-scale practical ReID task means few studies have explored this direction. Secondly, there is a gap between the attribute-based text prompts and the images, making it essential to address the alignment of these two modalities. As a result, although utilizing rich prompts for improved ReID performance is a promising approach that can lead to efficient and comprehensive results, it remains an under-explored area with potential for further optimization.
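To make the two prompt styles concrete, the sketch below builds an explicit prompt sentence from attribute words (a simplified, template-based stand-in for the ChatGPT/VQA generation used in the paper) and an implicit prompt template whose [X] tokens would be learnable embeddings in practice. The function names are illustrative, not from the paper.

```python
def explicit_prompt(attributes):
    """Explicit attribute prompt: turn separate attribute words into one
    fluent sentence (a toy stand-in for ChatGPT-based generation)."""
    return "The person is wearing " + ", ".join(attributes) + "."

def implicit_prompt(num_tokens=4):
    """Implicit attribute prompt: a fixed template around learnable
    context tokens [X]1..[X]M; here the tokens are mere placeholders,
    while in the actual method they are trainable vectors."""
    tokens = " ".join(f"[X]{i + 1}" for i in range(num_tokens))
    return f"A photo of a {tokens} person."
```

The explicit form carries concrete semantics per image, while the implicit form is shared across the dataset and only becomes informative through training.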
In this paper, we make the first attempt to employ large-scale multi-prompts information in the attribute-based ReID task and propose a novel Multi-Prompts Learning framework, dubbed MP-ReID, to support this challenging task. MP-ReID aims to retrieve a person based on a variety of fine-grained attribute information as a complement to image information, improving retrieval performance with ChatGPT, VQA, and CLIP (Radford et al. 2021). As mentioned above, the multi-prompts include explicit attribute prompts and implicit attribute prompts. 1) Explicit attribute prompts follow a prompt sentence generation paradigm that ensembles generation models based on attribute words. 2) Implicit attribute prompts follow a learnable prompt paradigm without intuitive attributes, which models a prompt's context words with learnable vectors that can be initialized with either random values or pre-trained word embeddings. Furthermore, image information with the promptable semantic feature is optimized in a cross-modal space to mitigate the cross-modal semantic gap. After that, the learned prompts are regarded as a booster for person retrieval. By conducting experiments on two well-known datasets, we validate that MP-ReID is superior to various existing methods. The main contributions of this work are summarized as follows:

• To the best of our knowledge, this is the first attempt to introduce the concept of multi-prompts learning strategies to generate diverse, informative, and promptable sentences for ReID improvements.
• We introduce two prompt generation paradigms, explicit attribute prompts and implicit attribute prompts, for applying fine-grained attributes to fully use the comprehensive semantics and enhance retrieval performance.
• We contribute a Multi-Prompts ReID framework, dubbed MP-ReID, to mitigate the cross-modal semantic gap for this attribute-based ReID task.
Meanwhile, we collect a prompt-related ReID dataset containing multiple attribute prompts about the same person, and we have released the dataset to facilitate the research community1.

Related Work

Prompt Learning

In recent years, prompt learning, which concerns providing suggestive information, has become a popular technique for incorporating knowledge in natural language processing problems (Petroni et al. 2019; Song et al. 2022; Jin et al. 2023). It involves adding language-specific instructions to the input text, enabling the pre-trained model to comprehend the downstream task and enhance performance. ChatGPT2 and GPT-43 offer tremendous opportunities to improve open-source large language models using instruction-tuning (Peng et al. 2023) and to transfer to downstream tasks with powerful generalization (Zhang et al. 2023). Moreover, there has been a recent trend towards utilizing prompt learning to improve the quality of visual representations in vision-language models (Ju et al. 2022; Rao et al. 2022).

1https://github.com/zyj20/MPReID.
2https://openai.com/blog/chatgpt
3https://openai.com/product/gpt-4

[Figure 2: The graphical representation of MP-ReID (components: CLIP text/image encoders, explicit prompts from ChatGPT and VQA, implicit learnable prompts, multi-modal Transformer, person retrieval). Under the prompt learning paradigm, the multi-prompts generated by ChatGPT and VQA are regarded as the textual input to the multi-modal Transformer, which can enhance the retrieval of the matching person images.]
It is built upon three components: 1) multi-prompts generation learning; 2) cross-modal alignment; and 3) person retrieval.

Attribute-based Person ReID

Recently, several deep learning methods have been proposed to exploit discriminative attributes. In particular, Lin et al. (Lin et al. 2019) manually labeled pedestrian attributes for the Market1501 dataset and the DukeMTMC-reID dataset4, and proposed a novel attribute-based person recognition framework with an attribute re-weighting module, which aims to learn discriminative embeddings and correct predictions. Zhang et al. (Zhang, Niu, and Zhang 2020) leveraged a feature aggregation strategy to make use of attribute information. Jeong et al. (Jeong, Park, and Kwak 2021) presented a new loss for learning cross-modal embeddings in the context of attribute-based person search and regarded an attribute dataset as a category of people sharing the same traits. Li et al. (Li, Sun, and Li 2023) fully exploited cross-modal description ability through a set of learnable text tokens for each person ID, feeding them to the text encoder to form ambiguous descriptions with a two-stage strategy and facilitating a better visual representation. Inspired by the above work, we optimize the descriptive and visual features under the multi-prompts generation paradigm for the ReID task, which contains explicit prompts and implicit prompts. In this way, textual prompts and visual features are learned from each other, achieving a win-win effect.

Methodology

This section provides a detailed explanation of our solution, with Figure 2 illustrating the overall framework of MP-ReID. Generally speaking, our proposed framework comprises three components: multi-prompts generation, cross-modal alignment, and person retrieval. 1) The multi-prompts generation learning approach leverages ChatGPT, VQA, and learnable methods to generate three different prompts, as shown in Figure 3.

4https://vana77.github.io
These prompts are then fused together using a cross-attention mechanism. 2) The cross-modal alignment module aligns prompt-image pairs by feeding them into a multi-modal Transformer to learn the context. 3) Person retrieval involves creating a feature representation in the prompt-visual space for identifying individuals.

Multi-Prompts Generation Learning

Given an attribute-based ReID dataset, images are defined as M = {m_1, m_2, ..., m_n} and the corresponding attributes are denoted as A = {a_1, a_2, ..., a_n}. MP-ReID first generates prompts P = {p_1, p_2, ..., p_n}, which contain ensembled explicit prompts P^e_i and learnable implicit prompts P^l_i. For the visual and prompt representations, we adopt the image encoder and the text encoder from CLIP as the backbone feature extractors, respectively. They are both implemented as Transformer architectures (He et al. 2021b; Zhu et al. 2023), and the ViT-B/16 network architecture (Dosovitskiy et al. 2021), which contains 12 transformer layers, is utilized for the images. With respect to prompt embedding, we convert each word into a unique numeric ID using byte pair encoding with a 49,512 vocab size (Sennrich, Haddow, and Birch 2016). To enable parallel computation, we set the context length of each text sequence to 77, including the start [SOS] and end [EOS] tokens. Within a batch of images, we denote the index of each image as i ∈ {1...N}. We calculate the similarity between the [CLS] token embedding of the image feature m_i and the corresponding [EOS] token embedding of the text feature p_i. In this module, we obtain the image feature f^m_i as

f^m_i = F_m(m_i).  (1)
[Figure 3: The process of multi-prompts generation learning in the proposed MP-ReID framework. The top shows the question bank (e.g., "What kind of shirt is the person wearing?"), the declarative templates (e.g., "the person is wearing [MASK].", "the person is carrying [MASK].", "the person has [MASK]."), and the answer bank (e.g., "beard", "striped", "umbrella", "plaid") of the VQA model; the bottom shows the concrete multi-prompts generation from the VQA prompt, ChatGPT prompt, and learnable prompt ([X]1[X]2...[X]M), respectively.]

Accordingly, for the textual representation, we obtain the prompt feature f^p_i, which is formally formulated as

f^p_i ← { f^e_i = F_p(p^e_i), f^l_i = F_p(p^l_i) },  (2)

where F_m(·) and F_p(·) are the visual and textual projection functions, and f^m_i, f^e_i, and f^l_i are the extracted image features, explicit prompt features, and implicit prompt features, respectively. The ensembled explicit prompts P^e_i comprise the ChatGPT generation prompt P^c_i and the VQA generation prompt P^v_i.

1) Explicit attribute prompts. Specifically, to generate explicit attribute prompts, we adopt a prompt ensembling strategy that utilizes both ChatGPT and VQA models. Firstly, we establish the criteria and guidelines for generating prompt sentences that align with our desired outcome, and configure ChatGPT to utilize the specified instructions for sentence creation. This approach necessitates the usage of attribute words to generate prompts. Subsequently, we transmit these attribute words to ChatGPT, which leverages its large-model pre-training and prompt learning to automatically generate sentence prompts. Moreover, we design seven related questions and prompt sentence templates to cover as much information as possible about a person, aiming to obtain attributes from VQA that are not included in the prompts from ChatGPT.
The <question, answer> pairs obtained by a VQA model called MCAN (Yu et al. 2019) are then converted into seven prompts corresponding to the image. This kind of prompt is especially applicable when attributes cannot be obtained easily. For instance, we can ask questions such as: "Is the person wearing a tie?", "Is the person wearing a watch?", and "What kind of shirt is the person wearing?". Next, we randomly assign questions from the question bank to each image and generate several attribute answers. We then pass these answer attributes through pre-designed declarative sentence templates and fill in the relevant words to create sub-prompt sentences. Finally, we generate ChatGPT prompts and VQA prompts for the 1,501 identities in the Market1501 dataset and for the 1,404 identities in the DukeMTMC-reID dataset, respectively.

2) Implicit attribute prompts. The implicit prompt strategy in MP-ReID uses a learnable prompt approach that does not require intuitive attributes. We call it "implicit" because these learnable prompts are training-dataset-specific common text descriptions that do not directly correspond to a sample. Based on CoOp (Zhou et al. 2022b,a) and CLIP-ReID (Li, Sun, and Li 2023), the implicit prompt mainly aims to generate concrete text descriptions through a set of learnable text tokens for fine-grained ReID tasks. That is to say, it provides attention clues that are somewhat relevant to the task: for instance, it lets the network focus on the human body via "the photo is a [x][x][x][x] person", rather than a simple/general "the photo is a [x][x][x][x]".

Cross-Modal Alignment

Another technical challenge is how to utilize multi-prompts and alleviate their gaps efficiently. To address it, as shown in Figure 2, we propose the second component of MP-ReID, cross-modal alignment, which eases the modality gap between textual prompt features and visual features.
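The question-bank/template step can be sketched as follows. The question list and templates here are illustrative examples in the spirit of the paper's seven questions, and the mapping and helper names are assumptions, not the authors' code.

```python
# Hypothetical declarative templates keyed by VQA question.
QUESTION_TO_TEMPLATE = {
    "Is the person wearing glasses?": "The person {} glasses.",
    "What kind of shirt is the person wearing?": "The person is wearing {} shirt.",
}

def vqa_to_prompt(question, answer):
    """Convert a <question, answer> pair from the VQA model into a
    declarative sub-prompt sentence via a pre-designed template."""
    template = QUESTION_TO_TEMPLATE[question]
    if question.startswith("Is "):
        # yes/no questions: turn the answer into a verb phrase
        phrase = "is wearing" if answer.lower() == "yes" else "is not wearing"
        return template.format(phrase)
    # open questions: fill the answer word into the template slot
    return template.format(answer)
```

Each image's assigned questions are answered by the VQA model and passed through this conversion to produce its set of VQA sub-prompts.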
Furthermore, similarity learning is used to determine whether feature vectors belong to the same person or not:

sim(M_i, P_i) = M_i · P_i = u_M(m_i) · u_P(p_i),  (3)

where u_M(·) and u_P(·) are linear layers projecting embeddings into a cross-modal embedding space.

1) Aligning for explicit attribute prompts. In this module, we first perform a cross-attention operation (Chen et al. 2022) between the image and both the ChatGPT generation prompt P^c_i and the VQA generation prompt P^v_i encoded by the CLIP text encoder. Specifically, the data is sent sequentially to the cross-attention module for processing. To integrate the prompts f^p_i and images f^m_i more effectively, the textual prompt feature serves as the query (Q_i), while the image feature and the prompt feature are concatenated and subsequently utilized as the key (K_i) and value (V_i). Afterwards, we combine the two resulting explicit prompt features, namely the ChatGPT prompt f^c_i and the VQA prompt f^v_i, via concatenation (Zhai et al. 2022). Finally, the representation of the explicit prompts is an attentive combination of the ChatGPT prompts' and VQA prompts' representations, and f^e_i is formulated as

f^e_i = MLP(Concat(f^c_i, f^v_i)).  (4)

Then we construct a multi-modal Transformer model that combines prompt and image features to unify them into a cross-modal space in which they can be aligned (Luo et al. 2019).
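A minimal single-head sketch of this cross-attention step, assuming one feature vector per modality and plain-Python math (the actual module operates on token sequences and learned projections):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross_attention(query, keys, values):
    """Single-head cross-attention: the prompt feature (query) attends
    over key/value vectors; here keys == values == [image ; prompt]."""
    d = len(query)
    weights = softmax([dot(query, k) / math.sqrt(d) for k in keys])
    fused = [0.0] * len(values[0])
    for w, v in zip(weights, values):
        for j, vj in enumerate(v):
            fused[j] += w * vj
    return fused

# Toy 2-d features: the prompt feature acts as the query, while the
# image and prompt features together serve as key and value.
prompt_feat = [1.0, 0.0]
image_feat = [0.0, 1.0]
kv = [image_feat, prompt_feat]
fused = cross_attention(prompt_feat, kv, kv)
```

The fused output mixes image and prompt information in proportion to how strongly the prompt query matches each key, which is the mechanism Eq. 4 builds on before the MLP fusion.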
After each of them receives its respective new features, the obtained features are sequentially fed into the Transformer model together with the image features, yielding f^s_i. To further enhance performance, we use a cross-entropy loss L_cls on the CLS token f^CLS_i, which is responsible for the classification representation of the prompts and images; q_k is the value in the target distribution:

f^s_i = [f^CLS_i, f^e_i, f^m_i],  (5)

L_cls = Σ_{k=1}^{N} −q_k log(MLP(f^s_i)).  (6)

We also design an image-to-prompt contrastive loss L_m2p, which is calculated as follows:

L_m2p(i) = −log [ exp(sim(M_i, P_i)) / Σ_{a=1}^{N} exp(sim(M_i, P_a)) ].  (7)

As for the explicit prompt, the prompt-to-image contrastive loss L_p2m is formulated as

L_p2m(i) = −log [ exp(sim(M_i, P_i)) / Σ_{a=1}^{N} exp(sim(M_a, P_i)) ].  (8)

Equations (7) and (8) use the similarity of the two embeddings from a matched pair.

2) Aligning for implicit attribute prompts. The implicit prompts P^l_i are designed as "A photo of a [X]_1[X]_2[X]_3...[X]_T person", where each [X]_t (with t ∈ 1...T) is a learnable text token with the same dimension as the word embedding, and T represents the number of learnable prompt tokens. Notably, the parameters in X can be trained.

Category        | Method    | Reference  | Market1501 (mAP / R@1) | DukeMTMC-reID (mAP / R@1)
Image-based     | SAN       | AAAI 2020  | 88.00 / 96.10          | 75.50 / 87.90
                | PAT       | CVPR 2021  | 88.00 / 95.40          | 78.20 / 88.80
                | TransReID | ICCV 2021  | 88.90 / 95.20          | 82.00 / 90.70
                | MSDPA     | MM 2022    | 89.50 / 95.40          | 82.80 / 90.90
                | DCAL      | CVPR 2022  | 87.50 / 94.70          | 80.10 / 89.00
Attribute-based | AANet     | CVPR 2019  | 66.89 / 87.04          | 55.56 / 73.92
                | AMD       | ICCV 2021  | 88.64 / 95.94          | 78.26 / 89.21
                | UCAD      | IJCAI 2022 | 79.50 / 92.60          | – / –
                | UPAR      | WACV 2023  | 40.60 / 55.40          | – / –
                | CLIP-ReID | AAAI 2023  | 89.60 / 95.50          | 82.50 / 90.00
Ours            | MP-ReID   | –          | 95.50 / 97.70          | 88.90 / 95.70

Table 1: Performance comparison of various state-of-the-art baselines on both datasets.
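The bidirectional contrastive objective of Eqs. 7 and 8 can be sketched in a few lines; `sim` is assumed to be a precomputed image-by-prompt similarity matrix, and this is an illustration rather than the authors' implementation.

```python
import math

def contrastive_losses(sim):
    """sim[i][j] = similarity between image i and prompt j.
    Returns the image-to-prompt (Eq. 7) and prompt-to-image (Eq. 8)
    losses averaged over the batch; each matched pair (i, i) competes
    against all other prompts (rows) or images (columns)."""
    n = len(sim)
    l_m2p = 0.0
    l_p2m = 0.0
    for i in range(n):
        row = sum(math.exp(sim[i][a]) for a in range(n))  # fix image i
        col = sum(math.exp(sim[a][i]) for a in range(n))  # fix prompt i
        l_m2p += -math.log(math.exp(sim[i][i]) / row)
        l_p2m += -math.log(math.exp(sim[i][i]) / col)
    return l_m2p / n, l_p2m / n
```

Both directions are minimized jointly, pushing matched image-prompt pairs together while separating mismatched ones in the shared space.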
We can use the obtained implicit prompt features to calculate the image-to-prompt cross-entropy L_m2pce:

L_m2pce(i) = Σ_{k=1}^{N} −q_k log [ exp(sim(M_i, P_i)) / Σ_{a=1}^{N} exp(sim(M_i, P_a)) ].  (9)

Finally, in this module, the losses are summarized as follows:

L_align = L_cls + L_m2p + L_p2m + L_m2pce.  (10)

Person Retrieval

Through the above steps, we employ the Euclidean distance to calculate the distance score between query images and gallery images, so that a higher score is generated for a positive pair of person images than for negative counterparts. To optimize ReID models, two loss functions are introduced: a triplet loss L_tri (Hermans, Beyer, and Leibe 2017) and an ID loss L_id (Zheng et al. 2017). The triplet loss minimizes the distance between images of the same person while maximizing the distance between images of different people. The ID loss, on the other hand, optimizes for correct identity predictions with smoothed label information. By utilizing both the triplet and ID losses, the model simultaneously reduces intra-class distances and increases inter-class distances, resulting in improved accuracy in re-identifying individuals:

L_id = Σ_{k=1}^{N} −q_k log(y_k),  (11)

L_tri = max(d_p − d_n + α, 0),  (12)

where y_k represents the ID prediction logit of class k, d_p and d_n are the feature distances of the positive and negative pairs, and α is the margin of the triplet loss. Overall, the ReID objective of MP-ReID is

L_reid = λ_id L_id + λ_tri L_tri,  (13)

where λ_tri is the balance factor of the triplet loss and λ_id is the balance factor of the ID loss. Ultimately, the overall loss function used in MP-ReID is

L = L_align + L_reid.  (14)

Experiments

Experimental Settings

Dataset. In this paper, we evaluate the proposed MP-ReID method on two well-known ReID datasets: Market1501 (Zheng et al.
2015) and DukeMTMC-reID (Zheng, Zheng, and Yang 2017), as well as the manually annotated attribute datasets associated with them (Lin et al. 2019).

Strategy       | Market1501 (mAP / R@1 / R@5 / R@10) | DukeMTMC-reID (mAP / R@1 / R@5 / R@10)
LP (baseline)  | 89.60 / 95.50 / – / –               | 82.50 / 90.00 / – / –
LP + AW        | 89.40 / 95.60 / 96.90 / 97.30       | 83.90 / 92.30 / 96.50 / 97.20
LP + GC        | 86.30 / 94.00 / 97.60 / 98.60       | 78.10 / 88.60 / 93.60 / 95.10
LP + VP        | 87.60 / 94.20 / 97.50 / 98.70       | 78.20 / 84.70 / 93.20 / 94.80
LP + CP        | 90.20 / 95.90 / 98.80 / 99.30       | 87.20 / 94.50 / 97.60 / 98.30
LP + CP & VP   | 95.50 / 97.70 / 99.20 / 99.50       | 88.90 / 95.70 / 98.00 / 98.70

Table 2: Ablation study of prompt strategies for MP-ReID on both datasets. ("LP" denotes learnable prompts, "AW" coarse and separate attribute words, "GC" generation captions, "VP" VQA prompts, and "CP" ChatGPT prompts.)

Evaluation Protocols. To evaluate the performance of our approach, we employ Rank@k and mean average precision (mAP) as the evaluation metrics for all experiments on the two datasets (Wang et al. 2021; Farooq et al. 2022). Higher values indicate better performance.

Implementation Details. We run our method on a server equipped with an NVIDIA GeForce RTX 2080 Ti GPU. We use Transformer-based models; the learning rate is 5 × 10−7 with linear growth, and the warm-up period is set to 10 to make the model converge faster. In our implementation, we set S = 16 and K = 4 to enable our model to learn from multiple identities and samples per identity. For feature extraction, prompt features and image features are represented as 512-dimensional vectors. Furthermore, we set the ID loss balance factor λ_id to 0.25 as a regularization strategy, while λ_tri and the weight of L_align are set to 1. Regarding the triplet loss, we set the margin parameter α to 0.3 to create an adequate boundary between positive and negative samples. Moreover, we directly use the off-the-shelf ChatGPT 3.5 for explicit prompt generation.
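With the hyperparameters above (margin 0.3, λ_id = 0.25, λ_tri = 1), the person-retrieval objective of Eqs. 11–13 can be sketched as a minimal Python illustration (shown without label smoothing for simplicity; not the authors' implementation):

```python
import math

def id_loss(logits, target):
    """Cross-entropy ID loss over identity prediction logits (Eq. 11)."""
    m = max(logits)
    log_norm = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_norm - logits[target]

def triplet_loss(d_pos, d_neg, margin=0.3):
    """Triplet loss with margin alpha (Eq. 12): penalize the anchor
    only when the positive is not closer than the negative by margin."""
    return max(d_pos - d_neg + margin, 0.0)

def reid_loss(logits, target, d_pos, d_neg, lam_id=0.25, lam_tri=1.0):
    """Weighted combination of ID and triplet losses (Eq. 13)."""
    return lam_id * id_loss(logits, target) + lam_tri * triplet_loss(d_pos, d_neg)
```

Jointly, the triplet term shapes the embedding geometry while the ID term anchors each embedding to its identity class.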
Overall Performance Comparison
To demonstrate the effectiveness of our proposed method, we compared it with several state-of-the-art approaches; we report R@1, R@5, and R@10 for brevity. Table 1 presents the experimental results, and we have the following observations: 1) Our MP-ReID approach achieves better performance on both datasets, significantly outperforming state-of-the-art baselines. This is mainly because the MP-ReID model employs the multi-prompts paradigm to significantly enhance the identification performance, which suggests the presence of highly informative cues in the image and prompt that were neglected in traditional person ReID schemes. 2) Despite recent advancements in attribute-based algorithms for person ReID, several popular methods such as AANet (Chen et al. 2021), AMD (Chen et al. 2021), UCAD (Yan et al. 2022), UPAR (Specker, Cormier, and Beyerer 2023) and CLIP-ReID (Li, Sun, and Li 2023) have demonstrated poor search results due to technological limitations that prevent full utilization of attribute information. On the other hand, image-based methods such as SAN (Jin et al. 2020b), PAT (Li et al. 2021), TransReID (He et al. 2021a), MSDPA (Cheng et al. 2022) and DCAL (Zhu et al. 2022), while effective in some regards, do not take into account attribute information, leading to a potential loss of valuable information for person ReID. The utilization of multi-prompts in MP-ReID significantly improves the retrieval performance of person ReID. In particular, R@1 on the Market1501 dataset and the DukeMTMC-reID dataset improves significantly by at least 5.9% and 6.1%, respectively.
Figure 4: Ablation study on the effect of reducing various sub-prompts for MP-ReID.
Ablation Studies
The overall comparative analysis shows that our proposed MP-ReID solution exhibits superior performance.
To further validate the importance of multi-prompts in ReID, we took CLIP-ReID with an implicit prompt as the baseline and performed several ablation studies. Firstly, MP-ReID is compared with several of its variants: 1) MP-ReID with the coarse and separate attribute prompts; 2) MP-ReID with generated caption prompts from an image captioning model; 3) MP-ReID with/without any ensembled explicit prompts, i.e., ChatGPT-generated prompts and VQA-generated prompts.
1) Ablation study of prompt strategies for MP-ReID. Table 2 displays the performance of different component combinations of MP-ReID. Our conclusions are threefold: a) MP-ReID using coarse and separate attribute words or generated captions shows worse retrieval results than using the prompts generated by the large model ChatGPT. b) Both the explicit prompts and the implicit prompt in the table show relatively better performance.
Figure 5: Visualization of three examples that illustrate the retrieval results of a) the baseline (implicit learnable prompt); b) + coarse and separate attribute words; and c) our MP-ReID. The green box denotes the same ID as the query image, and the red box indicates a different ID from the query image.
c) MP-ReID outperforms MP-ReID w/ VQA generation prompts by 7.9% and 10.7% in mAP on the Market1501 and DukeMTMC-reID datasets. Furthermore, the scheme of MP-ReID w/ ChatGPT generation prompts proved inferior to MP-ReID by 5.3% and 1.7% in mAP on the Market1501 and DukeMTMC-reID datasets. These results further show that using multiple fine prompts is more effective.
2) Ablation study on multiple prompts. As Figure 4 reveals, to gain deeper insight into the effectiveness of multi-prompts learning in MP-ReID, we compared the effect of different numbers of multi-prompts in R@1 on the Market1501 and DukeMTMC-reID datasets.
Here, the graph is presented by subtracting a base value of 89% from the obtained R@1; this presentation is used to enhance the clarity of the graph. Significantly, we have the following observations: a) We use multiple sub-prompts, including 7 VQA prompts, 1 ChatGPT prompt and 1 learnable prompt. We gradually eliminate 1–4 VQA prompts and the ChatGPT prompt until only one learnable prompt remains in our experiments. The results clearly show that more prompts are more effective than fewer prompts for ReID, because fewer prompts carry less learnable information. b) These findings confirm the effectiveness of combining the two explicit prompt components (ChatGPT-generated prompts and VQA-generated prompts) with the implicit prompt component in our MP-ReID approach. In addition, the ablated results reveal the necessity of the multiple prompts proposed in our framework, which jointly result in its superior performance.
Visualization
This paper aims to combine multiple prompt features to enhance visual features. As Figure 5 reveals, to better comprehend our MP-ReID network, we examined and evaluated the person retrieval outcomes through visualization and analysis. Figure 5a displays the results of the baseline CLIP-ReID with implicit learnable prompts. The method that adopts implicit prompts integrating coarse and separate attributes is shown in Figure 5b, and Figure 5c is our MP-ReID enhanced with multiple prompts for ReID. The green box means the same ID as the query image, and the red box reveals a different ID from the query image. We can observe that MP-ReID achieves the optimal performance on Rank-10, which is mainly because of the newly introduced multi-prompts learning. Our proposed prompt learning strategies facilitate the discovery of fine-grained discriminative clues by leveraging more relevant characteristic prompts among samples.
Conclusions
This paper introduces a new concept of multi-prompts and a novel framework for attribute-based ReID, named MP-ReID.
Specifically, we make the first attempt to explore multiple prompt-generation learning strategies with ChatGPT and VQA models, which effectively learn discriminative representations via the generated multi-prompts information. For concrete prompt generation, we classify prompts into explicit prompts and implicit prompts. The explicit prompts are generated by the large model ChatGPT and a VQA model based on a prompt ensembling paradigm, while the implicit prompts are learnable prompts. The model is then refined using well-designed losses that consider textual prompts and visual image constraints to alleviate the modality gap. Our MP-ReID has achieved state-of-the-art performance on two well-known ReID datasets.
Limitations
In this paper, we propose the use of multiple prompts to enhance the person re-identification task, which has been experimentally validated as effective. However, the explicit and implicit integration aspects warrant further exploration. For instance, in terms of quantity, additional prompt methods beyond the current three can be considered. Furthermore, the integration strategy can be further refined. Our current integration strategy is relatively straightforward, but we believe that employing more diverse and tighter integration methods will yield even better results. At the model level, we are particularly intrigued by multi-modal large models, but due to dataset and resource constraints, we have not yet conducted extensive experimentation. We anticipate that larger models and corpora will reveal more intriguing findings.
Acknowledgments
The authors are highly grateful to the anonymous referees for their careful reading and insightful comments.
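The prompt ensembling paradigm mentioned above can be sketched as averaging the embeddings of the individual prompts into a single text feature; the toy encoder below is a deterministic stand-in for a real CLIP-style text encoder, and all names and dimensions are assumptions for illustration.

```python
def toy_text_encoder(prompt, dim=8):
    """Stand-in for a CLIP-style text encoder: a deterministic toy embedding."""
    vec = [0.0] * dim
    for i, ch in enumerate(prompt):
        vec[i % dim] += ord(ch) / 1000.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]  # L2-normalized, as CLIP features usually are

def ensemble_prompts(prompts, dim=8):
    """Prompt ensembling: average the normalized embeddings of all prompts."""
    embs = [toy_text_encoder(p, dim) for p in prompts]
    return [sum(col) / len(embs) for col in zip(*embs)]
```

In the actual framework the averaged feature would then be matched against image features; here the averaging step is the only part being illustrated.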
The work is partially supported by the National Natural Science Foundation of China (No.61802121, No.62302246, No.U20A20174, No.U22A2030), the National Key R&D Projects (No.2022YFB3103500), the Natural Science Foundation of Hunan Province, China (No.2022JJ30159 and No.2023JJ20013), Technology Projects of Hunan Province (No.2015TP1004), Science and Technology Key Projects of Changsha City (No.kh2103003), the Natural Science Foundation of Zhejiang Province, China (No.LQ23F010008), and the China Scholarship Council.
References
Chen, S.; Zeng, Y.; Cao, D.; and Lu, S. 2022. Video-guided machine translation via dual-level back-translation. Knowledge-Based Systems, 245: 108598.
Chen, X.; Liu, X.; Liu, W.; Zhang, X.-P.; Zhang, Y.; and Mei, T. 2021. Explainable person re-identification with attribute-guided metric distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11813–11822.
Cheng, X.; Jia, M.; Wang, Q.; and Zhang, J. 2022. More is better: Multi-source dynamic parsing attention for occluded person re-identification. In Proceedings of the ACM International Conference on Multimedia, 6840–6849.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR.
Farooq, A.; Awais, M.; Kittler, J.; and Khalid, S. S. 2022. AXM-Net: Implicit cross-modal feature alignment for person re-identification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 4477–4485.
Fu, D.; Chen, D.; Yang, H.; Bao, J.; Yuan, L.; Zhang, L.; Li, H.; Wen, F.; and Chen, D. 2022. Large-scale pre-training for person re-identification with noisy labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2476–2486.
He, S.; Luo, H.; Wang, P.; Wang, F.; Li, H.; and Jiang, W. 2021a. TransReID: Transformer-based object re-identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15013–15022.
He, T.; Jin, X.; Shen, X.; Huang, J.; Chen, Z.; and Hua, X.-S. 2021b. Dense interaction learning for video-based person re-identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1490–1501.
Hermans, A.; Beyer, L.; and Leibe, B. 2017. In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737.
Jeong, B.; Park, J.; and Kwak, S. 2021. ASMR: Learning attribute-based person search with adaptive semantic margin regularizer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 12016–12025.
Jia, J.; Chen, X.; and Huang, K. 2021. Spatial and semantic consistency regularizations for pedestrian attribute recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 962–971.
Jia, J.; Gao, N.; He, F.; Chen, X.; and Huang, K. 2022. Learning disentangled attribute representations for robust pedestrian attribute recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, 1, 1069–1077.
Jin, X.; He, T.; Shen, X.; Liu, T.; Wang, X.; Huang, J.; Chen, Z.; and Hua, X.-S. 2022. Meta clustering learning for large-scale unsupervised person re-identification. In Proceedings of the ACM International Conference on Multimedia, 2163–2172.
Jin, X.; Lan, C.; Zeng, W.; and Chen, Z. 2020a. Uncertainty-aware multi-shot knowledge distillation for image-based object re-identification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 11165–11172.
Jin, X.; Lan, C.; Zeng, W.; and Chen, Z. 2023. Domain prompt tuning via meta relabeling for unsupervised adversarial adaptation. IEEE Transactions on Multimedia.
Jin, X.; Lan, C.; Zeng, W.; Wei, G.; and Chen, Z. 2020b. Semantics-aligned representation learning for person re-identification.
In Proceedings of the AAAI Conference on Artificial Intelligence, 07, 11173–11180.
Ju, C.; Han, T.; Zheng, K.; Zhang, Y.; and Xie, W. 2022. Prompting visual-language models for efficient video understanding. In Proceedings of the European Conference on Computer Vision, 105–124. Springer.
Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In Proceedings of the International Conference on Machine Learning.
Li, S.; Sun, L.; and Li, Q. 2023. CLIP-ReID: Exploiting vision-language model for image re-identification without concrete text labels. In Proceedings of the AAAI Conference on Artificial Intelligence.
Li, Y.; He, J.; Zhang, T.; Liu, X.; Zhang, Y.; and Wu, F. 2021. Diverse part discovery: Occluded person re-identification with part-aware transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2898–2907.
Lin, Y.; Zheng, L.; Zheng, Z.; Wu, Y.; Hu, Z.; Yan, C.; and Yang, Y. 2019. Improving person re-identification by attribute and identity learning. Pattern Recognition.
Liu, Y.; Wei, W.; Peng, D.; and Zhu, F. 2022. Declaration-based prompt tuning for visual question answering. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence.
Lüddecke, T.; and Ecker, A. 2022. Image segmentation using text and image prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7076–7086.
Luo, H.; Gu, Y.; Liao, X.; Lai, S.; and Jiang, W. 2019. Bag of tricks and a strong baseline for deep person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops.
Niu, K.; Huang, L.; Huang, Y.; Wang, P.; Wang, L.; and Zhang, Y. 2022. Cross-modal co-occurrence attributes alignments for person search by language. In Proceedings of the ACM International Conference on Multimedia, 4426–4434.
Peng, B.; Li, C.; He, P.; Galley, M.; and Gao, J. 2023. Instruction tuning with GPT-4. arXiv preprint arXiv:2304.03277.
Petroni, F.; Rocktäschel, T.; Lewis, P.; Bakhtin, A.; Wu, Y.; Miller, A.; and Riedel, S. 2019. Language models as knowledge bases? Proceedings of the Association for Computational Linguistics.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, 8748–8763.
Rao, Y.; Zhao, W.; Chen, G.; Tang, Y.; Zhu, Z.; Huang, G.; Zhou, J.; and Lu, J. 2022. DenseCLIP: Language-guided dense prediction with context-aware prompting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18082–18091.
Sennrich, R.; Haddow, B.; and Birch, A. 2016. Neural machine translation of rare words with subword units. In 54th Annual Meeting of the Association for Computational Linguistics, 1715–1725.
Song, X.; Jing, L.; Lin, D.; Zhao, Z.; Chen, H.; and Nie, L. 2022. V2P: Vision-to-prompt based multi-modal product summary generation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 992–1001.
Specker, A.; Cormier, M.; and Beyerer, J. 2023. UPAR: Unified pedestrian attribute recognition and person retrieval. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 981–990.
Wang, X.; Zheng, S.; Yang, R.; Zheng, A.; Chen, Z.; Tang, J.; and Luo, B. 2022a. Pedestrian attribute recognition: A survey. Pattern Recognition, 121: 108220.
Wang, Z.; Wang, Z.; Zheng, Y.; Wu, Y.; Zeng, W.; and Satoh, S. 2021. Beyond intra-modality: A survey of heterogeneous person re-identification.
In Proceedings of the International Joint Conference on Artificial Intelligence, 4973–4980.
Wang, Z.; Zhang, Z.; Lee, C.; Zhang, H.; Sun, R.; Ruoxi, X.; Su, G.; Perot, V.; Dy, J.; and Pfister, T. 2022b. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 139–149.
Wu, H.; Ma, B.; Liu, W.; Chen, T.; and Nie, D. 2022. Fast and constrained absent keyphrase generation by prompt-based learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 10, 11495–11503.
Yan, Y.; Yu, H.; Li, S.; Lu, Z.; He, J.; Zhang, H.; and Wang, R. 2022. Weakening the influence of clothing: Universal clothing attribute disentanglement for person re-identification. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, 1523–1529.
Ye, M.; Shen, J.; Lin, G.; Xiang, T.; Shao, L.; and Hoi, S. C. 2021. Deep learning for person re-identification: A survey and outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6): 2872–2893.
Yu, Z.; Pei, J.; Zhu, M.; Zhang, J.; and Li, J. 2022. Multi-attribute adaptive aggregation transformer for vehicle re-identification. Information Processing & Management, 59(2): 102868.
Yu, Z.; Yu, J.; Cui, Y.; Tao, D.; and Tian, Q. 2019. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6281–6290.
Zeng, Y. 2022. Point prompt tuning for temporally language grounding. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2003–2007.
Zhai, Y.; Zeng, Y.; Cao, D.; and Lu, S. 2022. TriReID: Towards multi-modal person re-identification via descriptive fusion model. In Proceedings of the International Conference on Multimedia Retrieval, 63–71.
Zhang, C.; Zhang, C.; Li, C.; Qiao, Y.; Zheng, S.; Dam, S.
K.; Zhang, M.; Kim, J. U.; Kim, S. T.; Choi, J.; et al. 2023. One small step for generative AI, one giant leap for AGI: A complete survey on ChatGPT in AIGC era. arXiv preprint arXiv:2304.06488.
Zhang, J.; Niu, L.; and Zhang, L. 2020. Person re-identification with reinforced attribute attention selection. IEEE Transactions on Image Processing, 30: 603–616.
Zhang, Z.; Zhang, H.; and Liu, S. 2021. Person re-identification using heterogeneous local graph attention networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12136–12145.
Zheng, A.; Pan, P.; Li, H.; Li, C.; Luo, B.; Tan, C.; and Jia, R. 2022. Progressive attribute embedding for accurate cross-modality person re-id. In Proceedings of the ACM International Conference on Multimedia, 4309–4317.
Zheng, L.; Shen, L.; Tian, L.; Wang, S.; Wang, J.; and Tian, Q. 2015. Scalable person re-identification: A benchmark. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1116–1124.
Zheng, L.; Zhang, H.; Sun, S.; Chandraker, M.; Yang, Y.; and Tian, Q. 2017. Person re-identification in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1367–1376.
Zheng, Z.; Zheng, L.; and Yang, Y. 2017. Unlabeled samples generated by GAN improve the person re-identification baseline in vitro. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3754–3762.
Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022a. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16816–16825.
Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022b. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348.
Zhu, H.; Ke, W.; Li, D.; Liu, J.; Tian, L.; and Shan, Y. 2022. Dual cross-attention learning for fine-grained visual categorization and object re-identification.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4692–4702.
Zhu, J.; Jin, J.; Yang, Z.; Wu, X.; and Wang, X. 2023. Learning CLIP guided visual-text fusion transformer for video-based pedestrian attribute recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2625–2628.
Mono3DVG: 3D Visual Grounding in Monocular Images
Yang Zhan1, Yuan Yuan1*, Zhitong Xiong2*
1School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, China
2Technical University of Munich (TUM), Munich, Germany
{zhanyangnwpu, y.yuan1.ieee, xiongzhitong}@gmail.com
Abstract
We introduce a novel task of 3D visual grounding in monocular RGB images using language descriptions with both appearance and geometry information. Specifically, we build a large-scale dataset, Mono3DRefer, which contains 3D object targets with their corresponding geometric text descriptions, generated by ChatGPT and refined manually. To foster this task, we propose Mono3DVG-TR, an end-to-end transformer-based network, which takes advantage of both the appearance and geometry information in text embeddings for multi-modal learning and 3D object localization. A depth predictor is designed to explicitly learn geometry features. A dual text-guided adapter is proposed to refine multi-scale visual and geometry features of the referred object. Based on depth-text-visual stacking attention, the decoder fuses object-level geometric cues and visual appearance into a learnable query. Comprehensive benchmarks and some insightful analyses are provided for Mono3DVG. Extensive comparisons and ablation studies show that our method significantly outperforms all baselines. The dataset and code will be released.
Introduction
For intelligent systems and robots, understanding objects based on language expressions in real 3D scenes is an important capability for human-machine interaction. Visual grounding (Deng et al. 2021; Yang et al. 2022; Zhan, Xiong, and Yuan 2023) has made significant progress in 2D scenes, but these approaches cannot obtain the true 3D extent of the objects. Therefore, recent studies (Chen, Chang, and Nießner 2020; Achlioptas et al. 2020) utilize RGB-D sensors for 3D scanning and build indoor point cloud scenes for 3D visual grounding.
The latest work (Lin et al. 2023) focuses on outdoor service robots and utilizes LiDAR and an industrial camera to capture point clouds and RGB images as multi-modal visual inputs. However, the practical application of these works is limited due to the expensive cost and device limitations of RGB-D scans and LiDAR scans. Monocular 3D object detection (Huang et al. 2022a; Brazil et al. 2023) can obtain the 3D coordinates of all objects in the scene and only requires RGB images. While this approach has broad applications, it overlooks the semantic understanding of the 3D space and its objects, making it unable to accomplish specific object localization based on human instructions. To carry out more effective human-machine interaction on devices equipped with cameras, such as drones, surveillance systems, intelligent vehicles, and robots, it is necessary to perform visual grounding using natural language in monocular RGB images. In this work, we introduce a task of 3D object localization through language descriptions with geometry information directly in a single RGB image, termed Mono3DVG (see Fig. 1). Specifically, we build a large-scale dataset, Mono3DRefer, which provides 41,140 natural language expressions of 8,228 objects. Mono3DRefer's descriptions contain both appearance and geometry information, generated by ChatGPT and refined manually. Geometry information can provide more precise instructions and identify invisible objects. Even though the appearance of an object is the primary visual cue for humans, they tend to use geometry information to distinguish objects.
*Corresponding Authors.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
To perform inference based on the language with appearance and geometry information, we propose a novel end-to-end transformer-based approach, namely Mono3DVG-TR, which consists of a multi-modal feature encoder, a dual text-guided adapter, a grounding decoder, and a grounding head. First, we adopt a transformer and a CNN to extract textual and multi-scale visual features. A depth predictor is designed to explicitly learn geometry features. Second, to refine the multi-scale visual and geometry features of the referred object, we propose the dual text-guided adapter to perform text-guided feature learning based on pixel-wise attention. Finally, a learnable query first aggregates the initial geometric features, then enhances text-related geometric features by the text embedding, and finally collects appearance features from the multi-scale visual features. The depth-text-visual stacking attention fuses object-level geometric cues and visual appearance into the query, fully realizing text-guided decoding. Our contributions can be summarized as follows:
• We introduce a novel task of 3D visual grounding in monocular RGB images using descriptions with appearance and geometry information, termed Mono3DVG.
• We contribute a large-scale dataset, named Mono3DRefer, which contains 41,140 expressions generated by ChatGPT and refined manually based on KITTI.
• We propose an end-to-end transformer-based network,
Mono3DVG-TR, which fully aggregates the appearance and geometry features in multi-modal embedding.
• We provide sufficient benchmarks based on two-stage and one-stage methods. Extensive experiments show that our method significantly outperforms all baselines.
Figure 1: Introduction for 3D visual grounding in monocular images (Mono3DVG). (a) Mono3DVG aims to localize the true 3D extent of referred objects in an image using language descriptions with geometry information, in contrast to a traditional query such as Q2: "A grey car, the second one on the left side of the road, is on the top right of the red car." (b) The counterpart 2D task does not capture the 3D extent of the referred object. (c) Localizing specific objects is not feasible for monocular 3D object detection. (d) 3D visual grounding requires laser radars or RGB-D sensors, which greatly limits its application scenarios.
Dataset | Publication | Expression Num. | Object Num. | Scene Num. | Range | Exp. Length | Vocab | Scene | Target
SUN-Spot | ICCVW'2019 | 7,990 | 3,245 | 1,948 | – | 14.04 | 2,690 | Indoor | furni.
REVERIE | CVPR'2020 | 21,702 | 4,140 | 90 | – | 18.00 | 1,600 | Indoor | furni.
ScanRefrer | ECCV'2020 | 51,583 | 11,046 | 704 | 10m | 20.27 | 4,197 | Indoor | furni.
Sr3d | ECCV'2020 | 83,572* | 8,863 | 1,273 | 10m | – | 196 | Indoor | furni.
Nr3d | ECCV'2020 | 41,503 | 5,879 | 642 | 10m | 11.40 | 6,951 | Indoor | furni.
SUNRefer | CVPR'2021 | 38,495 | 7,699 | 7,699 | – | 16.30 | 5,279 | Indoor | furni.
STRefer | arXiv'2023 | 5,458 | 3,581 | 662 | 30m | – | – | Outdoor | human
LifeRefer | arXiv'2023 | 25,380 | 11,864 | 3,172 | 30m | – | – | In/Outdoor | human
Mono3DRefer | – | 41,140 | 8,228 | 2,025 | 102m | 53.24 | 5,271 | Outdoor | human, vehicle
Table 1: Statistic comparison of visual grounding datasets in the 3D scene, where 'num.' denotes number, 'exp.' indicates expression, and 'furni.' means furniture. '*' represents the unique text data automatically generated and the largest amount.
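As a toy illustration of the depth-text-visual stacking attention used in the grounding decoder, the sketch below runs a single learnable query through three cross-attention steps in the order described (depth, then text, then visual). The single-head, pure-Python formulation and all dimensions are assumptions for illustration, not the paper's implementation.

```python
import math

def attend(query, keys, values):
    """One query vector cross-attending over lists of key/value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    weights = [e / z for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

def stacking_decoder_step(query, depth_feats, text_feats, visual_feats):
    """One decoder step: the query gathers geometry cues first, then
    text-related cues, then visual appearance (depth -> text -> visual)."""
    q = attend(query, depth_feats, depth_feats)
    q = attend(q, text_feats, text_feats)
    q = attend(q, visual_feats, visual_feats)
    return q
```

Stacking several such steps (with residual connections and feed-forward layers, omitted here) would mirror the N-layer decoder shown in Figure 3.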
Related Work
2D Visual Grounding
The earlier two-stage approaches (Zhang, Niu, and Chang 2018; Hu et al. 2017; Yu et al. 2018a; Liu et al. 2019b; Yu et al. 2018b; Chen, Kovvuri, and Nevatia 2017) adopt a pre-trained detector to generate region proposals and extract visual features, and obtain the optimal proposal by scoring proposals with vision-language features and sorting. Additionally, NMTree (Liu et al. 2019a) and RvG-Tree (Hong et al. 2022) utilize tree networks by parsing the expression. To capture objects' relations, graph neural networks are adopted by Yang, Li, and Yu (2019); Wang et al. (2019); Yang, Li, and Yu (2020). Recently, the one-stage pipeline has been widely used due to its low computational cost. Many works (Chen et al. 2018; Sadhu, Chen, and Nevatia 2019; Yang et al. 2019, 2020; Huang et al. 2021; Liao et al. 2022) use visual and text encoders to extract visual and textual features, and then fuse the multi-modal features to regress box coordinates; they do not depend on the quality of pre-generated proposals. Du et al. (2022) and Deng et al. (2021) first design end-to-end transformer-based networks, which have achieved superior results in terms of both speed and performance. Li and Sigal (2021) and Sun et al. (2022) propose multi-task frameworks to further improve the performance. Yang et al. (2022) and Ye et al. (2022) focus on adjusting visual features with multi-modal features. Mauceri, Palmer, and Heckman (2019) present a dataset for 2D visual grounding in RGB-D images. Qi et al. (2020) study 2D visual grounding for language-guided navigation in indoor scenes. However, these works cannot obtain the true 3D coordinates of the object in the real world, which greatly limits their application.
Monocular 3D Object Detection
These methods can be summarized into anchor-based, keypoint-based, and pseudo-depth-based methods. The anchor-based method requires preset 3D anchors and regresses a relative offset.
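Decoding a box from a preset anchor plus a regressed offset can be sketched as follows; the parameterization (center shifts plus log-scale size ratios) is one widely used convention and an assumption here, since the exact encoding varies across the methods cited.

```python
import math

def decode_anchor(anchor, offset):
    """Anchor-based decoding sketch: a preset 3D anchor (x, y, z, h, w, l)
    plus a regressed relative offset yields the predicted 3D box."""
    box = {}
    for k in ("x", "y", "z", "h", "w", "l"):
        if k in ("h", "w", "l"):
            # sizes are often regressed as log-scale ratios of the anchor size
            box[k] = anchor[k] * math.exp(offset[k])
        else:
            # centers are regressed as shifts relative to the anchor position
            box[k] = anchor[k] + offset[k]
    return box
```

With a zero offset the decoded box is exactly the anchor, which is the intended identity of this encoding.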
M3D-RPN (Brazil and Liu 2019) is an end-to-end network that only requires training a 3D region proposal network. Kinematic3D (Brazil et al. 2020) improves M3D-RPN by utilizing 3D kinematics to extract scene dynamics. Furthermore, some researchers predict key points and then estimate the size and location of 3D bounding boxes, such as SMOKE (Liu, Wu, and Tóth 2020), FCOS3D (Wang et al. 2021), MonoGRNet (Qin, Wang, and Lu 2019), and MonoFlex (Zhang, Lu, and Zhou 2021). However, due to the lack of depth information, pure monocular approaches have difficulty accurately localizing targets. Other works (Bao, Xu, and Chen 2020; Ding et al. 2020; Park et al. 2021; Chen, Dai, and Ding 2022) utilize extra depth estimators to supplement depth information. However, existing models only extract spatial relationships and depth information from visual content. Hence, we propose to explore the impact of language with geometry attributes on 3D object detection.
Dataset | Language Context (Form / Cost) | Visual Context (Form / Cost) | Label | Task
SUN-Spot | manual / ★★★★★ | RGB-D / ★★★☆☆ | 2D bbox | 2D Visual Grounding in RGBD
REVERIE | manual / ★★★★★ | pc / ★★★★☆ | 2D bbox | Localise Remote Object
ScanRefrer | manual / ★★★★★ | pc / ★★★★☆ | 3D bbox | 3D Visual Grounding
Sr3d | templated / ★☆☆☆☆ | pc / ★★★★☆ | 3D bbox | 3D Visual Grounding
Nr3d | manual / ★★★★★ | pc / ★★★★☆ | 3D bbox | 3D Visual Grounding
SUNRefer | manual / ★★★★★ | RGB-D / ★★★☆☆ | 3D bbox | 3D Visual Grounding in RGBD
STRefer | manual / ★★★★★ | pc & RGB / ★★★★★ | 3D bbox | 3D Visual Grounding in the Wild
LifeRefer | manual / ★★★★★ | pc & RGB / ★★★★★ | 3D bbox | 3D Visual Grounding in the Wild
Mono3DRefer | ChatGPT+manual / ★★☆☆☆ | RGB / ★☆☆☆☆ | 2D/3D bbox | 3D Visual Grounding in RGB
Table 2: The form, cost, and label of the datasets collected in Table 1 and the corresponding tasks. 'pc' denotes point cloud and 'bbox' means bounding box.
Step 1: Attribute Extraction (Image Pool). 2D visual attributes (appearance information): i) appearance: grey; ii) occlusion: no; iii) place: left side of the road; iv) ordinal number: second; v) state: parking. 3D spatial attributes (geometric information): i) height/length: 1.62/3.75 m; ii) orientation: facing me; iii) distance: within 10 m; iv) azimuth: about 10° north-west; v) spatial relation: in front of the red car.
Step 2: Expression Generation. Prompt template: "I hope you can play the role of making English sentences. Target object: __, about {:.1f} m in height, about {:.1f} m in length, {appearance}, relative to my position: {azimuth}, distance from me: __; It is in/on {place}, is {ordinal number}, state: __, its orientation is {}, spatial relation: __, case of occlusion: __. You'll generate more concise English descriptions. Understand the meaning from the phrases I have provided, and form one long sentence or several short sentences. Please do not add additional extraneous information or description beyond the description I have provided. Create descriptions as required." Description generated by ChatGPT: "The second car on the left side of the road, positioned less than 10 meters away from me, is a gray vehicle measuring around 1.6 meters in height and 3.7 meters in length. It's parked in front of the red car and facing directly towards me."
Step 3: Verification.
Figure 2: Our data collection pipeline: i) 2D visual attributes that provide appearance information and 3D spatial attributes that provide geometric information of the target are extracted. ii) fill in the prompt template we designed with attributes, and input the complete prompt into ChatGPT to get descriptions. iii) check whether the description can uniquely identify the object.
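Step ii) of the pipeline can be sketched as template filling before querying ChatGPT; the simplified template and field names below are assumptions distilled from the template shown in Figure 2.

```python
def build_prompt(attrs):
    """Fill a simplified version of the Figure-2 prompt template with the
    extracted 2D visual and 3D spatial attributes (field names assumed)."""
    template = (
        "Target object: {category}, about {height:.1f} m in height, "
        "about {length:.1f} m in length, {appearance}, "
        "relative to my position: {azimuth}, distance from me: {distance}; "
        "It is on {place}, is {ordinal}, state: {state}, "
        "its orientation is {orientation}, spatial relation: {relation}, "
        "case of occlusion: {occlusion}."
    )
    return template.format(**attrs)

# The attribute values for the running example from Figure 2.
attrs = {
    "category": "car", "height": 1.62, "length": 3.75,
    "appearance": "grey", "azimuth": "about 10 degrees north-west",
    "distance": "within 10 m", "place": "the left side of the road",
    "ordinal": "second", "state": "parking", "orientation": "facing me",
    "relation": "in front of the red car", "occlusion": "no",
}
prompt = build_prompt(attrs)
```

The resulting string would then be prepended with the role instruction and sent to ChatGPT, whose output is checked in the verification step.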
3D Visual Grounding

To handle this task, ScanRefer (Chen, Chang, and Nießner 2020) and Referit3D (Achlioptas et al. 2020) first create datasets. Similar to the counterpart 2D task, earlier works adopt a two-stage pipeline, which uses a pre-trained detector, such as PointNet++ (Qi et al. 2017), to generate object proposals and extract features. SAT (Yang et al. 2021) adopts 2D object semantics as extra input to assist training. InstanceRefer (Yuan et al. 2021) converts this task into an instance matching problem. To understand complex and diverse descriptions in point clouds directly, Feng et al. (2021) construct a language scene graph, a 3D proposal relation graph, and a 3D visual graph. 3DVG-Trans (Zhao et al. 2021), TransRefer3D (He et al. 2021), Multi-View Trans (Huang et al. 2022b), and LanguageRefer (Roh et al. 2022) all develop transformer-based architectures. D3Net (Chen et al. 2022) and 3DJCG (Cai et al. 2022) both develop a unified framework for dense captioning and visual grounding. Liu et al. (2021) present a novel task for 3D visual grounding in RGB-D images. These previous works all operate in indoor environments and target furniture as the objects. To promote wider application, Lin et al. (2023) introduce the task in large-scale dynamic outdoor scenes based on online captured 2D images and 3D point clouds. However, capturing visual data through LiDAR or industrial cameras is expensive and not readily available for a wide range of applications. Our work focuses on 3D visual grounding in a single image.

Mono3DRefer Dataset

As shown in Table 1 and Table 2, the earlier SUN-Spot (Mauceri, Palmer, and Heckman 2019) and REVERIE (Qi et al. 2020) only focus on 2D bounding boxes in the 3D scene. Subsequently, ScanRefer (Chen, Chang, and Nießner 2020), Sr3d, Nr3d (Achlioptas et al. 2020), and SUNRefer (Liu et al. 2021) are built to investigate 3D visual grounding, but they are limited to indoor static scenes. Although STRefer and LifeRefer (Lin et al. 2023) focus on outdoor dynamic scenes, they require LiDAR and industrial cameras. To facilitate the broad application of 3D visual grounding, we employ both manual annotation and ChatGPT to annotate a large-scale dataset based on KITTI (Geiger, Lenz, and Urtasun 2012) for Mono3DVG.

Figure 3: Overview of the proposed framework. The multi-modal feature encoder first extracts textual, multi-scale visual, and geometry features. The dual text-guided adapter refines visual and geometry features of referred objects based on pixel-wise attention. A learnable query fuses geometry cues and visual appearance of the object using depth-text-visual stacking attention in the grounding decoder. Finally, the grounding head adopts multiple MLPs to predict the 2D and 3D attributes of the target.

Data Annotation

To cover all scenes and reduce inter-frame similarity, we performed scene clustering on the original KITTI dataset and sampled 2,025 images across the clustered categories. The annotation pipeline of Fig. 2 consists of three stages.

Step 1: Attribute extraction.
The attributes of objects are divided into 2D visual attributes (appearance, occlusion, place, ordinal number, state) and 3D spatial attributes (height/length, orientation, distance, azimuth, spatial relation). The color of the appearance is preliminarily extracted by an HSV color-recognition method. Occlusion and height/length are directly obtained from the labels of the raw KITTI dataset. Based on the 302 categories produced by scene clustering, unified rough annotations are performed for the scene place and the state of the objects in each category. Distance and azimuth are calculated from the coordinates of the 3D boxes. Spatial relations include i) horizontal proximity, ii) between, and iii) allocentric relations, such as far from, next to, between A and B, on the left, and in front. A judgment model based on 3D boxes and spatial geometry preliminarily extracts the ordinal number, orientation, and spatial relation. Finally, to ensure correctness, we organize four people to verify and correct the 2D and 3D attributes that provide appearance and geometric information.

Step 2: Expression generation. We customize a prompt template for generating expressions with ChatGPT. The template is filled with each attribute of an object, and the complete prompt is input into ChatGPT to obtain the description.

Step 3: Verification. To guarantee the correctness of the descriptions, four persons from our team jointly verify the dataset.

Dataset Statistics

Table 1 summarizes the statistical information of the dataset. We sample 2,025 image frames from the original KITTI for Mono3DRefer, containing 41,140 expressions in total with a vocabulary of 5,271 words. Apart from Sr3d, which is generated through templates, Mono3DRefer has a similar number of expressions to ScanRefer and Nr3d.
For the ranges in Table 1, 10 m is the range of the whole scene pre-scanned by RGB-D sensors, 30 m is the approximate perception radius with annotations for the LiDAR sensor, and 102 m is the distance range of annotated objects in our dataset. The average length of the expressions generated by ChatGPT is 53.24 words, involving both visual appearance and geometry information. Table 2 shows that the Mono3DVG task has relatively low language data collection costs and the lowest visual data collection costs. We provide more detailed statistics and analyses in the supplementary materials.

Methodology

As shown in Fig. 3, we propose an end-to-end transformer-based framework, Mono3DVG-TR, which consists of four main modules: 1) the encoder; 2) the adapter; 3) the decoder; and 4) the grounding head.

Multi-modal Feature Encoder

We leverage pre-trained RoBERTa-base (Liu et al. 2019c) and a linear layer to extract the textual embeddings $p_t \in \mathbb{R}^{C \times N_t}$, where $N_t$ is the length of the input sentence. For the image $I \in \mathbb{R}^{H \times W \times 3}$, we utilize a CNN backbone (i.e., ResNet-50 (He et al. 2016) with an additional convolutional layer) and a linear layer to obtain four-level multi-scale visual features $f_v \in \mathbb{R}^{C \times N_v}$, where $C = 256$ and $N_v = \frac{H}{8} \times \frac{W}{8} + \frac{H}{16} \times \frac{W}{16} + \frac{H}{32} \times \frac{W}{32} + \frac{H}{64} \times \frac{W}{64}$. Following Zhang et al. (2022), we use a lightweight depth predictor to obtain the geometry feature $f_g \in \mathbb{R}^{C \times N_g}$, where $N_g = \frac{H}{16} \times \frac{W}{16}$. Then we design a visual encoder and a depth encoder to conduct global context inference and generate embeddings with long-term dependencies, denoted as $p_v \in \mathbb{R}^{C \times N_v}$ and $p_g \in \mathbb{R}^{C \times N_g}$. The depth encoder is composed of one transformer encoder layer to encode the geometry embeddings. In Fig. 4(a), the visual encoder replaces multi-head self-attention (MHSA) with multi-scale deformable attention (MSDA) to avoid excessive attention computation on the multi-scale visual features. Moreover, we insert an additional multi-head cross-attention (MHCA) layer between the MSDA layer and the feed-forward network (FFN), providing textual cues for the visual embeddings.

Figure 4: Detail of the visual encoder (a) and depth encoder (b).

Dual Text-guided Adapter

To exploit the appearance and geometry information in the text, the dual adapter is proposed. In Fig. 5(b), the depth adapter takes the geometry embedding $p_g$ as the query for MHCA and takes the text embedding $p_t$ as the key and value. Then, a multi-head attention (MHA) layer applies implicit text-guided self-attention to the geometry features, with the original geometry embedding $p_g$ as the value. The refined geometry feature is denoted as $p''_g$. The visual adapter splits and concatenates the multi-scale visual embeddings $p_v$ before and after MHCA, which uses $p_v^{\frac{1}{16}}$, of size $\frac{H}{16} \times \frac{W}{16}$, as the query. Then, MSDA is used instead of MHA, and the refined visual feature is denoted as $p''_v$.

Figure 5: Detail of the text-guided visual and depth adapters.

Then, we linearly project $p_v^{\frac{1}{16}}$ and the output of MHCA in the visual adapter to obtain the original visual feature map $F_{orig} \in \mathbb{R}^{C \times \frac{H}{16} \times \frac{W}{16}}$ and the text-related map $F_{text} \in \mathbb{R}^{C \times \frac{H}{16} \times \frac{W}{16}}$, respectively. To explore the alignment relationship and fine-grained correlation between vision and language, we compute the attention score $s_{ij} \in \mathbb{R}^{\frac{H}{16} \times \frac{W}{16}}$ for each region $(i, j)$ in the feature map as follows:

$$F_{orig} = \|F_{orig}\|_2, \quad F_{text} = \|F_{text}\|_2, \tag{1}$$

$$a^c_{ij} = F^c_{orig}(i, j) \odot F^c_{text}(i, j), \quad c = 1, 2, \ldots, C, \tag{2}$$

$$s_{ij} = \sum_{c=1}^{C} a^c_{ij}, \tag{3}$$

where $\|\cdot\|_2$ and $\odot$ indicate the $l_2$-norm and element-wise product, respectively.
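Eqs. (1)-(3) amount to a per-pixel cosine-style similarity between the two feature maps. A minimal pure-Python sketch, assuming the $l_2$-normalisation is applied along the channel dimension at each pixel:

```python
import math

def pixelwise_attention(F_orig, F_text, eps=1e-8):
    """Sketch of Eqs. (1)-(3): per-pixel attention score.

    F_orig, F_text: nested lists shaped [C][H][W]. Each pixel's C-dim
    feature vector is l2-normalised (Eq. 1), channel-wise products are
    taken (Eq. 2) and summed over channels (Eq. 3), giving s_ij."""
    C, H, W = len(F_orig), len(F_orig[0]), len(F_orig[0][0])

    def normalise(F, i, j):
        v = [F[c][i][j] for c in range(C)]
        n = math.sqrt(sum(x * x for x in v)) + eps
        return [x / n for x in v]

    S = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            a = normalise(F_orig, i, j)
            b = normalise(F_text, i, j)
            S[i][j] = sum(a[c] * b[c] for c in range(C))
    return S
```

With identical feature maps the score at every pixel approaches 1, and orthogonal per-pixel features give a score near 0, which matches the intended behaviour of the alignment score.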
Then, we further model the semantic similarity $S^{\frac{1}{16}}$, of size $\frac{H}{16} \times \frac{W}{16}$, between each pixel feature and the text feature using the Gaussian function:

$$S^{\frac{1}{16}} = \alpha \cdot \exp\left(-\frac{(1 - s_{ij})^2}{2\sigma^2}\right), \tag{4}$$

where $\alpha$ and $\sigma$ are a scaling factor and standard deviation, respectively, and both are learnable parameters. We upsample $S^{\frac{1}{16}}$ using bilinear interpolation and downsample $S^{\frac{1}{16}}$ using max pooling. Then we concatenate the flattened score maps to obtain the multi-scale attention score $S \in \mathbb{R}^{N_v}$:

$$S = \mathrm{Concat}[\mathrm{Up}(S^{\frac{1}{16}}),\; S^{\frac{1}{16}},\; \mathrm{Down}(S^{\frac{1}{16}}),\; \mathrm{Down}(S^{\frac{1}{16}})]. \tag{5}$$

Based on the pixel-wise attention scores, the visual and geometry features are focused on the regions relevant to the textual description. We use the features $p''_v$ and $p''_g$ and the scores ($S^{\frac{1}{16}} \in \mathbb{R}^{N_d}$ is flattened) to perform element-wise multiplication, resulting in adapted features of the referred object:

$$\tilde{p}_v = p''_v \cdot S, \quad \tilde{p}_g = p''_g \cdot S^{\frac{1}{16}}. \tag{6}$$

Grounding Decoder

As shown in Fig. 3, the n-th decoder layer consists of a block composed of MHA, MHCA, and MSDA, followed by an FFN. The learnable query $p_q \in \mathbb{R}^{C \times 1}$ first aggregates the initial geometric information, then enhances text-related geometric features by the text embedding, and finally collects appearance features from the multi-scale visual features. This depth-text-visual stacking attention adaptively fuses object-level geometric cues and visual appearance into the query.

Grounding Head

Our grounding head employs multiple MLPs for 2D and 3D attribute prediction. The output of the decoder, i.e., the learnable query, is denoted by $\tilde{p}_q \in \mathbb{R}^{C \times 1}$. Then, $\tilde{p}_q$ is separately fed into a linear layer for predicting the object category, a 3-layer MLP for the 2D box size $(l, r, t, b)$ and the projected 3D box center $(x_{3D}, y_{3D})$, a 2-layer MLP for the 3D box size $(h_{3D}, w_{3D}, l_{3D})$, a 2-layer MLP for the 3D box orientation $\theta$, and a 2-layer MLP for the depth $d_{reg}$. $(l, r, t, b)$ represents the distances between the four sides of the 2D box and the projected 3D center point $(x_{3D}, y_{3D})$.
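The pixel-wise similarity of Eq. (4) above peaks when the raw score $s_{ij}$ equals 1. A sketch with fixed values for $\alpha$ and $\sigma$ (which are learnable in the model):

```python
import math

def gaussian_similarity(s, alpha=1.0, sigma=0.5):
    """Sketch of Eq. (4): map a raw attention score s_ij to a semantic
    similarity. alpha and sigma are learnable parameters in the paper;
    they are fixed here purely for illustration."""
    return alpha * math.exp(-((1.0 - s) ** 2) / (2.0 * sigma ** 2))
```

Scores close to 1 (well-aligned pixels) are mapped near $\alpha$, while poorly aligned pixels are suppressed, which is what lets Eq. (6) focus the adapted features on the referred object.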
Similar to (Zhang et al. 2022), the final predicted depth $d_{pred}$ is computed.

Loss Function

We group the category, 2D box size, and projected 3D center as 2D attributes, and the 3D box size, orientation, and depth as 3D attributes. The loss for 2D is formulated as:

$$L_{2D} = \lambda_1 L_{class} + \lambda_2 L_{lrtb} + \lambda_3 L_{GIoU} + \lambda_4 L_{xy3D}, \tag{7}$$

where $\lambda_{1\sim4}$ is set to $(2, 5, 2, 10)$ following (Zhang et al. 2022). $L_{class}$ is the Focal loss (Lin et al. 2017) for predicting nine categories. $L_{lrtb}$ and $L_{xy3D}$ adopt the L1 loss. $L_{GIoU}$ is the GIoU loss (Rezatofighi et al. 2019) that constrains the 2D bounding boxes. The loss for 3D is defined as:

$$L_{3D} = L_{size3D} + L_{orien} + L_{depth}. \tag{8}$$

We use the 3D IoU oriented loss (Ma et al. 2021), MultiBin loss (Chen et al. 2020), and Laplacian aleatoric uncertainty loss (Chen et al. 2020) as $L_{size3D}$, $L_{orien}$, and $L_{depth}$ to optimize the predicted 3D size, orientation, and depth. Following (Zhang et al. 2022), we use the Focal loss to supervise the prediction of the depth map, denoted as $L_{dmap}$. Finally, our overall loss is formulated as:

$$L_{overall} = L_{2D} + L_{3D} + L_{dmap}. \tag{9}$$

Method | Type | Unique (Acc@0.25 / Acc@0.5) | Multiple (Acc@0.25 / Acc@0.5) | Overall (Acc@0.25 / Acc@0.5) | Time cost (ms)
CatRand | Two-Stage | 100 / 100 | 24.47 / 24.43 | 38.69 / 38.67 | 0
Cube R-CNN + Rand | Two-Stage | 32.76 / 14.61 | 13.36 / 7.21 | 17.02 / 8.60 | 153
Cube R-CNN + Best | Two-Stage | 35.29 / 16.67 | 60.52 / 32.99 | 55.77 / 29.92 | 153
ZSGNet + backproj | One-Stage | 9.02 / 0.29 | 16.56 / 2.23 | 15.14 / 1.87 | 31
FAOA + backproj | One-Stage | 11.96 / 2.06 | 13.79 / 2.12 | 13.44 / 2.11 | 144
ReSC + backproj | One-Stage | 11.96 / 0.49 | 23.69 / 3.94 | 21.48 / 3.29 | 97
TransVG + backproj | Tran.-based | 15.78 / 4.02 | 21.84 / 4.16 | 20.70 / 4.14 | 80
Mono3DVG-TR (Ours) | Tran.-based | 57.65 / 33.04 | 65.92 / 46.85 | 64.36 / 44.25 | 110

Table 3: Comparison with baselines. The underline means performance exceeding our bolded results.

Experiments

Implementation Details. We split our dataset into 29,990, 5,735, and 5,415 expressions for the train/val/test sets, respectively.
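The weighting in Eqs. (7)-(9) can be sketched as follows; the scalar loss values here are placeholders, whereas in training each term is a differentiable tensor:

```python
def overall_loss(l2d_terms, l3d_terms, l_dmap, lambdas=(2.0, 5.0, 2.0, 10.0)):
    """Sketch of Eqs. (7)-(9).

    l2d_terms: (L_class, L_lrtb, L_GIoU, L_xy3D), weighted by lambda_1..4.
    l3d_terms: (L_size3D, L_orien, L_depth), summed without weights.
    l_dmap:    depth-map supervision term."""
    l_2d = sum(lam * t for lam, t in zip(lambdas, l2d_terms))  # Eq. (7)
    l_3d = sum(l3d_terms)                                      # Eq. (8)
    return l_2d + l_3d + l_dmap                                # Eq. (9)
```

The default weights (2, 5, 2, 10) follow the values stated for $\lambda_{1\sim4}$.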
We train for 60 epochs with a batch size of 10 using AdamW with a $10^{-4}$ learning rate and $10^{-4}$ weight decay on one RTX 3090 24-GiB GPU. The learning rate decays by a factor of 10 after 40 epochs. The dropout ratio is set to 0.1.

Evaluation metric. Similar to (Chen, Chang, and Nießner 2020; Liu et al. 2021; Lin et al. 2023), we use the accuracy under a 3D IoU threshold (Acc@0.25 and Acc@0.5) as our metric, where the threshold is 0.25 or 0.5.

Baselines. To explore the difficulty of the task and enable fair comparisons, we design several baselines and validate all methods under a unified standard. Two-stage: 1) CatRand randomly selects a ground-truth box that matches the object category as the prediction result; this baseline measures the difficulty of our task and dataset. 2) Cube R-CNN + Rand randomly selects a bounding box that matches the object category from the predicted object proposals of Cube R-CNN (Brazil et al. 2023), the best monocular 3D object detector. 3) Cube R-CNN + Best selects the bounding box that best matches the ground-truth box from the predicted object proposals; this baseline provides an upper bound on how well two-stage approaches can work for our task. One-stage: the 2DVG backproj baselines adapt the results of 2D visual grounding to 3D by back-projection. We select three SOTA one-stage methods, i.e., ZSGNet (Sadhu, Chen, and Nevatia 2019), FAOA (Yang et al. 2019), and ReSC (Yang et al. 2020), as well as the transformer-based TransVG (Deng et al. 2021).

To analyze the importance of information other than the category, we report metrics of these baselines on the 'unique' and 'multiple' subsets in Table 3. The 'unique' subset contains cases where only one object matches the category, while the 'multiple' subset contains multiple confusable objects of the same category. To analyze the task difficulty, we report metrics at varying levels of depth $d$ in Table 4, as near: $0 < d \leq 15$ m, medium: $15\,\mathrm{m} < d \leq 35$ m, and far: $35\,\mathrm{m} < d \leq \infty$.
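The depth-based split just defined reduces to a simple bucketing rule over the ground-truth depth:

```python
def depth_subset(d):
    """Assign a sample to its Table 4 depth subset from its GT depth d (metres):
    near (0, 15], medium (15, 35], far (35, inf)."""
    if 0 < d <= 15:
        return "near"
    if 15 < d <= 35:
        return "medium"
    return "far"
```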
Considering that occlusion or truncation of the objects adds challenge to the task, we also report metrics at varying levels of difficulty: easy (no occlusion and truncation < 0.15), moderate (no/partial occlusion and truncation < 0.3), and hard (all others). For more convincing results, we report the average of 5 evaluations with different random seeds for CatRand and Cube R-CNN + Rand.

Quantitative Analysis and Task Difficulty

In Table 3, CatRand achieves 100% accuracy on the 'unique' subset but only about 24% on the 'multiple' subset. Cube R-CNN + Rand also performs better on the 'unique' subset than on the 'multiple' subset. If there is only one car in an image, inputting the word "car" is sufficient; however, if there are multiple cars, additional information beyond the category is necessary. The significant gap between Cube R-CNN + Best and CatRand on the 'unique' subset indicates tremendous research potential in monocular 3D object detection. Overall, while our result is close to CatRand, there is still room for improvement.

In Table 4, CatRand performs much better on the 'far' subset than on 'near' and 'medium'. Our method and the other baselines show decreasing performance as depth increases. The 'far' subset contains fewer ambiguous objects, so CatRand's random selection from the ground truth can achieve better results there, whereas the other methods rely on predicted bounding boxes. Generally, the farther objects are from the camera, the more challenging it is to accurately predict their depth and 3D extent. Cube R-CNN + Best exhibits excellent results on Acc@0.25. The accuracy gap between CatRand and our method on the 'far' subset indicates that accurately predicting the depth of target objects from a single image and natural language remains a challenge in our task. On the 'easy'-'moderate'-'hard' subsets, Cube R-CNN + Best achieves suboptimal results on Acc@0.25 but a much lower Acc@0.5, indicating that the best object detector can find occluded or truncated objects, but its localization accuracy still needs to be improved.
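All of the subset numbers discussed here are computed with the same Acc@IoU metric from the evaluation protocol: the fraction of samples whose predicted 3D box reaches the IoU threshold with its ground truth. A minimal sketch:

```python
def accuracy_at_iou(ious, threshold):
    """Acc@threshold: fraction of samples whose predicted 3D box has an
    IoU with the GT box of at least `threshold`.
    ious: list of per-sample 3D IoU values."""
    if not ious:
        return 0.0
    return sum(1 for iou in ious if iou >= threshold) / len(ious)
```

For example, Acc@0.25 and Acc@0.5 are obtained from the same per-sample IoU list by changing only the threshold.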
Our method fully fuses visual and textual features to accurately detect occluded and truncated objects, achieving better results than CatRand. Our method also outperforms all 2DVG backproj baselines by a significant margin in Tables 3-4. It is inefficient to obtain accurate 3D bounding boxes from 2D localization results by back-projection: the 2DVG methods can only predict the extent of the object in the 2D plane and lack the ability to estimate depth, resulting in inaccurate 3D localization.

Method | Type | Near/Easy (Acc@0.25) | Near/Easy (Acc@0.5) | Medium/Moderate (Acc@0.25) | Medium/Moderate (Acc@0.5) | Far/Hard (Acc@0.25) | Far/Hard (Acc@0.5)
CatRand | Two-Stage | 31.16/47.29 | 31.05/47.26 | 35.49/33.92 | 35.49/33.92 | 52.11/30.83 | 52.11/30.74
Cube R-CNN + Rand | Two-Stage | 17.40/21.12 | 11.45/11.41 | 18.01/17.85 | 8.15/8.01 | 14.91/10.56 | 6.38/5.18
Cube R-CNN + Best | Two-Stage | 67.76/59.66 | 41.45/33.05 | 60.69/60.56 | 30.35/33.45 | 34.72/46.25 | 17.01/22.52
ZSGNet + backproj | One-Stage | 24.87/21.33 | 0.59/3.35 | 16.74/13.87 | 3.71/0.63 | 2.15/7.57 | 0.07/0.84
FAOA + backproj | One-Stage | 18.03/17.51 | 0.53/3.43 | 15.64/12.18 | 3.95/1.34 | 4.86/8.83 | 0.62/0.90
ReSC + backproj | One-Stage | 33.68/27.90 | 0.59/5.71 | 24.03/19.23 | 6.15/1.97 | 4.24/14.41 | 1.25/1.02
TransVG + backproj | Tran.-based | 29.34/28.88 | 0.86/6.95 | 25.05/16.41 | 8.02/2.75 | 4.17/12.91 | 0.97/1.38
Mono3DVG-TR (Ours) | Tran.-based | 64.74/72.36 | 53.49/51.80 | 75.44/69.23 | 55.48/48.66 | 45.07/49.01 | 15.35/29.91

Table 4: Results for the 'near'-'medium'-'far' subsets and the 'easy'-'moderate'-'hard' subsets. The underline means performance exceeding our bolded results.

[Figure 6 compares Cube R-CNN + Best, ReSC backproj, TransVG backproj, and our method on example queries such as "On the right side road of the intersection, there is a lone black car that measures approximately 1.5 meters in height. It is currently driving away from me towards my north-east direction, and is situated around 20-30 meters away."]

Figure 6: Qualitative results from baseline methods and our Mono3DVG-TR. Blue, green, and red boxes denote the ground truth, prediction with IoU higher than 0.5, and prediction with IoU lower than 0.5, respectively.

Qualitative Analysis

Fig. 6 displays the 3D localization results of Cube R-CNN + Best, ReSC backproj, TransVG backproj, and our proposed method. Although the approximate range of objects can be obtained, Cube R-CNN + Best fails to provide precise bounding boxes. ReSC backproj and TransVG backproj depend on the accuracy of the 2D boxes and are unable to estimate depth, and thus cannot provide accurate 3D bounding boxes. Our method includes text-RGB and text-depth branches to make full use of the appearance and geometry information for multi-modal fusion, but there are also some failure cases. We provide more detailed analyses in the supplementary.

Ablation Studies

We conduct detailed ablation studies to validate the effectiveness of our proposed network and report the overall Acc@0.25 and Acc@0.5 on the Mono3DRefer test set. In Table 5, we report the results of a comprehensive ablation of the main components. The first row shows the results obtained by directly decoding the visual and geometry features from the CNN backbone and depth predictor. The second row shows a significant improvement with the addition of the encoder.

Grounding Decoder | Encoder V. | Encoder D. | Adapter V. | Adapter D. | Acc@0.25 | Acc@0.5
✓ | – | – | – | – | 47.31 | 24.38
✓ | ✓ | ✓ | – | – | 60.21 | 38.52
✓ | ✓ | ✓ | ✓ | – | 61.98 | 40.12
✓ | ✓ | ✓ | ✓ | ✓ | 64.36 | 44.25

Table 5: The ablation studies of the proposed components of our approach. 'V.' and 'D.' denote visual and depth.
In the third row, we only utilize the text-guided visual adapter. After adding the complete adapter, the results improve by approximately 4%-5%. We provide more detailed analyses of the ablation studies in the supplementary.

Conclusion

We introduce the novel task of Mono3DVG, which localizes 3D objects in RGB images using language descriptions. Notably, we contribute a large-scale dataset, Mono3DRefer, which is the first dataset that leverages ChatGPT to generate descriptions. We also provide a series of benchmarks to facilitate future research. Finally, we hope that Mono3DVG can be widely applied, since it does not require strict conditions such as RGB-D sensors, LiDARs, or industrial cameras.

Acknowledgments

This work was supported in part by grants from the National Science Fund for Distinguished Young Scholars (No.61825603), the National Key Research and Development Project (No.2020YFB2103900), and the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University (No.CX2023030).

References

Achlioptas, P.; Abdelreheem, A.; Xia, F.; Elhoseiny, M.; and Guibas, L. 2020. Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes. In ECCV, 422–440.
Bao, W.; Xu, B.; and Chen, Z. 2020. MonoFENet: Monocular 3D Object Detection With Feature Enhancement Networks. IEEE Transactions on Image Processing, 29: 2753–2765.
Brazil, G.; Kumar, A.; Straub, J.; Ravi, N.; Johnson, J.; and Gkioxari, G. 2023. Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild. In CVPR, 13154–13164.
Brazil, G.; and Liu, X. 2019. M3d-rpn: Monocular 3d region proposal network for object detection. In ICCV, 9287–9296.
Brazil, G.; Pons-Moll, G.; Liu, X.; and Schiele, B. 2020. Kinematic 3d object detection in monocular video. In ECCV, 135–152.
Cai, D.; Zhao, L.; Zhang, J.; Sheng, L.; and Xu, D. 2022.
3djcg: A unified framework for joint dense captioning and visual grounding on 3d point clouds. In CVPR, 16464–16473.
Chen, D. Z.; Chang, A. X.; and Nießner, M. 2020. Scanrefer: 3d object localization in rgb-d scans using natural language. In ECCV, 202–221.
Chen, D. Z.; Wu, Q.; Nießner, M.; and Chang, A. X. 2022. D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding. In ECCV, 487–505.
Chen, K.; Kovvuri, R.; and Nevatia, R. 2017. Query-guided regression network with context policy for phrase grounding. In ICCV, 824–832.
Chen, X.; Ma, L.; Chen, J.; Jie, Z.; Liu, W.; and Luo, J. 2018. Real-time referring expression comprehension by single-stage grounding network. arXiv preprint arXiv:1812.03426.
Chen, Y.; Tai, L.; Sun, K.; and Li, M. 2020. MonoPair: Monocular 3D Object Detection Using Pairwise Spatial Relationships. In CVPR, 12093–12102.
Chen, Y.-N.; Dai, H.; and Ding, Y. 2022. Pseudo-stereo for monocular 3d object detection in autonomous driving. In CVPR, 887–897.
Deng, J.; Yang, Z.; Chen, T.; Zhou, W.; and Li, H. 2021. TransVG: End-to-End Visual Grounding With Transformers. In ICCV, 1769–1779.
Ding, M.; Huo, Y.; Yi, H.; Wang, Z.; Shi, J.; Lu, Z.; and Luo, P. 2020. Learning depth-guided convolutions for monocular 3d object detection. In CVPR Workshops, 1000–1001.
Du, Y.; Fu, Z.; Liu, Q.; and Wang, Y. 2022. Visual Grounding with Transformers. In 2022 IEEE International Conference on Multimedia and Expo, 1–6.
Feng, M.; Li, Z.; Li, Q.; Zhang, L.; Zhang, X.; Zhu, G.; Zhang, H.; Wang, Y.; and Mian, A. 2021. Free-form description guided 3d visual graph network for object grounding in point cloud. In ICCV, 3722–3731.
Geiger, A.; Lenz, P.; and Urtasun, R. 2012. Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, 3354–3361.
He, D.; Zhao, Y.; Luo, J.; Hui, T.; Huang, S.; Zhang, A.; and Liu, S. 2021. Transrefer3d: Entity-and-relation aware transformer for fine-grained 3d visual grounding.
In Proceedings of the 29th ACM International Conference on Multimedia, 2344–2352.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR, 770–778.
Hong, R.; Liu, D.; Mo, X.; He, X.; and Zhang, H. 2022. Learning to Compose and Reason with Language Tree Structures for Visual Grounding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(2): 684–696.
Hu, R.; Rohrbach, M.; Andreas, J.; Darrell, T.; and Saenko, K. 2017. Modeling relationships in referential expressions with compositional modular networks. In CVPR, 1115–1124.
Huang, B.; Lian, D.; Luo, W.; and Gao, S. 2021. Look before you leap: Learning landmark features for one-stage visual grounding. In CVPR, 16888–16897.
Huang, K.-C.; Wu, T.-H.; Su, H.-T.; and Hsu, W. H. 2022a. MonoDTR: Monocular 3D Object Detection With Depth-Aware Transformer. In CVPR, 4012–4021.
Huang, S.; Chen, Y.; Jia, J.; and Wang, L. 2022b. Multi-view transformer for 3d visual grounding. In CVPR, 15524–15533.
Li, M.; and Sigal, L. 2021. Referring transformer: A one-step approach to multi-task visual grounding. In Advances in Neural Information Processing Systems, volume 34, 19652–19664.
Liao, Y.; Zhang, A.; Chen, Z.; Hui, T.; and Liu, S. 2022. Progressive Language-Customized Visual Feature Learning for One-Stage Visual Grounding. IEEE Transactions on Image Processing, 31: 4266–4277.
Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017. Focal loss for dense object detection. In ICCV, 2980–2988.
Lin, Z.; Peng, X.; Cong, P.; Hou, Y.; Zhu, X.; Yang, S.; and Ma, Y. 2023. WildRefer: 3D Object Localization in Large-scale Dynamic Scenes with Multi-modal Visual Data and Natural Language. arXiv preprint arXiv:2304.05645.
Liu, D.; Zhang, H.; Wu, F.; and Zha, Z.-J. 2019a. Learning to assemble neural module tree networks for visual grounding. In ICCV, 4673–4682.
Liu, H.; Lin, A.; Han, X.; Yang, L.; Yu, Y.; and Cui, S. 2021.
Refer-it-in-rgbd: A bottom-up approach for 3d visual grounding in rgbd images. In CVPR, 6032–6041.
Liu, X.; Wang, Z.; Shao, J.; Wang, X.; and Li, H. 2019b. Improving referring expression grounding with cross-modal attention-guided erasing. In CVPR, 1950–1959.
Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019c. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Liu, Z.; Wu, Z.; and Tóth, R. 2020. Smoke: Single-stage monocular 3d object detection via keypoint estimation. In CVPR Workshops, 996–997.
Ma, X.; Zhang, Y.; Xu, D.; Zhou, D.; Yi, S.; Li, H.; and Ouyang, W. 2021. Delving Into Localization Errors for Monocular 3D Object Detection. In CVPR, 4721–4730.
Mauceri, C.; Palmer, M.; and Heckman, C. 2019. Sun-spot: An rgb-d dataset with spatial referring expressions. In ICCV Workshops, 1883–1886.
Park, D.; Ambrus, R.; Guizilini, V.; Li, J.; and Gaidon, A. 2021. Is pseudo-lidar needed for monocular 3d object detection? In ICCV, 3142–3152.
Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, volume 30.
Qi, Y.; Wu, Q.; Anderson, P.; Wang, X.; Wang, W. Y.; Shen, C.; and Hengel, A. v. d. 2020. Reverie: Remote embodied visual referring expression in real indoor environments. In CVPR, 9982–9991.
Qin, Z.; Wang, J.; and Lu, Y. 2019. Monogrnet: A geometric reasoning network for monocular 3d object localization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 8851–8858.
Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; and Savarese, S. 2019. Generalized intersection over union: A metric and a loss for bounding box regression. In CVPR, 658–666.
Roh, J.; Desingh, K.; Farhadi, A.; and Fox, D. 2022.
Languagerefer: Spatial-language model for 3d visual grounding. In Conference on Robot Learning, 1046–1056.
Sadhu, A.; Chen, K.; and Nevatia, R. 2019. Zero-shot grounding of objects from natural language queries. In ICCV, 4694–4703.
Sun, M.; Suo, W.; Wang, P.; Zhang, Y.; and Wu, Q. 2022. A proposal-free one-stage framework for referring expression comprehension and generation via dense cross-attention. IEEE Transactions on Multimedia.
Wang, P.; Wu, Q.; Cao, J.; Shen, C.; Gao, L.; and Hengel, A. v. d. 2019. Neighbourhood watch: Referring expression comprehension via language-guided graph attention networks. In CVPR, 1960–1968.
Wang, T.; Zhu, X.; Pang, J.; and Lin, D. 2021. Fcos3d: Fully convolutional one-stage monocular 3d object detection. In ICCV, 913–922.
Yang, L.; Xu, Y.; Yuan, C.; Liu, W.; Li, B.; and Hu, W. 2022. Improving Visual Grounding with Visual-Linguistic Verification and Iterative Reasoning. In CVPR, 9499–9508.
Yang, S.; Li, G.; and Yu, Y. 2019. Dynamic graph attention for referring expression comprehension. In ICCV, 4644–4653.
Yang, S.; Li, G.; and Yu, Y. 2020. Relationship-embedded representation learning for grounding referring expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(8): 2765–2779.
Yang, Z.; Chen, T.; Wang, L.; and Luo, J. 2020. Improving one-stage visual grounding by recursive sub-query construction. In ECCV, 387–404.
Yang, Z.; Gong, B.; Wang, L.; Huang, W.; Yu, D.; and Luo, J. 2019. A fast and accurate one-stage approach to visual grounding. In ICCV, 4683–4693.
Yang, Z.; Zhang, S.; Wang, L.; and Luo, J. 2021. Sat: 2d semantics assisted training for 3d visual grounding. In ICCV, 1856–1866.
Ye, J.; Tian, J.; Yan, M.; Yang, X.; Wang, X.; Zhang, J.; He, L.; and Lin, X. 2022. Shifting More Attention to Visual Backbone: Query-modulated Refinement Networks for End-to-End Visual Grounding. In CVPR, 15502–15512.
Yu, L.; Lin, Z.; Shen, X.; Yang, J.; Lu, X.; Bansal, M.; and Berg, T. L. 2018a.
Mattnet: Modular attention network for referring expression comprehension. In CVPR, 1307–1315.
Yu, Z.; Yu, J.; Xiang, C.; Zhao, Z.; Tian, Q.; and Tao, D. 2018b. Rethinking diversified and discriminative proposal generation for visual grounding. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, 1114–1120.
Yuan, Z.; Yan, X.; Liao, Y.; Zhang, R.; Wang, S.; Li, Z.; and Cui, S. 2021. Instancerefer: Cooperative holistic understanding for visual grounding on point clouds through instance multi-level contextual referring. In ICCV, 1791–1800.
Zhan, Y.; Xiong, Z.; and Yuan, Y. 2023. RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data. IEEE Transactions on Geoscience and Remote Sensing, 61: 1–13.
Zhang, H.; Niu, Y.; and Chang, S.-F. 2018. Grounding referring expressions in images by variational context. In CVPR, 4158–4166.
Zhang, R.; Qiu, H.; Wang, T.; Guo, Z.; Xu, X.; Qiao, Y.; Gao, P.; and Li, H. 2022. MonoDETR: depth-guided transformer for monocular 3D object detection. arXiv preprint arXiv:2203.13310.
Zhang, Y.; Lu, J.; and Zhou, J. 2021. Objects are different: Flexible monocular 3d object detection. In CVPR, 3289–3298.
Zhao, L.; Cai, D.; Sheng, L.; and Xu, D. 2021. 3DVG-Transformer: Relation modeling for visual grounding on point clouds. In ICCV, 2928–2937.
Amodal Scene Analysis via Holistic Occlusion Relation Inference and Generative Mask Completion
Bowen Zhang1, Qing Liu2, Jianming Zhang2, Yilin Wang2, Liyang Liu1, Zhe Lin2, Yifan Liu1†
1Australian Institute for Machine Learning, The University of Adelaide 2Adobe Research
{b.zhang,akide.liu,yifan.liu04}@adelaide.edu.au, {qingl,jianmzha,yilwang,zlin}@adobe.com

Abstract

Amodal scene analysis entails interpreting the occlusion relationship among scene elements and inferring the possible shapes of the invisible parts. Existing methods typically frame this task as an extended instance segmentation or a pair-wise object de-occlusion problem. In this work, we propose a new framework, which comprises a Holistic Occlusion Relation Inference (HORI) module followed by an instance-level Generative Mask Completion (GMC) module. Unlike previous approaches, which rely on mask completion results for occlusion reasoning, our HORI module directly predicts an occlusion relation matrix in a single pass. This approach is much more efficient than the pair-wise de-occlusion process, and it naturally handles mutual occlusion, a common but often neglected situation. Moreover, we formulate the mask completion task as a generative process and use a diffusion-based GMC module for instance-level mask completion. This improves mask completion quality and provides multiple plausible solutions. We further introduce a large-scale amodal segmentation dataset which consists of high-quality human annotations for amodal masks and occlusion relations, including mutual occlusions. Experiments on the newly proposed dataset and two public benchmarks demonstrate the advantages of our method on both efficient occlusion reasoning and plausible amodal mask completion. Code is publicly available at https://github.com/zbwxp/Amodal-AAAI.
Introduction

Humans can naturally perceive the occlusion relationship among multiple scene elements and infer the possible shapes of the invisible parts; this ability is known as amodal perception (Nanay 2018; Mohan and Valada 2022; Zhu et al. 2017). In computer vision, amodal scene analysis has been proposed to match this human capability (Li and Malik 2016), and it can provide useful information for many real-world applications. For example, occlusion reasoning can facilitate risk assessment in AI systems, where existing methods are data-driven and may become less reliable when applied to heavily occluded objects (Zhu et al. 2019; Kortylewski et al. 2020). Amodal scene analysis can also benefit image editing tasks. By decomposing the 2D scene into layer-wise representations based on the predicted occlusion relationship and amodal shapes, users can easily move things around and generate new RGB content (Zhan et al. 2020a; Zheng et al. 2021).

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Many existing works try to solve amodal problems by extending traditional instance segmentation methods, such as Mask R-CNN (He et al. 2017) and DETR (Carion et al. 2020). In these methods, heads for amodal mask prediction are added to the instance segmentation model (Follmann et al. 2019; Qi et al. 2019; Xiao et al. 2021; Tran et al. 2022). These methods treat amodal segmentation as an object recognition task and entangle instance segmentation and amodal mask completion in a single framework. Consequently, the problem becomes overly challenging and satisfactory results can hardly be obtained. In addition, these methods focus only on amodal mask prediction and do not explicitly address occlusion reasoning, an essential part of amodal scene analysis.
They also often use mean Average Precision (mAP) as the main evaluation metric, which cannot reflect the mask quality or fidelity in many cases. Furthermore, since these methods are often trained for a set of predefined object classes, their application is strictly constrained by the training data.

Another line of work decouples amodal analysis from object recognition. Taking instance masks of visible regions as input, these methods interpret the occlusion relationship among the objects of interest and perform amodal mask completion in a separate step (Yan et al. 2019a; Ling et al. 2020; Zhan et al. 2020b; Nguyen and Todorovic 2021). These approaches focus more on amodal analysis and generally achieve better results for occlusion reasoning and mask completion. However, current approaches in this direction typically rely on a time-consuming process of pairwise object de-occlusion. Their occlusion order reasoning is based on mask completion results, making the overall performance unstable due to the difficulty of the amodal segmentation task. In addition, many prior works use a deterministic method for amodal mask completion, which is limited in its ability to capture the multiple potential and plausible shapes that may exist under occlusion.

In this work, we propose a new amodal scene analysis framework based on the second approach, where we decouple the problem from object recognition. Since many existing works (Cheng et al. 2022; Jain et al. 2022; Li et al. 2022a) on instance and panoptic segmentation tasks have already achieved impressive performance, in this work we assume instance segmentation masks are available for objects of interest in the scene.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6997

Figure 1: Overview of the proposed framework. Given an image with visible object masks, we propose the Holistic Occlusion Relation Inference (HORI) module to infer the occlusion relationship matrix for all objects in a single pass. The colored edges of the rows and columns in the matrix match the corresponding colored object mask on the left. 'O' and 'X' in the matrix indicate whether an object is occluding or occluded by another object, while 'M' indicates mutual occlusion. For instance, region A′ is occluded by B, and B′ is occluded by A. The occlusion relation matrix, along with the image and its visible masks, is then fed to our Generative Mask Completion (GMC) module to generate the amodal masks at the instance level.

As shown in Fig. 1, we first introduce a Holistic Occlusion Relation Inference (HORI) module which achieves single-pass occlusion reasoning by directly predicting an occlusion relationship matrix. We then present a Generative Mask Completion (GMC) module which performs instance-level amodal completion through a diffusion-based sampling process and enables multiple plausible outputs. More specifically, the HORI module adapts the architecture of Mask2former (Cheng et al. 2022) and takes instance masks as additional inputs. It leverages the attention mechanism of Mask2former and attends to the visible region of each instance directly to learn a dense ordering relationship. By outputting an occlusion relationship matrix, the module is much more efficient than the previous pair-wise de-occlusion methods (Yan et al. 2019a; Ling et al. 2020; Zhan et al. 2020b; Nguyen and Todorovic 2021) and naturally handles mutual occlusion, a common but often neglected situation. Then, given the instance masks and ordering relationships for the objects in the scene, we apply a GMC module to perform instance-level amodal mask completion.
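As a toy illustration of how such a single-pass matrix output can be read (the binary encoding below is our own sketch, not the paper's exact output format): with R[i, j] = 1 meaning object i occludes object j, mutual occlusion appears as R[i, j] = R[j, i] = 1.

```python
import numpy as np

# Toy occlusion relation matrix for N = 3 objects (A, B, C).
# Illustrative encoding: R[i, j] = 1  <=>  object i occludes object j.
R = np.array([
    [0, 1, 0],   # A occludes B ...
    [1, 0, 0],   # ... and B occludes A -> mutual occlusion ('M' in Fig. 1)
    [0, 0, 0],   # C is not involved in any occlusion
])

def pairwise_relations(R, names):
    """Read every pairwise relation off the matrix in one pass."""
    out = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if R[i, j] and R[j, i]:
                out.append(f"{names[i]} <-> {names[j]} (mutual)")
            elif R[i, j]:
                out.append(f"{names[i]} occludes {names[j]}")
            elif R[j, i]:
                out.append(f"{names[j]} occludes {names[i]}")
    return out

print(pairwise_relations(R, ["A", "B", "C"]))  # -> ['A <-> B (mutual)']
```

All N(N−1)/2 pair decisions are read from a single forward pass, whereas pairwise de-occlusion methods must run their network once per pair.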
The GMC module adopts a diffusion-based sampling process and is conditioned on the visible mask to infer multiple potential shapes of an amodal mask. Compared with previous deterministic methods, the GMC module also produces amodal masks with higher fidelity. In our experiments, we evaluate our model on two popular amodal benchmarks, COCOA (Zhu et al. 2017) and KINS (Qi et al. 2019), where we demonstrate that our HORI module achieves new state-of-the-art results for Ordering Accuracy (O-Acc) with highly efficient single-pass inference. Additionally, our GMC module outperforms existing methods not only on mean Intersection-over-Union (mIoU), but also on more advanced metrics that evaluate the fidelity of the predicted shapes for amodal mask completion. To facilitate a more comprehensive model evaluation for amodal scene analysis, especially for the case of mutual occlusion, we further introduce a large-scale amodal segmentation dataset, Amodal Scene in the Wild (ASW). The evaluation set of ASW, which consists of 2,000 images of diverse scenes and 14,969 high-quality amodal masks with occlusion ordering, will be released with the paper. Among the 13,240 occlusion relationships revealed in the annotation, 2,515 are mutual. We demonstrate that our proposed method achieves accurate mutual occlusion prediction both quantitatively and qualitatively on the ASW dataset.

We summarize our main contributions as follows:
• We propose a new framework to solve amodal scene analysis by two modules. The Holistic Occlusion Relation Inference (HORI) module interprets the occlusion relationship among multiple scene elements in a single pass, while the Generative Mask Completion (GMC) module predicts diverse high-fidelity amodal shapes by formulating the task as a generative sampling process.
• We investigate mutual occlusion, a common situation in real-world scenes but largely overlooked by previous methods and benchmarks.
To this end, we introduce a new dataset that consists of diverse scenes and high-quality annotations for amodal masks and occlusion relations, including 2,515 mutual occlusion cases.

Related Work

Occlusion Reasoning. A number of works have studied occlusion reasoning as a multi-view problem (Kang, Szeliski, and Chai 2001; Yamaguchi, McAllester, and Urtasun 2014; Gilroy, Jones, and Glavin 2019), while inferring the occlusion relationship from a single image is more challenging (Hoiem et al. 2007; Hsiao and Hebert 2014; Jiang et al. 2020). Earlier works explored prior- and template-based methods (Tighe, Niethammer, and Lazebnik 2014; Wu, Tenenbaum, and Kohli 2017). Another line of work uses closed contours to represent objects and the orientation of each contour pixel to describe the order relationship (Ren, Fowlkes, and Malik 2006; Wang and Yuille 2016; Lu et al. 2019). In the domain of amodal analysis, the ordering recovery method introduced in (Zhan et al. 2020b) learns the relationship between pairwise synthetic data, and it has inspired several later works, such as (Nguyen and Todorovic 2021; Yan et al. 2019a).

Amodal Segmentation and Mask Completion. Amodal segmentation aims at detecting objects and recovering their complete shapes. Many existing works inherit the network architecture from popular instance segmentation methods and apply amodal mask supervision on top during training (Zhu et al. 2017; Qi et al. 2019; Sun, Kortylewski, and Yuille 2022; Xiao et al. 2021; Mohan and Valada 2022). These methods usually apply multiple layers of convolutions and deformable convolutions to strengthen the model's ability to infer the invisible region. Although they perform well on some simple rigid objects, they are more likely to fail on objects with irregular or elongated shapes (e.g., table legs and human arms).
Another line of work focuses on amodal mask completion, where instance segmentation masks are provided as input. This approach simplifies the problem by decoupling amodal analysis from object recognition and thus can achieve better results. Besides early unsupervised contour completion methods that are constrained to toy examples (Kimia, Frankel, and Popescu 2003; Silberman et al. 2014), 3D templates and synthetic data have been used broadly (Kar et al. 2015; Ehsani, Mottaghi, and Farhadi 2018; Yan et al. 2019b). (Zhan et al. 2020b) proposes a self-supervised scene de-occlusion method by learning amodal masks from 2D synthetic occlusions, which is followed by (Nguyen and Todorovic 2021; Yan et al. 2019a), while (Ling et al. 2020) learns from synthetic data similarly but develops a variational generative framework for the task.

Diffusion Models. Diffusion probabilistic models (Sohl-Dickstein et al. 2015; Ho, Jain, and Abbeel 2020) employ a forward Markov chain to diffuse the data to noise and learn the reversal of this diffusion process. Conditional diffusion models encode additional information (e.g., semantic layout) into the generation process and largely improve generation performance, benefiting a variety of tasks including image generation (Jolicoeur-Martineau et al. 2020; Dhariwal and Nichol 2021; Vahdat, Kreis, and Kautz 2021; Rombach et al. 2022; Ho et al. 2022), image editing (Nichol et al. 2021; Saharia et al. 2022a; Kawar et al. 2022; Couairon et al. 2022; Zeng et al. 2022), super-resolution (Li et al. 2022b; Saharia et al. 2022b), etc. In this work, we use a diffusion model to complete binary masks instead of generating RGB values, which has rarely been explored.

Methods

The task of amodal segmentation involves generating the amodal masks $M_{\text{amodal}}$ for an input image $I \in \mathbb{R}^{H \times W \times 3}$ and its corresponding instance masks $M_{\text{inst}} \in \mathbb{R}^{H \times W \times N}$, where $N$ represents the number of objects present in the image.
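The tensor shapes above can be made concrete with a small sketch (toy sizes and masks of our own; the actual model consumes learned features rather than raw arrays):

```python
import numpy as np

H, W, N = 64, 64, 3  # toy image size and object count

image = np.zeros((H, W, 3), dtype=np.float32)   # I in R^{H x W x 3}
m_inst = np.zeros((H, W, N), dtype=np.uint8)    # M_inst: one visible mask per object
m_inst[10:40, 10:40, 0] = 1                     # object 0, fully visible
m_inst[40:60, 40:60, 1] = 1                     # visible part of object 1
m_inst[5:15, 45:60, 2] = 1                      # object 2, unoccluded

# Each amodal channel covers the visible pixels plus the occluded ones,
# so it is a superset of the corresponding visible channel.
m_amodal = m_inst.copy()
m_amodal[30:60, 30:60, 1] = 1   # object 1's full extent, partly hidden

assert m_amodal.shape == (H, W, N)
assert np.all(m_amodal >= m_inst)   # amodal mask contains the visible mask
```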
The instance masks provide information about the visible areas of the objects, while the amodal masks provide the complete shapes, including the occluded regions. To achieve this, our approach first infers the occlusion relationships between all the objects using holistic occlusion relation inference, which provides a holistic understanding of how each object occludes the others in the image. This occlusion relationship is then used as input for the generative mask completion module, which generates the complete amodal mask $M_{\text{amodal}}$ for each object. It should be noted that the inference of the instance masks $M_{\text{inst}}$ is typically done using instance or entity segmentation methods, which are not the focus of this paper. Instead, we focus on the generation of amodal masks, a critical task in many computer vision applications such as object tracking, scene understanding, and robotic perception.

Holistic Occlusion Relation Inference Module

To enable concurrent ordering, an efficient method is required to encode arbitrary numbers of binary masks into the model structure for further processing. As illustrated in Fig. 2, our framework first passes the image through a Resnet50 backbone and a deformable-DETR decoder to produce feature maps of different levels with rich semantic information. The binary masks $M_{\text{inst}}$ indicating the objects' visible parts are provided, and the number of masks $N$ can vary across images. We dynamically duplicate the initial token $N$ times to match the number of visible masks, and these tokens are then passed through the Occlusion Reasoning Block (ORB). Each ORB requires three inputs: the feature map $F \in \mathbb{R}^{h \times w \times C}$, where $C$ denotes the number of channels; the visible binary masks $M_{\text{inst}}$; and the tokens $T \in \mathbb{R}^{N \times C}$ to be matched with the visible masks.
These three inputs are first passed through a masked multi-head cross-attention (MMHCA) module, where $T$ serves as queries, $F$ as keys and values, and $M_{\text{inst}}$ as the attention mask. The computation is as follows, with $l$ indicating the layer index:
$$Q_l = f_Q(T_l), \quad K_l = f_K(F_l), \quad V_l = f_V(F_l)$$
$$M = \begin{cases} 0 & \text{if } M_{\text{inst}} = 1 \\ -\infty & \text{otherwise} \end{cases}$$
$$T_{l+1} = \mathrm{softmax}(M + Q_l K_l^{\top}) V_l + T_l \tag{1}$$
This masked attention gathers the information of $F_l$ within $M_{\text{inst}}$ and updates $T_l$. During inference, the first $T_0$ is generated by repeating a randomly initialized learnable token embedding $N$ times. After the first ORB block, the originally identical tokens become diverse and gradually correspond to each mask region in $M_{\text{inst}}$. We use this mechanism to encode the spatial information of $M_{\text{inst}}$ into the model. The output $T_{l+1}$ is then passed through a self-attention module and a feed-forward network (FFN), as in a regular transformer decoder.

Each ORB block produces three outputs: an occlusion relation matrix, $N$ occlusion ratio predictions, and $N$ binary mask predictions. The occlusion relation matrix is generated from the attention map of the multi-head self-attention (MHSA) process on $T$. The attention map naturally encodes the ordering relationships between all elements in $T$, and we use it to generate the occlusion relation matrix; to introduce more non-linearity, we attach an additional MLP module to the attention map. The produced occlusion relation matrix is then used for ordering recovery. The occlusion ratio and binary mask predictions of the ORB block follow a structure similar to DETR. First, the $T$ tokens are fed to a fully-connected layer that predicts the occlusion ratio of each token. Then, the output is fed to another MLP module that generates the kernel for a $1 \times 1$ convolution, which is applied to the 1/4-resolution feature map to generate a binary mask prediction representing the current token's amodal prediction.
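Eq. (1) can be sketched numerically with a single attention head (NumPy, with random matrices standing in for the learned projections $f_Q$, $f_K$, $f_V$; all sizes are toy values):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
N, P, C = 3, 16, 8            # tokens, flattened feature-map pixels (h*w), channels

T = rng.normal(size=(N, C))   # one token per visible instance (T_l)
F = rng.normal(size=(P, C))   # flattened feature map (F_l)
m_inst = rng.integers(0, 2, size=(N, P))  # visible-region mask per token
m_inst[:, 0] = 1              # keep every row non-empty so the softmax is defined

# Random single-head stand-ins for the learned projections f_Q, f_K, f_V.
W_q, W_k, W_v = (rng.normal(size=(C, C)) for _ in range(3))
Q, K, V = T @ W_q, F @ W_k, F @ W_v

# M = 0 where the pixel belongs to the instance's visible mask, -inf elsewhere.
M = np.where(m_inst == 1, 0.0, -np.inf)

attn = softmax(M + Q @ K.T, axis=-1)  # (N, P)
T_next = attn @ V + T                 # Eq. (1): residual token update

# Each token attends strictly inside its own visible region.
assert np.allclose(attn[m_inst == 0], 0.0)
assert np.allclose(attn.sum(axis=-1), 1.0)
```

The additive −∞ mask zeroes out attention weights outside each instance's visible region, which is exactly how the ORB restricts each token to "its" object.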
These outputs are supervised with their corresponding amodal masks and occlusion ratios, where nine ORB blocks are attached to feature maps of three resolutions sequentially, each ORB having its own supervision.

Figure 2: The overall structure of the Holistic Occlusion Relation Inference (HORI) module. The HORI module combines a Resnet50 backbone and a deformable DETR decoder to extract variable-resolution features from input images. The core component, the Occlusion Reasoning Block (ORB), uses feature maps as keys and values and $N$ visible instance masks for masked attention during the learning of token embeddings, which are later used for occlusion reasoning. Importantly, the HORI module's role is to incorporate visible instances and amodal masks into the attention process through the ORB. It does NOT aim to predict amodal masks but rather enhances occlusion reasoning capabilities.

Training Supervisions. The corresponding losses for the ORB block are as follows:
$$L_{\text{mask}} = L_{\text{CE}} + L_{\text{dice}}$$
$$L_{\text{matrix}} = L_{\text{CE}} + L_{\text{dice}}$$
$$L_{\text{all}} = \lambda_1 L_{\text{mask}} + \lambda_2 L_{\text{matrix}} + \lambda_3 L_{\text{ratio}} \tag{2}$$
The first loss, $L_{\text{mask}}$, is inherited from DETR for amodal mask supervision. It ensures that the generated amodal masks are accurate and match the ground truth; its target is the ground truth amodal mask. The second loss, $L_{\text{matrix}}$, supervises the occlusion relationships between objects. Its target is an $N \times N$ binary matrix indicating the occlusion relationships between the $N$ objects in the image.
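A toy rendering of Eq. (2) with the loss targets described here (the exact cross-entropy/dice variants and the λ values are placeholders, not the paper's settings):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy; pred holds probabilities in (0, 1)."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def dice_loss(pred, target, smooth=1.0):
    """Soft dice loss with additive smoothing."""
    inter = (pred * target).sum()
    return float(1 - (2 * inter + smooth) / (pred.sum() + target.sum() + smooth))

def hori_loss(mask_pred, mask_gt, mat_pred, mat_gt, ratio_pred, ratio_gt,
              lambdas=(1.0, 1.0, 1.0)):   # lambda weights are placeholders
    """L_all = l1*L_mask + l2*L_matrix + l3*L_ratio, mirroring Eq. (2).

    The N x N relation matrix is supervised with the same CE + dice losses
    as the masks; the occlusion ratios use mean squared error.
    """
    l_mask = bce(mask_pred, mask_gt) + dice_loss(mask_pred, mask_gt)
    l_matrix = bce(mat_pred, mat_gt) + dice_loss(mat_pred, mat_gt)
    l_ratio = float(np.mean((ratio_pred - ratio_gt) ** 2))
    l1, l2, l3 = lambdas
    return l1 * l_mask + l2 * l_matrix + l3 * l_ratio

gt_mask = np.zeros((8, 8)); gt_mask[2:6, 2:6] = 1
gt_mat = np.array([[0., 1.], [0., 0.]])
gt_ratio = np.array([0.3, 0.0])

perfect = hori_loss(gt_mask, gt_mask, gt_mat, gt_mat, gt_ratio, gt_ratio)
noisy = hori_loss(np.full((8, 8), 0.5), gt_mask, gt_mat, gt_mat, gt_ratio, gt_ratio)
assert perfect < 1e-3 < noisy  # perfect predictions drive the total loss to ~0
```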
This loss is designed to ensure that the model correctly infers the occlusion relationships between objects in the image; since the target can also be treated as a binary mask, we use the same loss as in $L_{\text{mask}}$. The third loss, $L_{\text{ratio}}$, supervises the occlusion ratio of each object. The occlusion ratio $r \in [0, 1)$ indicates the amount of occlusion a given object is experiencing, with 0 indicating no occlusion and values approaching 1 indicating near-complete occlusion. This loss, computed as a mean squared error, ensures that the model learns to accurately predict the occlusion ratio of each object. The weights $\lambda_1, \lambda_2, \lambda_3$ are hyperparameters that control the relative importance of each loss during training.

Note that during the generation of the amodal prediction, the binary mask $M_{\text{inst}}$ is concatenated to the feature map. This can sometimes cause instances without occlusion to converge to a trivial solution in which the visible-mask channel receives a very high weight. However, we intentionally designed the structure this way to reduce the burden on the model when handling instances without occlusion, and in experiments we found this strategy to be highly effective.

The ORB block is essential for accurate ordering recovery, with all pairs playing a crucial role. The masked attention mechanism encodes the visible parts and trains the model to interpolate the invisible parts of the amodal prediction, resulting in reasonable ordering relationships. The ORB structure facilitates the tokenization of binary masks into tokens that pair with feature maps. Supervision of the amodal predictions helps the model encode visible information accurately and interpret the invisible parts. Lastly, supervision of the order map enables the model to learn the ordering relationships among all the objects.

Handling Mutual Occlusion.
Existing datasets assume that the occlusion relationship between objects is bipartite, meaning that one object can only be the 'occluder' or the 'occludee' in one object pair. However, during our experiments we found that this assumption is often invalid: there are instances where the occlusion relationship between two objects is mutual, typically when objects are interacting with each other. Examples of mutual occlusion relationships can be seen in Fig. 3. To avoid mutual occlusion annotations, previous datasets such as COCOA (shown in the first row of the figure) split instances into multiple parts. However, this approach can result in inconsistent annotations (see Fig. 3). To construct our ASW dataset (see Sec. ), we purposefully allow mutual occlusion to keep the amodal mask annotation consistent.

Figure 3: COCOA (1st row) splits instances into multiple parts (indexed by numbers) to avoid the need for mutual occlusion annotations, resulting in inconsistent instance and amodal masks. Our ASW dataset (2nd row) addresses this by annotating each instance with its complete shape directly.

Previous methods for pairwise ordering, such as those proposed in (Zhan et al. 2020b; Nguyen and Todorovic 2021), rely on comparing the sizes of the intruding areas of the predicted amodal mask of one object with the visible mask of the other object to make their ordering predictions. These methods were designed and trained on synthetic data with pairwise bipartite ordering relationships, and it is non-trivial to generate synthetic data with mutual occlusion. As a result, these previous methods fail to adapt to our newly presented, fully annotated ASW dataset. In contrast, our HORI module (Fig.
2) can easily handle mutual occlusion prediction by simply having two output channels for the MLP in the ORB block, encoding both 'occluder' and 'occludee' status, which can be true at the same time, for each object with respect to another.

Generative Mask Completion Module

After successfully inferring the order relationships among the objects, the next step in our approach is to perform mask completion to obtain the final amodal mask predictions. Previous methods are typically trained deterministically using human-annotated or synthesized amodal masks, which is not well-suited for amodal segmentation. Though the annotations are carefully produced by human experts, they represent only one possible solution, while other plausible shapes may exist in the invisible region. For instance, consider the example shown in Fig. 3 (Row 2, 3rd image from left), where two people are standing next to each other and their arms can be at arbitrary angles. Thus, training a deterministic model using a single amodal ground truth cannot capture the stochastic nature of the problem. To overcome this challenge, we propose to formulate amodal mask completion as a generative process and use a diffusion-based Generative Mask Completion (GMC) module to achieve instance-level mask completion. More specifically, we inherit the basic structure of the latent diffusion model proposed in (Rombach et al. 2022). Our model requires the original image, the visible mask of an object, and a condition mask indicating the areas that are possibly under occlusion (which does not need to be very accurate). Using the result from the preceding HORI module, we can easily interpolate the condition mask by grouping all the occluders together. With these inputs, the diffusion model is then trained following the standard procedure as in (Rombach et al. 2022).
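The condition-mask construction described above (grouping all occluders of an object) can be sketched as follows; the function name and toy scene are our own:

```python
import numpy as np

def condition_mask(idx, visible_masks, R):
    """Union of the visible masks of all occluders of object `idx`.

    visible_masks: (N, H, W) binary; R[i, j] = 1 iff object i occludes object j.
    The result marks regions where object `idx` may extend under occlusion.
    """
    occluders = np.where(R[:, idx] == 1)[0]   # objects occluding `idx`
    if len(occluders) == 0:
        return np.zeros_like(visible_masks[idx])
    return np.clip(visible_masks[occluders].sum(axis=0), 0, 1)

# Toy scene: object 1 is partly covered by objects 0 and 2.
masks = np.zeros((3, 8, 8), dtype=np.uint8)
masks[0, 0:4, 0:4] = 1
masks[1, 2:6, 2:6] = 1
masks[2, 4:8, 4:8] = 1
R = np.array([[0, 1, 0],
              [0, 0, 0],
              [0, 1, 0]])  # 0 occludes 1, 2 occludes 1

cond = condition_mask(1, masks, R)
assert cond[1, 1] == 1 and cond[7, 7] == 1 and cond[0, 7] == 0
```

This imprecise union is all the diffusion model needs as a hint, since the condition mask "does not need to be very accurate".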
Figure 4: Visual comparison of the amodal mask completion results from different methods: (a) image, (b) ground truth, (c) de-occ, (d) ASBU, (e) ours. Though de-occ (c) and ASBU (d) achieve higher IoU in these cases, ours (e) generates sharper and more reasonable shapes.

The GMC module largely improves mask completion quality and provides multiple plausible solutions for inferring the amodal shapes in the invisible regions. Qualitative comparisons between our approach and existing deterministic approaches are shown in Fig. 4. It is important to note that, though our GMC predictions are visually better, their Intersection-over-Union (IoU) results may not necessarily be higher. For example, in the first row, the output of 'ASBU' has a slightly higher IoU than 'Ours' (82.4 vs. 80.1). However, 'Ours' looks more natural, resembling a human stretching an arm, while the 'ASBU' output is not very sensible. This phenomenon applies to the other examples as well: man-made objects, such as the paper bag, the sugar bag, and the chair, should have sharp corners and straight edges, but these cannot be captured by regular deterministic approaches, resulting in round corners and uneven edges. This motivates us to use more advanced metrics to evaluate the plausibility and fidelity of the predicted amodal shapes in the following experiments.

Experiments

Datasets. We conducted extensive experiments to demonstrate the effectiveness of our model on multiple datasets compared with various works. We evaluated our model on three datasets: COCOA, KINS, and ASW. COCOA is a subset of COCO2014 and comprises 2,500 images with 22,163 instances as the training split and 1,323 images with 12,753 instances as the evaluation split. The dataset includes general scenes, such as indoors, outdoors, portraits, and sports events, making it a comprehensive dataset for amodal segmentation. KINS is a subset of the KITTI dataset, which is a large-scale traffic dataset.
KINS includes 7,474 images with 95,311 instances as the training split and 7,517 images with 92,492 instances as the testing split. All the scenes in KINS are related to traffic, making it a suitable dataset for testing our model's effectiveness in traffic-related scenarios. ASW is our proposed Amodal Scene in the Wild dataset, which includes images collected from daily-life situations. The dataset contains 33,049 images with 316,592 annotations as the training split and 2,000 images with 14,969 annotations as the evaluation split. The evaluation split includes 13,240 occlusion relationships, of which 2,515 are mutual occlusions. This indicates that mutual occlusion is rather common in daily-life situations. More details about the ASW dataset are included in the supplementary material.

Training schedule. For our HORI module, we trained on the training split of each dataset and evaluated on the respective evaluation split. We used a global batch size of 4, and we trained for 20,000 iterations on all three datasets. Regarding our GMC diffusion model, we trained it only on the training set of ASW and applied it globally to all three datasets for mask completion. We followed the standard diffusion training schedule, including multiple periods of a cosine learning-rate schedule until convergence.

Evaluation metrics. We assessed the accuracy of ordering recovery by calculating the average pairwise accuracy among pairs with occlusion (O-Acc), a commonly used metric in this area. It should be noted that each pair (e.g., A, B) has two predictions, so there are four possible outcomes: A not adjacent to B, A occluding B, B occluding A, and A and B mutually occluding. In addition, we used mean intersection-over-union (mIoU) to evaluate the quality of the predicted amodal masks for amodal completion.
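Our reading of the O-Acc metric can be sketched as follows (the matrix encoding and helper names are ours; R[i, j] = 1 denotes "i occludes j"):

```python
import numpy as np

def pair_outcome(R, i, j):
    """The four outcomes per pair: 'none', 'i>j' (i occludes j), 'j>i', 'mutual'."""
    if R[i, j] and R[j, i]:
        return "mutual"
    if R[i, j]:
        return "i>j"
    if R[j, i]:
        return "j>i"
    return "none"

def o_acc(R_pred, R_gt):
    """Average pairwise accuracy over pairs that have occlusion in the ground truth."""
    n = R_gt.shape[0]
    hits, total = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if pair_outcome(R_gt, i, j) == "none":
                continue  # O-Acc only counts pairs with occlusion
            total += 1
            hits += pair_outcome(R_pred, i, j) == pair_outcome(R_gt, i, j)
    return hits / max(total, 1)

R_gt = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])    # A,B mutual; C isolated
R_pred = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])  # predicted one-way A>B only
assert o_acc(R_pred, R_gt) == 0.0   # the mutual pair is mispredicted as one-way
assert o_acc(R_gt, R_gt) == 1.0
```

Counting a one-way prediction as wrong on a mutual pair is what penalizes methods that cannot represent mutual occlusion on ASW.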
To assess the fidelity of the predicted shapes, we measured Fréchet Inception Distance (FID) and Kernel Inception Distance (KID). These metrics are commonly used to evaluate the similarity between the distributions of predicted and ground truth masks. Note that GMC can predict multiple amodal masks for a given visible instance mask, as shown in Fig. 6; for a fair comparison, we fix the number of predictions to one.

Main Results

Comparison of the performance on COCOA dataset. A performance comparison of our model and previous methods on the COCOA dataset is presented in Tab. 1. Our HORI module demonstrates the strongest O-Acc performance on COCOA with a score of 90.9%. In terms of mIoU, both HORI-predicted masks (86.36%) and GMC-refined masks (86.89%) perform better than previous methods. However, as noted before, a higher mIoU score may not necessarily indicate better performance: although both HORI and GMC have strong mIoU scores, HORI has the lowest fidelity among all methods. This suggests that mIoU may not be the most suitable metric for non-deterministic tasks such as amodal completion. Our GMC model exhibits the strongest fidelity toward the ground truth.

Comparison of the performance on KINS dataset. The KINS dataset solely contains traffic scenes and exhibits a strong inductive bias where larger objects are closer to the camera. Our HORI module can holistically perform ordering recovery and potentially leverage this bias better, whereas previous methods trained pairwise may not capture this information. Therefore, as shown in Tab. 2, HORI outperforms previous methods in ordering recovery by a significant margin. While our GMC module is not specifically trained on traffic-related scenes, it still achieves competitive performance in terms of mask completion quality.
However, since the mIoU scores are already high (over 94%) and amodal shapes are relatively uniform in this dataset, the ordering recovery task, rather than mask completion, is the primary factor affecting the overall performance.

Table 1: Comparison of ordering recovery and amodal completion on the COCOA dataset. Our method outperforms previous approaches on both tasks. We further evaluated the fidelity of predicted masks to the ground truth masks (smaller is better). † indicates results obtained by retraining using the officially released code.

Methods        O-Acc   mIoU    FID     KID
CSDNet         84.7    -       -       -
De-occlusion   87.10   81.35   9.391   0.0034
ASBU           90.33   84.22   -       -
ASBU†          88.00   82.17   8.816   0.0033
HORI (ours)    90.90   86.36   12.579  0.0062
GMC (ours)     -       86.89   7.204   0.0019

Table 2: Comparison of ordering recovery and amodal completion on the KINS dataset. Our O-Acc score outperforms previous methods by a large margin, demonstrating the superior ability of HORI to predict ordering in complex scenes.

Methods        O-Acc   mIoU
CSDNet         86.4    -
De-occlusion   92.50   94.76
ASBU           92.65   94.83
HORI (ours)    95.22   93.79
GMC (ours)     -       93.53

Comparison of the performance on ASW dataset. We re-implemented de-occlusion and ASBU on the ASW dataset under the same settings to obtain results for a fair comparison. However, these methods cannot predict mutual occlusion relationships, resulting in a significant performance gap compared to our proposed method (87.27% vs. 82.01% vs. 80.93%), as shown in Tab. 3. Similar to the COCOA results, both HORI-predicted masks and GMC-refined masks exhibit stronger mIoU performance than previous methods, but HORI has the poorest fidelity among all methods. By applying GMC, we obtain mask predictions that are not only good in mIoU but also more sensible. Visualization results on the ASW testing set are shown in Fig. 5. Results in Fig. 6 also show that GMC is capable of generating multiple reasonable predictions.

Discussions

Ablations on the structure of HORI.
The proposed HORI module includes multiple novel structural designs, which are ablated one by one in Tab. 4. By simply using masked attention to encode visible instance masks into the model structure, the O-Acc score reaches 85.6. The 'ins concat' operation concatenates visible instance masks with feature maps before amodal prediction. 'Matrix inference' uses the predicted occlusion relationship matrix, instead of the predicted amodal masks, to interpolate the final occlusion relationship. 'Upsample inputs' involves scaling the image's shortest edge to 1024 during inference. These operations all have a positive impact on O-Acc.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)

Figure 5: Amodal mask completion results on the ASW evaluation set: (a) image, (b) gt, (c) De-occ, (d) ASBU, (e) Ours. Compared with existing methods, our approach generates more visually plausible amodal shapes. The inferred amodal masks are more consistent with the ground truth. Particularly for man-made objects, our GMC module generates sharper edges and corners with higher quality.

Figure 6: Amodal completion results (gt, 1st generation, 2nd generation) showcasing GMC's capability of generating multiple reasonable predictions.

Table 3: Comparison of ordering recovery and amodal completion on the ASW dataset. Beyond the mIoU score, our model also achieves better FID and KID, indicating a superior ability to generate high-quality amodal masks.

Methods        O-Acc   mIoU    FID     KID
De-occlusion   80.93   88.32   4.466   0.0012
ASBU           82.01   88.84   4.438   0.0011
HORI (ours)    87.27   90.24   4.946   0.0022
GMC (ours)     -       90.53   4.046   0.0009

Efficiency on object occlusion reasoning. As we propose a novel occlusion relation matrix to infer the occlusion relationships among all instances, our method is more efficient than pairwise relation reasoning frameworks (Nguyen and Todorovic 2021; Zhan et al. 2020b), especially in complicated real-world scenarios.
Our end-to-end method requires only one forward pass to infer all the occlusion correlations among the instances in an image. In contrast, previous methods require a complicated pipeline: first segmenting all the instances, then feeding every instance pair into the relation reasoning network. Hence, their computational cost and inference time increase linearly with the number of occlusion pairs. Taking COCOA as an example, there are 1323 images with 22630 occlusion pairs. We test the inference speed on a single 3090Ti GPU for our framework and the pairwise reasoning framework (Nguyen and Todorovic 2021). We achieve an average inference speed of 5 FPS for obtaining the occlusion relationships, while ASBU (Nguyen and Todorovic 2021) achieves only 1.68 FPS.

Table 4: Ablations for the HORI module on the COCOA dataset (O-Acc reported). As different components are added to the HORI module, the performance gradually improves. MA refers to Masked Attention, IC to Ins Concat, MI to Matrix Inference, and UI to Upsample Inputs.

MA   IC   MI   UI   O-Acc
✓                   85.6
✓    ✓              87.2
✓    ✓    ✓         89.4
✓    ✓    ✓    ✓    90.9

Conclusion

In this paper, we propose a novel framework for amodal scene analysis that comprises the Holistic Occlusion Relation Inference (HORI) module and the Generative Mask Completion (GMC) module. The HORI module predicts an occlusion relationship matrix in a single pass, which largely improves inference efficiency and enables reasoning about mutual occlusion. The GMC module formulates amodal mask completion as a generative process and provides multiple high-quality, plausible solutions. Our experimental results on COCOA, KINS, and the proposed ASW benchmark demonstrate state-of-the-art performance and robustness to various occlusion scenarios. Our framework and benchmark can serve as essential baselines for future amodal scene analysis research, with potential applications in robotics, autonomous driving, and image editing.
Acknowledgments

Y. Liu acknowledges the support of start-up funding from The University of Adelaide for participation in this work. We express our gratitude to The University of Adelaide High-Performance Computing Services for providing the GPU compute resources, and to Mr. Wang Hui and Dr. Fabien Voisin for their valuable technical support. This work stands as a testament to the dedication and expertise of Dr. Yifan Liu (https://yifaninmemory.vmv.re/). We are deeply grateful to Dr. Liu for her unparalleled contribution and leadership throughout this project. Her insights and guidance have been invaluable. We wish to honour and remember Dr. Yifan Liu for her enduring impact and legacy in this field.

References

Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In European Conference on Computer Vision, 213–229. Springer.
Cheng, B.; Misra, I.; Schwing, A. G.; Kirillov, A.; and Girdhar, R. 2022. Masked-attention mask transformer for universal image segmentation. In CVPR, 1290–1299.
Couairon, G.; Verbeek, J.; Schwenk, H.; and Cord, M. 2022. Diffedit: Diffusion-based semantic image editing with mask guidance. arXiv preprint arXiv:2210.11427.
Dhariwal, P.; and Nichol, A. Q. 2021. Diffusion Models Beat GANs on Image Synthesis. In Beygelzimer, A.; Dauphin, Y.; Liang, P.; and Vaughan, J. W., eds., NeurIPS.
Ehsani, K.; Mottaghi, R.; and Farhadi, A. 2018. Segan: Segmenting and generating the invisible. In CVPR, 6144–6153.
Follmann, P.; König, R.; Härtinger, P.; Klostermann, M.; and Böttger, T. 2019. Learning to see the invisible: End-to-end trainable amodal instance segmentation. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), 1328–1336. IEEE.
Gilroy, S.; Jones, E.; and Glavin, M. 2019. Overcoming occlusion in the automotive environment—A review.
IEEE Transactions on Intelligent Transportation Systems, 22(1): 23–35.
He, K.; Gkioxari, G.; Dollár, P.; and Girshick, R. 2017. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, 2961–2969.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion Probabilistic Models. In NIPS.
Ho, J.; Saharia, C.; Chan, W.; Fleet, D. J.; Norouzi, M.; and Salimans, T. 2022. Cascaded Diffusion Models for High Fidelity Image Generation. J. Mach. Learn. Res., 23(47): 1–33.
Hoiem, D.; Stein, A. N.; Efros, A. A.; and Hebert, M. 2007. Recovering occlusion boundaries from a single image. In ICCV, 1–8. IEEE.
Hsiao, E.; and Hebert, M. 2014. Occlusion reasoning for object detection under arbitrary viewpoint. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(9): 1803–1815.
Jain, J.; Li, J.; Chiu, M.; Hassani, A.; Orlov, N.; and Shi, H. 2022. OneFormer: One Transformer to Rule Universal Image Segmentation. arXiv preprint arXiv:2211.06220.
Jiang, Z.; Liu, B.; Schulter, S.; Wang, Z.; and Chandraker, M. 2020. Peek-a-boo: Occlusion reasoning in indoor scenes with plane representations. In CVPR, 113–121.
Jolicoeur-Martineau, A.; Piché-Taillefer, R.; des Combes, R. T.; and Mitliagkas, I. 2020. Adversarial score matching and improved sampling for image generation.
Kang, S. B.; Szeliski, R.; and Chai, J. 2001. Handling occlusions in dense multi-view stereo. In CVPR, volume 1, I–I. IEEE.
Kar, A.; Tulsiani, S.; Carreira, J.; and Malik, J. 2015. Amodal completion and size constancy in natural scenes. In ICCV, 127–135.
Kawar, B.; Zada, S.; Lang, O.; Tov, O.; Chang, H.; Dekel, T.; Mosseri, I.; and Irani, M. 2022. Imagic: Text-based real image editing with diffusion models. arXiv preprint arXiv:2210.09276.
Kimia, B. B.; Frankel, I.; and Popescu, A.-M. 2003. Euler spiral for shape completion. International Journal of Computer Vision, 54(1-3): 159–182.
Kortylewski, A.; Liu, Q.; Wang, H.; Zhang, Z.; and Yuille, A. 2020.
Combining compositional models and deep networks for robust object classification under occlusion. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1333–1341.
Li, F.; Zhang, H.; Liu, S.; Zhang, L.; Ni, L. M.; Shum, H.-Y.; et al. 2022a. Mask DINO: Towards a unified transformer-based framework for object detection and segmentation. arXiv preprint arXiv:2206.02777.
Li, H.; Yang, Y.; Chang, M.; Chen, S.; Feng, H.; Xu, Z.; Li, Q.; and Chen, Y. 2022b. SRDiff: Single image super-resolution with diffusion probabilistic models. Neurocomputing, 479: 47–59.
Li, K.; and Malik, J. 2016. Amodal instance segmentation. In European Conference on Computer Vision, 677–693. Springer.
Ling, H.; Acuna, D.; Kreis, K.; Kim, S. W.; and Fidler, S. 2020. Variational amodal object completion. NeurIPS, 33: 16246–16257.
Lu, R.; Xue, F.; Zhou, M.; Ming, A.; and Zhou, Y. 2019. Occlusion-shared and feature-separated network for occlusion relationship reasoning. In ICCV, 10343–10352.
Mohan, R.; and Valada, A. 2022. Amodal panoptic segmentation. In CVPR, 21023–21032.
Nanay, B. 2018. The importance of amodal completion in everyday perception. i-Perception, 9(4): 2041669518788887.
Nguyen, K.; and Todorovic, S. 2021. A weakly supervised amodal segmenter with boundary uncertainty estimation. In ICCV, 7396–7405.
Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2021. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741.
Qi, L.; Jiang, L.; Liu, S.; Shen, X.; and Jia, J. 2019. Amodal instance segmentation with KINS dataset. In CVPR, 3014–3023.
Ren, X.; Fowlkes, C. C.; and Malik, J. 2006. Figure/ground assignment in natural images. In ECCV, 614–627. Springer.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022.
High-resolution image synthesis with latent diffusion models. In CVPR, 10684–10695.
Saharia, C.; Chan, W.; Chang, H.; Lee, C.; Ho, J.; Salimans, T.; Fleet, D.; and Norouzi, M. 2022a. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 Conference Proceedings, 1–10.
Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D. J.; and Norouzi, M. 2022b. Image super-resolution via iterative refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Silberman, N.; Shapira, L.; Gal, R.; and Kohli, P. 2014. A contour completion model for augmenting surface reconstructions. In ECCV, 488–503. Springer.
Sohl-Dickstein, J.; Weiss, E. A.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep Unsupervised Learning using Nonequilibrium Thermodynamics.
Sun, Y.; Kortylewski, A.; and Yuille, A. 2022. Amodal Segmentation Through Out-of-Task and Out-of-Distribution Generalization With a Bayesian Model. In CVPR, 1215–1224.
Tighe, J.; Niethammer, M.; and Lazebnik, S. 2014. Scene parsing with object instances and occlusion ordering. In CVPR, 3748–3755.
Tran, M.; Vo, K.; Yamazaki, K.; Fernandes, A.; Kidd, M.; and Le, N. 2022. AISFormer: Amodal Instance Segmentation with Transformer. arXiv preprint arXiv:2210.06323.
Vahdat, A.; Kreis, K.; and Kautz, J. 2021. Score-based Generative Modeling in Latent Space. In Advances in Neural Information Processing Systems.
Wang, P.; and Yuille, A. 2016. DOC: Deep occlusion estimation from a single image. In ECCV, 545–561. Springer.
Wu, J.; Tenenbaum, J. B.; and Kohli, P. 2017. Neural scene de-rendering. In CVPR, 699–707.
Xiao, Y.; Xu, Y.; Zhong, Z.; Luo, W.; Li, J.; and Gao, S. 2021. Amodal segmentation based on visible region segmentation and shape prior. In AAAI, volume 35, 2995–3003.
Yamaguchi, K.; McAllester, D.; and Urtasun, R. 2014. Efficient joint segmentation, occlusion labeling, stereo and flow estimation. In ECCV, 756–771. Springer.
Yan, X.; Wang, F.; Liu, W.; Yu, Y.; He, S.; and Pan, J. 2019a.
Visualizing the invisible: Occluded vehicle segmentation and recovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7618–7627.
Yan, X.; Wang, F.; Liu, W.; Yu, Y.; He, S.; and Pan, J. 2019b. Visualizing the invisible: Occluded vehicle segmentation and recovery. In ICCV, 7618–7627.
Zeng, Y.; Lin, Z.; Zhang, J.; Liu, Q.; Collomosse, J.; Kuen, J.; and Patel, V. M. 2022. SceneComposer: Any-Level Semantic Image Synthesis. arXiv preprint arXiv:2211.11742.
Zhan, X.; Pan, X.; Dai, B.; Liu, Z.; Lin, D.; and Loy, C. C. 2020a. Self-supervised scene de-occlusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3784–3792.
Zhan, X.; Pan, X.; Dai, B.; Liu, Z.; Lin, D.; and Loy, C. C. 2020b. Self-supervised scene de-occlusion. In CVPR, 3784–3792.
Zheng, C.; Dao, D.-S.; Song, G.; Cham, T.-J.; and Cai, J. 2021. Visiting the Invisible: Layer-by-Layer Completed Scene Decomposition. IJCV, 129(12): 3195–3215.
Zhu, H.; Tang, P.; Park, J.; Park, S.; and Yuille, A. 2019. Robustness of object recognition under extreme occlusion in humans and computational models. arXiv preprint arXiv:1905.04598.
Zhu, Y.; Tian, Y.; Metaxas, D.; and Dollár, P. 2017. Semantic amodal segmentation. In CVPR, 1464–1472.
High-Quality Real-Time Rendering Using Subpixel Sampling Reconstruction

Boyu Zhang1,3, Hongliang Yuan2,3*
1University of California, Los Angeles
2Xiaomi Corporation
3Tencent AI Lab
bobo8496@ucla.edu, hercules.yuan@gmail.com

Abstract

Generating high-quality, realistic rendered images for real-time applications generally requires tracing a few samples per pixel (spp) and using deep learning-based approaches to denoise the resulting low-spp images. Existing denoising methods necessitate a substantial time expenditure when rendering at high resolutions due to the burdens of physically-based sampling and network inference. In this paper, we propose a novel Monte Carlo sampling strategy to accelerate the sampling process and a corresponding denoiser, subpixel sampling reconstruction (SSR), to obtain high-quality images. Extensive experiments demonstrate that our method significantly outperforms previous approaches in denoising quality and reduces overall time costs, enabling real-time rendering at 2K resolution.

Introduction

Rendering realistic images of virtual worlds is a key objective in many computer vision and graphics tasks (Huo and Yoon 2021; Xu et al. 2022; Huang et al. 2023; Li et al. 2023; Li, Ngo, and Nagahara 2023), with applications in animation production (Dahlberg, Adler, and Newlin 2019), VR/AR world generation (Overbeck et al. 2018), virtual dataset synthesis (Ge et al. 2022), etc. One widely used technique for this purpose is Monte Carlo (MC) sampling (Seila 1982), which is highly versatile but typically requires a large number of samples to achieve accurate results. Despite relentless advancements in computational capabilities, the time required for realistic rendering remains a practical constraint, with high-quality images often taking hours to generate. Using a low samples-per-pixel (spp) count can speed up this process but leads to visually distracting noise.
To mitigate this issue, post-processing techniques known as MC denoising have been developed; they normally have lower time costs than physically-based renderers and are widely used in modern game engines (Chaitanya et al. 2017; NVIDIA 2021; Xiao et al. 2020).

*Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Most existing MC denoising methods (Edelsten, Jukarainen, and Patney 2019; Xiao et al. 2020; Chaitanya et al. 2017; Işık et al. 2021; Meng et al. 2020; Hasselgren et al. 2020; Fan et al. 2021) employ deep learning-based approaches to remove noise from images generated with more than 1-spp. While (Chaitanya et al. 2017; Meng et al. 2020; Fan et al. 2021; Thomas et al. 2022) attempt to accelerate the overall process by working with low-sample data, they have yet to achieve real-time frame rates at high resolutions, as even 1-spp sampling remains time-consuming. Other approaches (Edelsten, Jukarainen, and Patney 2019; Xiao et al. 2020; Işık et al. 2021) focus on designing more efficient post-processing modules in image space to handle noisy images, but they tend to produce aliased pixels on low-sample images. Additionally, the complex network structures of these works impose heavy burdens on inference time.

To achieve real-time performance for the generation of high-resolution, realistic images, we introduce a novel MC sampling strategy, subpixel sampling, designed to curtail the temporal demands of physically-based rendering. Complementing this, we propose a denoising method, subpixel sampling reconstruction (SSR), tailored to the subpixel sampling strategy.

Subpixel sampling. The subpixel sampling strategy generates images with less than 1-spp.
To achieve this, we divide each frame at the target resolution into consecutive, non-overlapping tiles of size 2 × 2 and then compute only one ray-traced pixel per tile (we refer to this as 1/4-spp). This strategy allows us to use these reliable samples to interpolate the missing pixels with the GBuffers (OpenGL 1998) at the target resolution. We developed a Vulkan-based (Sellers and Kessenich 2016) hybrid ray tracer to export datasets. By utilizing subpixel sampling, the sampling cost is reduced to roughly one-third.

Subpixel sampling reconstruction. Our reconstruction contains two parts: a temporal feature accumulator and a reconstruction network. The former warps previous frames to align with the current frame at the target resolution and accumulates subpixel samples and GBuffers from the previous frame based on a temporal accumulation factor, computed from the correlation between the current and previous frames; this effectively expands the perception field of pixels. Once subpixel samples are collected, we move on to the second component, our reconstruction network: a multi-scale U-Net (Ronneberger, Fischer, and Brox 2015) with skip connections, which enables us to reconstruct the desired high-resolution image.

The key points of our contribution can be summarized as follows:
• We propose a novel Monte Carlo sampling strategy, termed subpixel sampling, which significantly curtails the sampling time required for physically-based rendering to one-third.
• We introduce a denoising network, SSR, to reconstruct high-quality image sequences at real-time frame rates from rendering outcomes produced with the subpixel sampling strategy.
• Our model yields superior results compared to existing state-of-the-art approaches and achieves real-time reconstruction at 2K resolution with 130 FPS.
• A realistic synthesised dataset is built with our subpixel sampling ray tracer. We will release the dataset and code for research purposes.

Related Work

Monte Carlo Denoising

Monte Carlo (MC) denoising techniques are extensively applied in the realm of rendering realistic images. Traditional best-performing MC denoisers were mainly based on local neighborhood regression models (Zwicker et al. 2015), including zero-order regression (Rousselle, Knaus, and Zwicker 2012; Delbracio et al. 2014; Li, Wu, and Chuang 2012; Kalantari, Bako, and Sen 2015; Rousselle, Manzi, and Zwicker 2013; Moon et al. 2013), first-order regression (Bauszat, Eisemann, and Magnor 2011; Bitterli et al. 2016; Moon, Carr, and Yoon 2014), and even higher-order regression models (Moon et al. 2016). These filtering-based methods use auxiliary feature buffers to guide the construction of image-space filters. Most of the above methods run in offline rendering.

To increase the effective sample count, real-time denoisers leverage temporal accumulation across frames to amortize supersampling (Yang et al. 2009), i.e., temporal anti-aliasing (TAA). The previous frame is reprojected according to the motion vector and blended with the current frame using a temporal accumulation factor, which can be constant (Schied et al. 2017; Mara et al. 2017; Meng et al. 2020) or vary (Schied, Peters, and Dachsbacher 2018) across frames. A fixed temporal accumulation factor inevitably leads to ghosting and temporal lag; by setting the parameters adaptively, the temporal filter can rapidly adapt to temporal variations, efficiently responding to abrupt frame-to-frame changes. Yang et al. (Yang, Liu, and Salvi 2020) survey recent TAA techniques and provide an in-depth analysis of the image-quality trade-offs of these heuristics. Koskela et al. (Koskela et al.
2019) propose a blockwise regression for real-time path-tracing reconstruction and also perform accumulation to improve temporal stability.

Deep Learning-Based Denoising

Recently, in the wake of advancements in powerful modern GPUs, numerous studies have leveraged CNNs to build MC denoisers. (Bako et al. 2017; Vogels et al. 2018) use deep CNNs to estimate local per-pixel filtering kernels used to compute each denoised pixel from its neighbors. The layer-based denoiser (Munkberg and Hasselgren 2020) designs a hierarchical kernel prediction for multi-resolution denoising and reconstruction. Owing to the substantial burden of predicting large filtering kernels, these methods mostly target offline rendering. Other methods (Kuznetsov, Khademi Kalantari, and Ramamoorthi 2018; Xu et al. 2019; Gharbi et al. 2019; Yu et al. 2021; Back et al. 2022) target denoising at more than 4 spp. To reduce the overhead of kernel prediction, Fan et al. (Fan et al. 2021) predict an encoding of the kernel map, followed by a high-efficiency decoder that constructs the complete kernel map. Chaitanya et al. (Chaitanya et al. 2017) propose a recurrent connection based on U-Net (Ronneberger, Fischer, and Brox 2015) to improve temporal stability. Hasselgren et al. (Hasselgren et al. 2020) introduce a neural spatio-temporal joint optimization of adaptive sampling and denoising with a recurrent feedback loop. Hofmann et al. (Hofmann et al. 2021) also utilize the neural temporal adaptive sampling architecture to denoise rendering results with participating media. Xiao et al. (Xiao et al. 2020) present a neural supersampling method for TAA, similar to deep-learned supersampling (DLSS) (Edelsten, Jukarainen, and Patney 2019). Meng et al. (Meng et al. 2020) denoise 1-spp noisy input images with a neural bilateral grid at real-time frame rates. (Işık et al. 2021; Thomas et al. 2022) adopt spatial kernels, guided by features, to filter the noisy image.
(Firmino, Frisvad, and Jensen 2023) design adaptive sampling for optimizing MC denoising. (Balint et al. 2023) employ pyramid filters to recover renderings. Compared with these denoising frameworks targeting more than 1-spp, our approach is tailored to operate efficiently at 1/4-spp, cutting the rendering time expenditure.

Method

Subpixel Sampling

To mitigate the substantial computational cost associated with rendering when the samples-per-pixel (spp) count is 1 or more, we devise subpixel sampling, which empowers us to produce images at 1/4-spp.

1/4-spp pattern. Our strategy divides each frame into non-overlapping 2 × 2 tiles and applies MC ray tracing to solve the rendering equation (Kajiya 1986) for one pixel in each tile. We term this the 1/4-spp pattern. To maintain data balance, we shift the sampling position so that each pixel is sampled once over the four consecutive frames from time step t to t + 3, as illustrated in Fig. 1a.

GBuffers. We leverage the rasterization pipeline to efficiently produce high-resolution GBuffers. In detail, we dump the 1/4-spp RGB color c ∈ R^3 (Fig. 2a) and features f ∈ R^15. These features comprise four 3D vectors (albedo, normal, shadow, and transparent) and three scalars (depth, metallic, and roughness), as shown in Figs. 2b to 2h.

Mask map. As the sampled subpixels undergo ray tracing at the target resolution, their RGB values are reliable for that resolution. In this context, we generate an additional mask map to denote reliable pixels. This map assigns a value of 1 to sampled positions and 0 to unsampled positions, as shown in Fig. 1c. It acts as a confidence map and is expected to guide our temporal feature accumulator to predict reasonable weights. To this end, we incorporate the mask map into the GBuffers.

Figure 1: (a) Sampling of a 2 × 2 tile over four consecutive frames; sampled pixels are drawn in color and unsampled pixels in black (with value 0). (b) Pixels of a sub-patch example in a rendered image. (c) The corresponding mask map of patch (b), where white pixels have a value of 1 and black pixels a value of 0.

Demodulation. Similar to the previous approach (Chaitanya et al. 2017), we utilize the albedo (or base color) to demodulate the RGB image. The resulting untextured irradiance x is then transformed into log space using the natural logarithm, i.e., ln(1 + x). Our method differs in that once the untextured irradiance has been reconstructed, we re-modulate it using the accumulated albedo predicted by our temporal feature accumulator.

Subpixel Sampling Reconstruction

We design subpixel sampling reconstruction (SSR) to recover temporally stable video from 1/4-spp image sequences at real-time frame rates. Fig. 3 shows the detailed architecture of SSR, which comprises two modules: the temporal feature accumulator (in green) and the reconstruction network (in blue).

Temporal Feature Accumulator

The temporal feature accumulator consists of two neural networks, each with two convolution layers with a spatial support of 3 × 3 pixels. One network receives all features and the mask of the current frame as input and outputs a reference embedding. The other computes embeddings for the current features f_t and the warped previous features f_{t-1}. These two embeddings are pixel-wise multiplied with the reference embedding and passed through softmax(·) to obtain the blending factors α and β (α + β = 1) for the current and previous features, respectively. All features in Fig. 2 are accumulated through this process.

Figure 2: Dumped buffers from our ray tracer: (a) RGB color, (b) albedo, (c) normal, (d) transparent, (e) shadow, (f) depth, (g) metallic, (h) roughness.
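A per-pixel sketch of this blending, under our reading of the description above: toy scalar embeddings stand in for the conv-net outputs, and all names and values are illustrative rather than taken from the authors' code.

```python
import math

def blend_weights(ref_embed, cur_embed, prev_embed):
    """Softmax over the dot products of the current and warped-previous
    embeddings with the reference embedding; the two weights sum to 1."""
    s_cur = sum(r * c for r, c in zip(ref_embed, cur_embed))
    s_prev = sum(r * p for r, p in zip(ref_embed, prev_embed))
    m = max(s_cur, s_prev)                       # numerically stable softmax
    e_cur, e_prev = math.exp(s_cur - m), math.exp(s_prev - m)
    w_cur = e_cur / (e_cur + e_prev)
    return w_cur, 1.0 - w_cur

def accumulate(cur, warped_prev, w_cur, w_prev):
    """Blend a current feature with its warped history (cf. Eq. (1))."""
    return w_cur * cur + w_prev * warped_prev

# Disocclusion case: the warped history disagrees with the reference
# embedding, so its weight shrinks and stale history contributes little.
w_cur, w_prev = blend_weights([1.0, 0.0], [0.9, 0.1], [-0.5, 0.2])
assert abs(w_cur + w_prev - 1.0) < 1e-9
assert w_cur > w_prev
assert accumulate(1.0, 0.0, w_cur, w_prev) > 0.5   # dominated by the current sample
```

Because the weights come from a softmax, they adapt per pixel: coherent history is reused, while disoccluded or ghosting regions fall back to the current frame's sparse samples.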
Take untextured irradiance as an example: as illustrated in Fig. 4, we accumulate the untextured irradiance e over frames as

e^a_t = α W(e^a_{t-1}) + β e_t,   (1)

where e^a_t is the irradiance accumulated up to frame t and e_t is the irradiance of frame t. For the first frame, we set e^a_{t-1} to e_t. W(·) is a warping operator that reprojects the previous frame onto the current one using motion vectors.

The temporal feature accumulator plays a vital role in producing temporally stable results. Firstly, it can detect and remove disoccluded pixels and ghosting artifacts that traditional motion vectors cannot handle accurately. Secondly, since our input images are sparsely sampled, this module helps gather more finely sampled pixels across frames.

Reconstruction Network

Our reconstruction network extends U-Net (Ronneberger, Fischer, and Brox 2015) with skip connections (Mao, Shen, and Yang 2016). In contrast to other U-Net-based denoising methods (Chaitanya et al. 2017), our approach predicts two coarse-scale images at the first two decoder stages rather than predicting dense features there. This modification not only leads to faster inference but also results in higher-quality images with superior quantitative metrics (see the network ablation). To generate a high-quality image for the current frame, we concatenate the current and accumulated features and feed them into our reconstruction network. Additionally, we input the warped denoised image from the previous frame, which enhances the temporal stability of the image sequence (see the network ablation). The reconstruction network consists of three encoder layers that produce features at three scales. Retaining temporal feedback at multiple scales is also a crucial step. To achieve this, we downsample the warped denoised image from the previous frame using a pooling layer with stride two and pass it to each encoding stage.
At the decoder stage, we concatenate the features and the warped denoised image at the same scale and feed them into a tile of two convolution layers. At the first two decoder stages, an image in RGB space is produced, upsampled, and passed to the next decoder stage. This multi-scale feedback gives our network a sufficiently large temporal receptive field and lets it efficiently generate high-quality, temporally stable results.

Figure 3: Subpixel sampling reconstruction consists of two modules: the temporal feature accumulator (left) and the reconstruction network (right). The numbers under each network layer represent the output channels at the corresponding layer. The operator ⊙ denotes the dot product between features, ⓒ indicates concatenation, and ⊕ and ⊗ represent element-wise addition and multiplication, respectively. Note that all frames shown here are demodulated by albedo.

Figure 4: Illustration of accumulating untextured irradiance with our temporal feature accumulator. Irradiance warped by the motion vector exhibits ghosting artifacts (red arrow), which can be removed by assigning lower weights in these areas.

Loss

We use the symmetric mean absolute percentage error (SMAPE):

ℓ(r, d) = 1/(3N) · Σ_{p=1}^{N} Σ_{c=1}^{3} |d_{p,c} − r_{p,c}| / (|d_{p,c}| + |r_{p,c}| + ε),   (2)

where N is the number of pixels, ε is a tiny perturbation, and d and r are the denoised frame and the corresponding reference frame, respectively.
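A minimal plain-Python sketch of Eq. (2) and the weighted combination of Eq. (3); in practice the paper trains in vectorized PyTorch, the ε value here is an assumption (the paper only calls it a tiny perturbation), and the λ values are the ones reported in the Implementation Details.

```python
def smape(reference, denoised, eps=1e-2):
    """Eq. (2): mean of |d - r| / (|d| + |r| + eps) over N pixels x 3 channels.
    `reference` and `denoised` are lists of (r, g, b) tuples; eps is assumed."""
    n = len(reference)
    total = 0.0
    for r_px, d_px in zip(reference, denoised):
        for r_c, d_c in zip(r_px, d_px):
            total += abs(d_c - r_c) / (abs(d_c) + abs(r_c) + eps)
    return total / (3 * n)

def total_loss(l_s, l_t, l_e, l_wt, l_a,
               lambdas=(0.7, 0.1, 0.2, 0.4, 5.0)):
    """Eq. (3): weighted sum of the spatial, temporal, edge, warped-temporal,
    and albedo terms, with the lambda weights from the Implementation Details."""
    w_s, w_t, w_e, w_w, w_a = lambdas
    return w_s * l_s + w_t * l_t + w_e * l_e + w_w * l_wt + w_a * l_a

ref = [(0.5, 0.5, 0.5), (1.0, 0.0, 0.2)]
assert smape(ref, ref) == 0.0                      # identical frames give zero loss
assert 0.0 < smape(ref, [(0.4, 0.5, 0.5), (1.0, 0.1, 0.2)]) < 1.0
assert abs(total_loss(1, 1, 1, 1, 1) - 6.4) < 1e-9  # sum of the lambda weights
```

The symmetric denominator keeps the loss bounded even for near-zero irradiance values, which is why SMAPE is a common choice for HDR denoising targets.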
Our loss combines two parts. The first is computed on a sequence of 5 consecutive frames and includes the spatial loss ℓ_s = ℓ(r, d); the temporal loss ℓ_t = ℓ(Δr, Δd), where Δ is the temporal gradient computed between two consecutive frames; and the relative edge loss ℓ_e = L1(∇d / (r + ε), ∇r / (r + ε)), where the gradient ∇ is computed using the High Frequency Error Norm (HFEN), an image comparison metric from medical imaging (Ravishankar and Bresler 2011). As suggested by Chaitanya et al. (Chaitanya et al. 2017), we assign higher weights to these three losses (ℓ_s, ℓ_t, and ℓ_e) for frames later in the sequence, to amplify temporal gradients; for a training sequence of 5 images, we use (0.05, 0.25, 0.5, 0.75, 1). The second part is the warped temporal loss ℓ_wt = ℓ(ω_r, ω_d), where ω_r = r_4 − W(r_3) and W(·) is a warping operator that reprojects the previous frame onto the current one. We also include an albedo loss ℓ_a = ℓ(a_acc, a_r), where a_acc is the accumulated albedo computed by our temporal feature accumulator. We compute the albedo loss only on the last frame and the warped temporal loss only on the last two frames. The overall loss is a weighted combination of these terms:

ℓ = λ_s ℓ_s + λ_t ℓ_t + λ_e ℓ_e + λ_w ℓ_wt + λ_a ℓ_a.   (3)

Figure 5: An overview of our generated dataset: (a) BistroInterior, (b) BistroExterior, (c) Sponza, (d) Diningroom, (e) Angel, (f) Warmroom.

Experiments

Datasets and Metrics

Datasets. As subpixel sampling is a novel strategy, the field currently lacks dedicated datasets designed for this purpose. We utilized a Vulkan-based (Sellers and Kessenich 2016) hybrid ray tracer to generate our subpixel sampling dataset. To optimize our approach for application in games and advanced virtual rendering, we conducted distinct training sessions for each 3D scene instead of collective training.

Figure 6: Visual results on the scenes BistroInterior, BistroExterior, and Sponza: (a) Ours, (b) Input, (c) AFS, (d) ANF, (e) NSRR, (f) RAE, (g) SSR, (h) Ref.
This approach is in concordance with the paradigm utilized in NVIDIA DLSS (NVIDIA 2021). Since our input images are generated at 1/4-spp, a large number of images is needed to train a robust denoiser. Training was carried out across six scenes; see Fig. 5. The BistroInterior and BistroExterior (Lumberyard 2017) scenes contain more than one million triangles and feature transparency, diffuse, specular, and soft-shadow effects. All scenes contain 100 to 1000 frames at a resolution of 1024 × 2048. We also rendered a validation set of 10 frames and a test set of 50 frames for each scene. The ground-truth image is rendered at 32768-spp for reference.

Metrics. All compared approaches are evaluated with three image quality metrics: peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) (Wang et al. 2004), and root mean squared error (RMSE). Higher PSNR and SSIM imply superior performance, while lower RMSE is better.

Implementation Details

We randomly selected 5 consecutive frames for training each scene. To maximize GPU utilization, we also randomly cropped the inputs, including the noisy image and auxiliary features, to a resolution of 256 × 256. The kernel size is 3 × 3 at all layers. The weight coefficients for ℓ_s, ℓ_t, ℓ_e, ℓ_w, and ℓ_a are 0.7, 0.1, 0.2, 0.4, and 5.0, respectively. We conducted all experiments using the PyTorch framework (Paszke et al. 2019) on 8 NVIDIA Tesla A100 GPUs. The Adam optimizer (Kingma and Ba 2015) with β1 = 0.9, β2 = 0.999, and ε = 1e−8 is used, with the initial learning rate set to 1e−4. The learning rate is halved at one-third and two-thirds of the total number of iterations. We set the batch size to 8 and trained our model for 200 epochs; each scene required approximately 9 hours of training time.

We compare our method to several cutting-edge Monte Carlo denoising and reconstruction techniques, including the fastest-running method RAE (Chaitanya et al. 2017), ANF (Işık et al.
2021), which achieves the best denoising performance on images above 1-spp; the offline method AFS (Yu et al. 2021); and the super-resolution approach NSRR (Xiao et al. 2020). Notably, while NSRR is primarily designed for super-resolution, it can be adapted to the sparse sampling task, as detailed in the supplementary materials. Its practical applications in AAA game rendering further establish NSRR as a pertinent benchmark in our evaluation. We replicated all the methods using their default settings.

Time Analysis

Rendering To showcase the efficiency of our subpixel sampling, we test the rendering time of each stage on an NVIDIA RTX 3090 GPU at a resolution of 1024×2048, see Tab. 2. The subpixel sampling strategy significantly reduced the sampling time from 12.79 ms to 4.35 ms, i.e., to about 34% of the original cost. With subpixel sampling, the average total rendering time is 6.92 ms, compared to 15.36 ms without it, an approximate 2.2× improvement.

[Table 2: Average rendering time of six scenes. R denotes the rasterization stage, and T&S stands for rendering the transparent and shadow stages. w-SS denotes rendering with our subpixel sampling (1/4-spp), while w/o-SS means without it (1-spp).]
Strategy   R (ms)   T&S (ms)   Sampling (ms)   Overall (ms)
w-SS       0.85     1.72       4.35            6.92
w/o-SS     0.85     1.72       12.79           15.36

[Table 1: Quantitative comparison results on six scenes (PSNR / SSIM). We compare four baseline methods with our SSR method; the best result in each column is achieved by SSR.]
Method   BistroInterior   BistroExterior   Sponza         Diningroom     Warmroom       Angel          Average
AFS      22.86 / .7650    24.60 / .8071    25.50 / .8119  25.41 / .8637  29.55 / .8021  22.06 / .8601  25.00 / .8183
ANF      23.20 / .7583    22.14 / .7201    23.98 / .8219  22.23 / .7226  30.91 / .8774  25.86 / .8813  24.72 / .7969
NSRR     23.87 / .8104    25.54 / .8538    24.93 / .8113  27.17 / .8843  36.40 / .9740  34.94 / .9804  28.81 / .8857
RAE      24.03 / .8351    24.11 / .8006    27.74 / .8898  29.87 / .9007  34.32 / .9675  29.18 / .9161  28.21 / .8849
SSR      28.99 / .8945    29.97 / .9121    31.79 / .9410  32.48 / .9375  37.34 / .9799  38.04 / .9876  33.10 / .9421

Reconstruction We also evaluated the inference time of SSR against the other methods. The comparison was carried out using the same frame for each scene; Tab. 3 reports the average inference time over all six scenes at 1024×2048 and 1024×1080 resolution on an NVIDIA Tesla A100. Our SSR reaches about 130 frames per second (FPS) at 2K resolution and 220 FPS at 1080p. At both resolutions, SSR provides a frame-rate improvement of approximately 37% over the previously fastest method. Tab. 2 and Tab. 3 show the times of rendering and reconstruction respectively, while their combined cost is displayed in Fig. 7.

[Table 3: Comparison results of inference time.]
Method   1024×2048 Time (ms) / FPS   1024×1080 Time (ms) / FPS
AFS      41.8 / 24                   25.6 / 39
ANF      33.0 / 30                   19.8 / 51
NSRR     34.5 / 29                   21.7 / 46
RAE      10.4 / 96                   6.22 / 160
SSR      7.6 / 131                   4.56 / 220

[Figure 7: Speed-quality comparison on the 1-spp and 1/4-spp scenes at resolution 1024×2048, where higher PSNR and FPS (top right) is most desirable.]

Quantitative Evaluation Quantitative comparison results are shown in Tab. 1. Average results are reported on the 50-frame test sets of the six scenes. Our method delivers the best performance in all scenes. We only show PSNR and SSIM due to space limitations; please refer to our supplementary material for more comparison results.
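The FPS figures in Tab. 3 follow directly from per-frame latency; as a quick sanity check (plain Python, numbers taken from the table):

```python
def fps(ms_per_frame):
    """Frames per second from per-frame inference time in milliseconds."""
    return 1000.0 / ms_per_frame

# SSR at 2K: 7.6 ms per frame -> ~131 FPS; RAE (previously fastest): 10.4 ms
assert int(fps(7.6)) == 131
assert int(fps(10.4)) == 96
# ~37% frame-rate improvement of SSR over RAE
assert abs(10.4 / 7.6 - 1.37) < 0.01
```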
To show the improvements in speed and quality achieved by our method, we also generated the six scenes at 1-spp instead of using the subpixel sampling strategy, keeping all other parameters identical to the 1/4-spp scenes. We measured the entire generation time, including both rendering and reconstruction, and report the speed and quality comparisons in Fig. 7. SSR performs best on both the 1-spp and 1/4-spp datasets, with only a small decline in quality as the sampling rate decreases (FPS rises from 43 to 68 while PSNR drops from 34.40 to 33.10). In contrast, previous methods aimed at datasets of 1-spp and above exhibit dramatic performance degradation.

Qualitative Evaluation Here we provide qualitative evaluations of our model; however, we encourage the reader to watch the supplementary videos for a more comprehensive understanding. Fig. 6 compares reconstructed images in several scenes visually. We include all comparison results for the six scenes in the supplementary material. Our method outperforms all other methods by a considerable margin across all scenes. Previous state-of-the-art methods, designed for denoising renderings at more than 1-spp, are not as effective at denoising renderings at 1/4-spp. AFS was originally designed for offline rendering, and transformer models (Vaswani et al. 2017; Liu et al. 2021) require significant memory for training and inference.

[Figure 8: (a) and (b) show the reconstruction results without and with employing the shadow feature (c), respectively; (d) is the 32768-spp reference image. The shadow feature assists SSR in pinpointing more precise contours.]

[Table 4: Ablation study. We evaluate different modules; PSNR and SSIM are averaged over six scenes.]
Method        RN   TFA   WP   PSNR / SSIM
Base          ✓               23.55 / .8152
Base+TFA      ✓    ✓          32.13 / .9245
Base+TFA+WP   ✓    ✓    ✓     33.10 / .9421
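For reference, the PSNR and RMSE values reported throughout follow the standard definitions; a minimal NumPy sketch (SSIM is omitted, since it requires a windowed implementation):

```python
import numpy as np

def mse(img, ref):
    return float(np.mean((img - ref) ** 2))

def psnr(img, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    return 10.0 * np.log10(peak ** 2 / mse(img, ref))

def rmse(img, ref):
    """Root mean squared error; lower is better."""
    return mse(img, ref) ** 0.5

# A uniform error of 0.1 on a [0, 1] image gives RMSE 0.1 and PSNR 20 dB.
img, ref = np.zeros((4, 4)), np.full((4, 4), 0.1)
assert abs(rmse(img, ref) - 0.1) < 1e-12
assert abs(psnr(img, ref) - 20.0) < 1e-9
```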
RAE, NSRR, and ANF feed previous and current features directly into the network, which leads to blurred and aliased details. In contrast, SSR computes, for each pixel, the correlation between the normal and depth features of the current and previous frames, and thus has the capacity to generate high-quality details.

Ablation Study

G-buffer ablation We incorporated certain features from the G-buffers that have not been utilized in existing Monte Carlo denoising methods, and conducted corresponding ablation experiments to investigate their effectiveness.

Shadow. Our training images are generated by subpixel sampling. As a result of 1/4-spp light occlusion, more than three-quarters of the pixels remain at a value of zero, which motivates us to identify reliable pixels to train our model. Thus, we take the shadow feature as an additional input. Our feature accumulator collects the noisy shadows from the current frame and combines them with the history shadow. This accumulated shadow information aids in detecting continuous shadow edges and improves temporal stability, as shown in Fig. 8.

Transparent. We also append the transparent feature to SSR for training, but we do not accumulate it before feeding it into the reconstruction network. This is because the transparent feature is sparse and carries little noise across a whole image, as shown in Fig. 2d. Accumulating the transparent feature yields only a minor improvement while incurring an additional time cost, so we feed it into the reconstruction network directly. By utilizing transparency, SSR acquires the ability to produce transparent objects, such as clear glass cups, as illustrated in Fig. 9.

[Figure 9: (a) and (b) show the reconstruction results without and with employing the transparent feature (c), respectively; (d) is the 32768-spp reference image. SSR can capture the information from the transparent features and restore clear transparent objects.]

Additionally, in cases where a scene does not contain any transparent objects, such as the BistroExterior scene, we include the transparent feature with a value of zero. Without using shadow and transparency, SSR only achieves a PSNR of 27.67 on BistroInterior, while employing shadow brings an improvement to 28.22. By including both shadow and transparency, our model produces a higher PSNR of 28.99.

Network ablation We verified the effectiveness of the different modules in our approach, including the temporal feature accumulator (TFA) and the warped previous output (WP), as shown in Tab. 4. Results are averaged across the six scenes. TFA brings a noticeable enhancement, exhibiting a 36.4% increase in PSNR and a 13.4% improvement in SSIM. Similarly, the application of WP shows its effectiveness in the third row.

Conclusion

We presented a novel Monte Carlo subpixel sampling strategy to facilitate accelerated rendering. Additionally, we proposed a denoising network, subpixel sampling reconstruction (SSR), to effectively restore high-quality image sequences in real time from the subpixel sampling pattern. Experiments substantiated that our approach yields superior denoised results in comparison to prevailing cutting-edge methods while also achieving real-time performance at 2K resolution.

Limitations and Future Work. While our method offers a real-time pattern for reconstructing high-quality images, there is still potential for improving the inference time: 16-bit precision TensorRT can be leveraged to expedite processing. We intend to deploy SSR within our game engine in the coming stages. Furthermore, we explored integrating the Swin Transformer (Liu et al. 2021) into the initial layer of our reconstruction network, which improved PSNR by approximately 0.23 at the cost of a 1.1 ms increase in inference time. Striking the right balance between speed and quality remains a pivotal objective in our forthcoming research.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7012 References Back, J.; Hua, B.-S.; Hachisuka, T.; and Moon, B. 2022. Self-Supervised Post-Correction for Monte Carlo Denoising. In Proceedings of ACM SIGGRAPH Conference (SIGGRAPH). Bako, S.; Vogels, T.; Mcwilliams, B.; Meyer, M.; Nov´aK, J.; Harvill, A.; Sen, P.; Derose, T.; and Rousselle, F. 2017. Kernel-Predicting Convolutional Networks for Denoising Monte Carlo Renderings. ACM Transactions on Graphics (TOG), 36(4). Balint, M.; Wolski, K.; Myszkowski, K.; Seidel, H.-P.; and Mantiuk, R. 2023. Neural Partitioning Pyramids for Denoising Monte Carlo Renderings. In Proceedings of ACM SIGGRAPH Conference (SIGGRAPH). Bauszat, P.; Eisemann, M.; and Magnor, M. 2011. Guided Image Filtering for Interactive High-Quality Global Illumination. In Proceedings of the Eurographics Conference on Rendering (EG), 1361–1368. Goslar, DEU: Eurographics Association. Bitterli, B.; Rousselle, F.; Moon, B.; Iglesias-Guiti´an, J. A.; Adler, D.; Mitchell, K.; Jarosz, W.; and Nov´ak, J. 2016. Nonlinearly Weighted First-Order Regression for Denoising Monte Carlo Renderings. Computer Graphics Forum (CGF), 35(4): 107–117. Chaitanya, C.; Kaplanyan, A.; Schied, C.; Salvi, M.; Lefohn, A.; Nowrouzezahrai, D.; and Aila, T. 2017. Interactive reconstruction of Monte Carlo image sequences using a recurrent denoising autoencoder. ACM Transactions on Graphics (TOG), 36: 1–12. Dahlberg, H.; Adler, D.; and Newlin, J. 2019. MachineLearning Denoising in Feature Film Production. In ACM SIGGRAPH 2019 Talks. New York, NY, USA: Association for Computing Machinery. ISBN 9781450363174. Delbracio, M.; Mus´e, P.; Chauvier, J.; Phelps, N.; and Morel, J.-M. 2014. Boosting Monte Carlo Rendering by Ray Histogram Fusion. ACM Transactions on Graphics (TOG), 33. Edelsten, A.; Jukarainen, P.; and Patney, A. 2019. Truly next-gen: Adding deep learning to games and graphics. In NVIDIA Sponsored Sessions (Game Developers Conference). 
Fan, H.; Wang, R.; Huo, Y.; and Bao, H. 2021. Real-time Monte Carlo Denoising with Weight Sharing Kernel Prediction Network. Computer Graphics Forum (CGF), 40(4): 15– 27. Firmino, A.; Frisvad, J. R.; and Jensen, H. W. 2023. Denoising-Aware Adaptive Sampling for Monte Carlo Ray Tracing. In Proceedings of ACM SIGGRAPH Conference (SIGGRAPH). Ge, Y.; Behl, H.; Xu, J.; Gunasekar, S.; Joshi, N.; Song, Y.; Wang, X.; Itti, L.; and Vineet, V. 2022. Neural-Sim: Learning to Generate Training Data with NeRF. In Proceedings of the European Conference on Computer Vision (ECCV), 477–493. Springer. Gharbi, M.; Li, T.-M.; Aittala, M.; Lehtinen, J.; and Durand, F. 2019. Sample-Based Monte Carlo Denoising Using a Kernel-Splatting Network. ACM Transactions on Graphics (TOG). Hasselgren, J.; Munkberg, J.; Salvi, M.; Patney, A.; and Lefohn, A. 2020. Neural temporal adaptive sampling and denoising. In Computer Graphics Forum (CGF), volume 39, 147–155. Hofmann, N.; Hasselgren, J.; Clarberg, P.; and Munkberg, J. 2021. Interactive Path Tracing and Reconstruction of Sparse Volumes. volume 4. New York, NY, USA: Association for Computing Machinery. Huang, X.; Zhang, Y.; Ni, B.; Li, T.; Chen, K.; and Zhang, W. 2023. Boosting point clouds rendering via radiance mapping. In Proceedings of the AAAI conference on artificial intelligence (AAAI). Huo, Y.; and Yoon, S.-e. 2021. A survey on deep learningbased Monte Carlo denoising. Computational visual media, 7: 169–185. Is¸ık, M.; Mullia, K.; Fisher, M.; Eisenmann, J.; and Gharbi, M. 2021. Interactive Monte Carlo Denoising Using Affinity of Neural Features. ACM Transactions on Graphics (TOG). Kajiya, J. T. 1986. The Rendering Equation. In ACM on Computer Graphics and Interactive Techniques (CGIT). Kalantari, N. K.; Bako, S.; and Sen, P. 2015. A Machine Learning Approach for Filtering Monte Carlo Noise. ACM Transactions on Graphics (TOG), 34(4). Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. 
In Bengio, Y.; and LeCun, Y., eds., International Conference on Learning Representations (ICLR). Koskela, M.; Immonen, K.; M¨akitalo, M.; Foi, A.; Viitanen, T.; J¨a¨askel¨ainen, P.; Kultala, H.; and Takala, J. 2019. Blockwise Multi-Order Feature Regression for Real-Time PathTracing Reconstruction. volume 38. New York, NY, USA: Association for Computing Machinery. Kuznetsov, A.; Khademi Kalantari, N.; and Ramamoorthi, R. 2018. Deep Adaptive Sampling for Low Sample Count Rendering. Computer Graphics Forum (CGF), 37: 35–44. Li, C.; Ngo, T. T.; and Nagahara, H. 2023. Inverse Rendering of Translucent Objects using Physical and Neural Renderers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12510–12520. Li, T.-M.; Wu, Y.-T.; and Chuang, Y.-Y. 2012. SURE-Based Optimization for Adaptive Sampling and Reconstruction. ACM Transactions on Graphics (TOG), 31(6). Li, Z.; Wang, Q.; Cole, F.; Tucker, R.; and Snavely, N. 2023. Dynibar: Neural dynamic image-based rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4273–4284. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. CoRR, abs/2103.14030. Lumberyard, A. 2017. Amazon Lumberyard Bistro, Open Research Content Archive (ORCA). http://developer.nvidia.com/orca/amazon-lumberyardbistro. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7013 Mao, X.-J.; Shen, C.; and Yang, Y.-B. 2016. Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections. In Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS). Mara, M.; McGuire, M.; Bitterli, B.; and Jarosz, W. 2017. An Efficient Denoising Algorithm for Global Illumination. In Proceedings of High Performance Graphics (HPG). Meng, X.; Zheng, Q.; Varshney, A.; Singh, G.; and Zwicker, M. 
2020. Real-time Monte Carlo Denoising with the Neural Bilateral Grid. In Dachsbacher, C.; and Pharr, M., eds., Eurographics Symposium on Rendering - DL-only Track. The Eurographics Association. ISBN 978-3-03868-117-5. Moon, B.; Carr, N.; and Yoon, S.-E. 2014. Adaptive Rendering Based on Weighted Local Regression. ACM Transactions on Graphics (TOG), 33(5). Moon, B.; Jun, J. Y.; Lee, J.; Kim, K.; Hachisuka, T.; and Yoon, S.-E. 2013. Robust Image Denoising Using a Virtual Flash Image for Monte Carlo Ray Tracing. Computer Graphics Forum (CGF), 32(1): 139–151. Moon, B.; McDonagh, S.; Mitchell, K.; and Gross, M. 2016. Adaptive Polynomial Rendering. ACM Transactions on Graphics (TOG), 35(4). Munkberg, J.; and Hasselgren, J. 2020. Neural Denoising with Layer Embeddings. Computer Graphics Forum (CGF), 39(4): 1–12. NVIDIA. 2021. Deep Learning Super Sampling (DLSS) Technology. https://www.nvidia.com/enus/geforce/technologies/dlss/. OpenGL. 1998. Deferred Shading. https://learnopengl.com /Advanced-Lighting/Deferred-Shading. Overbeck, R. S.; Erickson, D.; Evangelakos, D.; and Debevec, P. 2018. The making of welcome to light fields VR. In Proceedings of ACM SIGGRAPH 2018 Conference (SIGGRAPH). Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; Desmaison, A.; K¨opf, A.; Yang, E. Z.; DeVito, Z.; Raison, M.; Tejani, A.; Chilamkurthy, S.; Steiner, B.; Fang, L.; Bai, J.; and Chintala, S. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. CoRR, abs/1912.01703. Ravishankar, S.; and Bresler, Y. 2011. MR Image Reconstruction From Highly Undersampled k-Space Data by Dictionary Learning. IEEE Transactions on Medical Imaging (TMI), 30(5): 1028–1041. Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. CoRR, abs/1505.04597. Rousselle, F.; Knaus, C.; and Zwicker, M. 2012. Adaptive Rendering with Non-Local Means Filtering. 
ACM Transactions on Graphics (TOG), 31(6). Rousselle, F.; Manzi, M.; and Zwicker, M. 2013. Robust Denoising using Feature and Color Information. Computer Graphics Forum (CGF). Schied, C.; Kaplanyan, A.; Wyman, C.; Patney, A.; Chaitanya, C. R. A.; Burgess, J.; Liu, S.; Dachsbacher, C.; Lefohn, A.; and Salvi, M. 2017. Spatiotemporal VarianceGuided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination. In Proceedings of High Performance Graphics (HPG). Schied, C.; Peters, C.; and Dachsbacher, C. 2018. Gradient Estimation for Real-Time Adaptive Temporal Filtering. Proc. ACM Comput. Graph. Interact. Tech., 1(2). Seila, A. F. 1982. Simulation and the Monte Carlo method. Sellers, G.; and Kessenich, J. 2016. Vulkan programming guide: The official guide to learning vulkan. AddisonWesley Professional. Thomas, M. M.; Liktor, G.; Peters, C.; Kim, S.; Vaidyanathan, K.; and Forbes, A. G. 2022. Temporally stable real-time joint neural denoising and supersampling. In ACM on Computer Graphics and Interactive Techniques (CGIT). Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS). Vogels, T.; Rousselle, F.; Mcwilliams, B.; R¨othlin, G.; Harvill, A.; Adler, D.; Meyer, M.; and Nov´ak, J. 2018. Denoising with Kernel Prediction and Asymmetric Loss Functions. ACM Transactions on Graphics (TOG), 37(4). Wang, Z.; Bovik, A.; Sheikh, H.; and Simoncelli, E. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing (TIP), 13(4): 600–612. Xiao, L.; Nouri, S.; Chapman, M.; Fix, A.; Lanman, D.; and Kaplanyan, A. 2020. Neural Supersampling for Real-Time Rendering. ACM Transactions on Graphics (TOG), 39(4). Xu, B.; Zhang, J.; Wang, R.; Xu, K.; Yang, Y.-L.; Li, C.; and Tang, R. 2019. 
Adversarial Monte Carlo Denoising with Conditioned Auxiliary Feature Modulation. ACM Transactions on Graphics (TOG), 38(6). Xu, J.-P.; Zuo, C.; Zhang, F.-L.; and Wang, M. 2022. Rendering-aware hdr environment map prediction from a single image. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Yang, L.; Liu, S.; and Salvi, M. 2020. A Survey of Temporal Antialiasing Techniques. Computer Graphics Forum (CGF), 39(2): 607–621. Yang, L.; Nehab, D.; Sander, P. V.; Sitthi-amorn, P.; Lawrence, J.; and Hoppe, H. 2009. Amortized Supersampling. ACM Transactions on Graphics (TOG), 28(5): 1–12. Yu, J.; Nie, Y.; Long, C.; Xu, W.; Zhang, Q.; and Li, G. 2021. Monte Carlo Denoising via Auxiliary Feature Guided SelfAttention. ACM Transactions on Graphics (TOG). Zwicker, M.; Jarosz, W.; Lehtinen, J.; Moon, B.; Ramamoorthi, R.; Rousselle, F.; Sen, P.; Soler, C.; and Yoon, S.-E. 2015. Recent Advances in Adaptive Sampling and Reconstruction for Monte Carlo Rendering. Computer Graphics Forum (CGF), 34(2): 667–681. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7014
SHaRPose: Sparse High-Resolution Representation for Human Pose Estimation

Xiaoqi An1,2, Lin Zhao1,2*, Chen Gong1, Nannan Wang2, Di Wang2, Jian Yang1*
1 PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology
2 State Key Laboratory of Integrated Services Networks, Xidian University
{xiaoqi.an, linzhao, chen.gong, csjyang}@njust.edu.cn, {nnwang, wangdi}@xidian.edu.cn

Abstract

High-resolution representation is essential for achieving good performance in human pose estimation models. To obtain such features, existing works utilize high-resolution input images or fine-grained image tokens. However, this dense high-resolution representation brings a significant computational burden. In this paper, we address the following question: "Only sparse human keypoint locations are detected for human pose estimation; is it really necessary to describe the whole image in a dense, high-resolution manner?" Based on dynamic transformer models, we propose a framework that only uses Sparse High-resolution Representations for human Pose estimation (SHaRPose). In detail, SHaRPose consists of two stages. At the coarse stage, the relations between image regions and keypoints are dynamically mined while a coarse estimation is generated. Then, a quality predictor is applied to decide whether the coarse estimation results should be refined. At the fine stage, SHaRPose builds sparse high-resolution representations only on the regions related to the keypoints and provides refined high-precision human pose estimations. Extensive experiments demonstrate the outstanding performance of the proposed method.
Specifically, compared to the state-of-the-art method ViTPose, our model SHaRPose-Base achieves 77.4 AP (+0.5 AP) on the COCO validation set and 76.7 AP (+0.5 AP) on the COCO test-dev set, and infers 1.4× faster than ViTPose-Base. Code is available at https://github.com/AnxQ/sharpose.

Introduction

2D human pose estimation (HPE) is a fundamental task in the field of computer vision. Its main goal is to locate a set of anatomical keypoints that correspond to the human body's joints and limbs in an image. HPE has been well studied (Guo 2020; Zhang et al. 2021; Chang et al. 2020) and forms the foundation for many downstream tasks such as action recognition (Kawai, Yoshida, and Liu 2022; Chao et al. 2017; Xu et al. 2022a; Duan et al. 2022) and abnormal behavior detection (Tang et al. 2021; Qiu et al. 2022). Due to its potential applications in the real world, HPE remains an active area of research (Niemirepo, Viitanen, and Vanne 2020; Yu et al. 2021; Zhang, Zhu, and Ye 2019; Li et al. 2022, 2021c; Jiang et al. 2023).

*Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

[Figure 1: A brief view of SHaRPose. The coarse stage selects the image parts that contribute to the keypoints, and the fine stage builds high-resolution representations upon them.]

In recent years, great progress has been made in human pose estimation (Toshev and Szegedy 2014; Newell, Yang, and Deng 2016; Xiao, Wu, and Wei 2018; Wang et al. 2021a; Chen et al. 2017). Most leading methods output heatmaps and then take the peaks of the heatmaps as the keypoint positions. Hence, similar to other dense prediction tasks such as semantic segmentation (Ke et al. 2022; Guo et al. 2022) and depth estimation (Shen et al. 2021; Luo et al.
2020), it’s necessary to obtain high-resolution representation to ensure the inference accuracy (Badrinarayanan, Kendall, and Cipolla 2017; Lin et al. 2017; Chen et al. 2018). For example, Stacked Hourglass (Newell, Yang, and Deng 2016) achieves high-quality image representation by stacking a symmetric encoding-decoding structure, while HRNet (Wang et al. 2021a) utilizes multiple parallel convolution branches to preserve high-resolution feature representations. ViTPose (Xu et al. 2022b) achieves notable performance using an 8 × 8 fine-grained patch splitting setting. However, it is observed that increasing the resolution of feature representation (i.e., the number of image tokens for transformer-based methods) results in an intensive computational burden. As shown in Table.1, this is particularly significant in Transformer-based methods because the complexity of Transformers is quadratic to the number of tokens (Khan et al. 2022). In this paper, we aim to improve the efficiency of transformer-based models for human pose estimation, and we think about the following question: Since we The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 691 Model Input size FPS FLOPS AP HRNet 256×192 194 15.8 75.1 384×288 152(-21%) 35.5(+125%) 76.3 ViTPose 256×192 340 18.6 75.8 384×288 143(-58%) 44.1(+136%) 76.9 Table 1: Computational cost for high-resolution input Layer #1 Feature Heatmap Decoder Layer #N Feature … #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 Figure 2: Decoder’s response of ViTPose. Each heatmap is generated by feeding the output of each intermediate Transformer layer to the heatmap decoder. only want the keypoint locations, which are sparse relative to the entire image, do we truly need high-resolution feature representation for all contents? Based on this thinking, we conduct experiments using ViTPose (Xu et al. 2022b), as shown in Fig.2. Each heatmap is obtained by redirecting the intermediate layer’s output to the decoder. 
These heatmaps provide an intuitive visualization of the image regions that the decoder is focusing on. We can observe that only in the first few layers does the output of the Transformer cause a global response in the decoder; in the subsequent layers, the decoder's response is clearly concentrated on the sparse local areas containing keypoints. This means that during inference, a large portion of the image tokens, such as those only containing background information, do not provide effective context. Thus, focusing solely on keypoint-related image regions may be sufficient to achieve accurate estimation results, and this can significantly reduce computation costs.

Inspired by this idea, we propose a method that only needs Sparse High-resolution Representation to do human Pose estimation, named SHaRPose. The framework is based on pure transformers and makes use of the correlation-mining capability of Transformers (Bach et al. 2015; Chefer, Gur, and Wolf 2021; Liang et al. 2022) to identify the image regions that are significant for keypoint detection. An overview of our framework is illustrated in Fig. 1. The inference process is divided into two stages. The initial stage processes coarse-grained patches as inputs, leading to diminished computational expense owing to the decreased token count. Then a quality predictor module judges the roughly predicted pose. If the module yields a high confidence score, the network inference terminates. If not, the input image is split into finer-grained patches and fed into the fine stage to get refined results. To avoid spending computation on redundant patches, only the image patches with strong correlations to keypoints are split into finer-grained patches, while patches with weaker correlations are retained in the coarse-grained state. Hence, the proposed approach avoids the heavy computational load caused by processing unnecessary high-resolution image patches.
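The two-stage flow with early exit described above can be summarized in a few lines of control logic. This is a schematic sketch only: the stage modules, the patch selector, and the threshold `tau` are illustrative placeholders, not the paper's exact interfaces.

```python
def sharpose_infer(image, coarse, predict_quality, select_patches, fine, tau=0.5):
    """Run the coarse stage; refine only keypoint-related patches if needed."""
    heatmaps, attn = coarse(image)            # coarse estimation + attention maps
    if predict_quality(heatmaps) >= tau:      # confident enough: early exit
        return heatmaps
    related = select_patches(attn)            # patches strongly tied to keypoints
    return fine(image, heatmaps, related)     # sparse high-resolution refinement
```

The early exit means easy samples pay only the coarse-stage cost, while hard samples additionally pay for fine-grained processing of the selected patches.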
Overall, the main contributions of this paper are as follows:

• SHaRPose proposes to use sparse high-resolution representations; to the best of our knowledge, this is the first time a dynamic optimization strategy has been introduced into the pose estimation task.
• SHaRPose greatly improves the efficiency of pure transformer models on the pose estimation task. We reduce GFLOPs by 25% and achieve a 1.4× higher FPS compared to ViTPose-Base.
• SHaRPose shows competitive performance with much higher throughput than existing state-of-the-art models on the MS COCO dataset. We achieve 77.4 AP (+0.5 AP) on the COCO validation set and 76.7 AP (+0.5 AP) on the COCO test-dev set compared to ViTPose-Base.

Related Works

Vision Transformer for Pose Estimation

Vision Transformers (ViT) crop and map 2D images or image feature representations into token tensors to model long-range dependencies. With the overwhelming performance of Transformers in various computer vision tasks (Dosovitskiy et al. 2021; Liu et al. 2021; Xia et al. 2022; Liu et al. 2022; Carion et al. 2020; Wang et al. 2021b), some works have introduced Transformers to pose estimation, because the capability of ViT to capture long-range dependencies is of notable value in modeling the structure of the human body (Ramakrishna et al. 2014; Tompson et al. 2014; Wei et al. 2016). PRTR (Li et al. 2021a) proposes a cascaded transformer structure to achieve end-to-end keypoint coordinate regression. TransPose (Yang et al. 2021) utilizes a transformer encoder to process feature maps and produce interpretable heatmap responses. HRFormer (Yuan et al. 2021) adopts the structure of HRNet (Wang et al. 2021a) and inserts attention blocks into the branches to achieve larger receptive fields. On the other hand, TFPose (Mao et al. 2021) uses a set of keypoint queries to regress coordinates from transformers, while TokenPose (Li et al. 2021b) proposes token-based heatmap representations to model the body parts explicitly.
ViTPose (Xu et al. 2022b) explores the feasibility of using a plain transformer as the backbone network for pose estimation and achieves excellent prediction accuracy with the help of masked image modeling (He et al. 2022) and multi-dataset training. In general, compared with pure CNN-based methods (Wang et al. 2021a; Newell, Yang, and Deng 2016; Toshev and Szegedy 2014), transformer-based models are more likely to achieve good results with the help of global attention. However, this also leads to a larger computation cost. In this paper, a sparse high-resolution representation mechanism is explored, which saves considerable computation while retaining the global modeling advantages and high precision of transformer-based methods.

[Figure 3: The overall structure of SHaRPose. The attention maps yielded by the transformer in the coarse stage are used for selecting keypoint-related patches in the fine stage. Only these keypoint-related patches are processed at finer granularity in the fine stage. The parameters of the Transformer blocks and the keypoint decoder are shared between the two stages.]

Dynamic Vision Transformer

To mitigate the computational cost caused by global feature interaction in Transformers, many methods have been proposed, among which dynamic optimization is one of the major categories. The simplest approaches reduce the number of input tokens by pruning them: DynamicViT (Rao et al. 2021) uses a lightweight detector to determine which tokens to keep, ToMe (Bolya et al.
2023) fuses similar tokens based on their similarity, and EViT (Liang et al. 2022) evaluates the importance of image blocks based on class attention. On the other hand, some methods gradually adjust the input granularity from a coarse level. QuadTree (Tang et al. 2022) obtains attention from different scales at each layer and performs cross-scale weighting to capture comprehensive representations, thus reducing the number of tokens involved in attention. DVT (Wang and Torresani 2022) uses adaptive patch sizes to reduce the calculation on easy samples. CF-ViT (Chen et al. 2023) designs two stages using different granularity patches and reorganizes the specific fine-grained tokens with the coarse-grained tokens to refine the prediction in the second stage. The above-mentioned works have achieved good trade-offs between accuracy and efficiency. However, the success of these methods has mainly been demonstrated on the classification task. In this work, we adapt dynamic transformers to the pose estimation task. Because retaining global context is helpful for human pose estimation, and discarding tokens may cause the model to produce biased predictions, we follow the second category of dynamic transformer methods, designing the framework in a coarse-to-fine manner.

Method

Overall Structure

As depicted in Fig.3, SHaRPose contains two stages with a shared keypoint decoder. The coarse stage consists of a Transformer and a quality predictor module. The fine stage includes a keypoint-related patch selection module and a Transformer sharing the same parameters as the one in the coarse stage. In this section, we will present our framework stage-by-stage and give a detailed introduction to each module.

Coarse-Inference Stage

The goal of this stage is to capture relations between image regions and keypoints, as well as give a coarse inferred heatmap and decide whether the heatmap is accurate enough.
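As a concrete illustration, the two-stage control flow just described can be sketched as follows. This is our own minimal sketch, not the authors' implementation: `coarse_stage`, `fine_stage`, and the stub quality scores are hypothetical stand-ins for the real modules, and only the threshold value follows the paper's experiments.

```python
# Minimal sketch (not the authors' code) of SHaRPose's two-stage inference:
# run the coarse stage, and enter the fine stage only when the predicted
# pose quality falls below a threshold. All functions are hypothetical stubs.

Q_THRES = 0.95  # quality threshold used in the paper's experiments

def coarse_stage(image):
    """Coarse tokens -> heatmaps, attention maps, and a quality score."""
    quality = 0.99 if image["easy"] else 0.60   # stand-in for MLP(q_K)
    return {"heatmaps": "coarse", "attention": "attn", "quality": quality}

def fine_stage(image, attention):
    """Re-patchify only keypoint-related regions at high resolution."""
    return {"heatmaps": "fine"}

def sharpose_infer(image):
    out = coarse_stage(image)
    if out["quality"] >= Q_THRES:   # easy sample: early exit after coarse stage
        return out["heatmaps"]
    return fine_stage(image, out["attention"])["heatmaps"]

print(sharpose_infer({"easy": True}), sharpose_infer({"easy": False}))
# -> coarse fine
```

The design point this sketch captures is that easy samples never pay the cost of the fine stage, which is what lifts the average throughput.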
To accomplish this objective, a set of keypoint tokens and a quality token are introduced as the queries.

Token Input

Denote the input image as $X \in \mathbb{R}^{H \times W \times C}$. Given a patch size $p_h \times p_w$ and an input scaling factor $s_c$, we compose the input token sequence as follows:
$$X^c = \mathrm{Resample}(X) \in \mathbb{R}^{H s_c \times W s_c \times C},$$
$$X^c_0 = \left[ v^1_0; v^2_0; \dots; v^{N_c}_0; k^1_0; k^2_0; \dots; k^M_0; q_0 \right], \quad (1)$$
where $v^i_0$ is a visual token obtained from the resampled image $X^c$. First, $X^c$ is split into $N_c = \frac{H s_c}{p_h} \times \frac{W s_c}{p_w}$ patches, then a linear projection $f: p \rightarrow v \in \mathbb{R}^D$ is applied to get the corresponding $v^i_0$. $M$ is the number of keypoints, and $\{k^i_0 \in \mathbb{R}^D\}_{i=1}^{M}$ are keypoint tokens from $M$ learnable embeddings, representing the queries of keypoints. $q_0 \in \mathbb{R}^D$ is a quality token, also from a learnable embedding, which will be used to estimate the quality of the predicted human pose.

Transformer Encoder

After the composition of the input token sequence, a $K$-layer transformer $\mathcal{V}$ (Dosovitskiy et al. 2021) is applied to obtain the output sequence:
$$X^c_K = \mathcal{V}(X^c_0) = \left[ v^1_K; v^2_K; \dots; v^{N_c}_K; k^1_K; k^2_K; \dots; k^M_K; q_K \right]. \quad (2)$$

Keypoint Decoder

The keypoint decoder builds heatmaps from the $M$ output keypoint tokens $\{k^i_K\}_{i=1}^{M}$ through a unified multiple linear projection module:
$$H^c_i = \mathcal{D}(k^i_K) \in \mathbb{R}^{\hat{H} \times \hat{W}}, \quad (3)$$

Figure 4: Compose the input of the fine stage. The attention scores $\hat{A}_{h;k}$ between visual tokens and keypoint tokens are just part of the full attention matrix $A_{h;k}$. Only high-score image patches (blue) are further split into fine-grained patches. An MLP is applied to incorporate the coarse-stage information into the fine stage.

where $\hat{H} \times \hat{W}$ is the heatmap size; $\hat{H}$ and $\hat{W}$ are both $1/4$ of the original image sizes $H$ and $W$. Through the decoder, the coarse-predicted heatmaps $\{H^c_i\}_{i=1}^{M}$ are acquired.

Quality Predictor

Inspired by (Zhao et al. 2021; Fang et al.
2017), we use a learnable quality embedding $q_0$ to obtain the quality of the predicted keypoints by gathering information from both visual tokens and keypoint tokens. Then, the quality predictor module produces the quality score of the prediction from the information fused into the quality token:
$$Q = \mathrm{MLP}(q_K). \quad (4)$$
With the estimated quality score $Q$, we set a threshold $Q_{thres}$. Only if $Q < Q_{thres}$ will the image be split into finer-grained patches and processed in the fine stage. This allows the model to dynamically distinguish hard and easy samples. Therefore, the number of images that go through the fine stage can be reduced, which further increases the throughput.

Fine-Inference Stage

In this stage, the model generates sparse high-resolution representations and makes high-precision pose predictions by leveraging the attention obtained in the coarse stage.

Keypoint-Related Patch Recognition

To decide which image regions need high-resolution feature representations, a relevance score between image patches and keypoints is required. As shown in Fig.4, consider a slice of the attention matrix in a layer of the Transformer $\mathcal{V}$, which is defined as follows:
$$\hat{A}_{h;k} = \left[ \hat{a}^{N_c+1}_{h;k}; \hat{a}^{N_c+2}_{h;k}; \dots; \hat{a}^{N_c+M}_{h;k} \right] \in \mathbb{R}^{M \times N_c}, \quad (5)$$
where $\hat{a}^{N_c+i}_{h;k}$ is the attention score vector between the keypoint token $k^i_{k-1}$ and all coarse-grained image tokens $\{v^i_{k-1}\}_{i=1}^{N_c}$ at head $h$. This vector reflects the interaction between the keypoint token and the visual tokens. Following (Chen et al. 2023), we use an exponential moving average (EMA) to combine attentions from each Transformer layer:
$$A_{h;k} = \beta \cdot A_{h;k-1} + (1-\beta) \cdot \hat{A}_{h;k}, \quad (6)$$
in which we set $\beta = 0.99$.
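The layer-wise accumulation of Eqs. (5)-(6) can be sketched with NumPy as below. This is our own illustration for one attention head: the layer count, the softmax-normalized random attention, and the zero initialization of the accumulator are assumptions, not taken from the paper; only β = 0.99 and the tensor shapes follow the text.

```python
import numpy as np

# Sketch of Eq. (6): EMA of the keypoint-to-visual attention slice
# \hat{A}_{h;k} (shape M x N_c) across the K Transformer layers of one head.
beta = 0.99
K, M, Nc = 12, 17, 48          # layers, keypoints, coarse visual tokens (assumed)
rng = np.random.default_rng(0)

A = np.zeros((M, Nc))          # accumulator A_{h;0} (initialization assumed)
for k in range(1, K + 1):
    logits = rng.standard_normal((M, Nc))
    A_hat = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # rows sum to 1
    A = beta * A + (1 - beta) * A_hat                                   # Eq. (6)

print(A.shape)   # -> (17, 48): one accumulated score vector per keypoint
```

With β close to 1, early layers retain influence on the accumulated map, so the final score reflects keypoint-to-patch interactions across the whole network rather than only the last layer.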
Then we take the accumulated attention matrix of the last layer $A_{h;K}$ and mix the attention vectors of each keypoint and the different heads in the following form to get the final visual token correlation score:
$$s = \frac{1}{HM} \sum_{h=1}^{H} \sum_{i=1}^{M} a^{N_c+i}_{h;K}, \quad (7)$$
where $a^{N_c+i}_{h;K}$ is the $i$th column of the matrix $A_{h;K}$, and $H$ and $M$ denote the number of heads and the number of keypoints, respectively. According to $s$, we can rank and select the image patches which are important for estimating human pose. As shown in Fig.4, we select a set $X_{high}$ consisting of $N_h = \lfloor \alpha \cdot N_c \rfloor$ patches with higher scores from $\{v^i_K\}_{i=1}^{N_c}$, while the remaining patches form the set $X_{low}$.

Fine Inference

To perform the fine-stage inference, we first need to construct high-resolution representations. Similar to Eq.1, given the scaling ratio of the fine stage $s_f$, the full visual tokens can be obtained as $X^f_{full} = \{v^i_f\}_{i=1}^{N_f}$, where $N_f = \frac{H s_f}{p_h} \cdot \frac{W s_f}{p_w}$ is the number of all fine-grained image tokens. The initial visual tokens of the fine stage $\{\hat{v}^i_0\}_{i=1}^{\hat{N}_f}$ can be constructed as follows. The first part is composed of tokens not closely associated with keypoints, which can be directly taken from $X_{low}$. The second part comprises tokens generated from $X_{high}$, which are more relevant to keypoints. Denote a single visual token from $X_{high}$ as $v^j_K$, which is further split into $N = (s_f/s_c)^2$ fine-grained tokens. The computation of the new input visual tokens is formulated as $\{\mathrm{MLP}(v^j_K) + v^{c_i}_f\}_{i=1}^{N}$, where $\{v^{c_i}_f\}_{i=1}^{N}$ are fine image tokens from $X^f_{full}$ at the corresponding locations of $v^j_K$. Thus, the input token sequence can be formed by:
$$X^f_0 = \left[ \hat{v}^1_0; \hat{v}^2_0; \dots; \hat{v}^{\hat{N}_f}_0; k^1_0; k^2_0; \dots; k^M_0 \right], \quad (8)$$
where $\hat{N}_f = N \cdot \lfloor \alpha \cdot N_c \rfloor + \lfloor (1-\alpha) \cdot N_c \rfloor$ is the number of visual tokens, and $k^i_0$ is the same initial keypoint token embedding as in Eq.1. We present the process of building the fine-stage visual tokens in Fig.4.
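The patch scoring and selection of Eqs. (7)-(8) amount to a head/keypoint average followed by a top-k. The NumPy sketch below is our own illustration: the random attention tensor is a placeholder, while the head count, keypoint count, token count, and α roughly follow the Base 256×192 configuration; the remaining low-score patches are counted as the complement $N_c - N_h$, which matches $\lfloor(1-\alpha)\cdot N_c\rfloor$ up to rounding.

```python
import numpy as np

# Sketch of Eq. (7) and the top-⌊α·Nc⌋ selection of keypoint-related patches.
H_heads, M, Nc, alpha = 12, 17, 48, 0.4
rng = np.random.default_rng(0)
A = rng.random((H_heads, M, Nc))   # accumulated attention A_{h;K}, all heads

s = A.mean(axis=(0, 1))            # Eq. (7): average over heads and keypoints
Nh = int(alpha * Nc)               # ⌊α·Nc⌋ keypoint-related patches
high = np.argsort(s)[::-1][:Nh]    # indices forming X_high
low = np.setdiff1d(np.arange(Nc), high)   # remaining indices: X_low

# Each selected patch is split into N = (s_f/s_c)^2 fine tokens (Eq. (8)):
N = int((1.0 / 0.5) ** 2)
Nf_hat = N * Nh + len(low)         # fine-stage visual token count
print(Nh, len(low), Nf_hat)        # -> 19 29 105
```

Note how sparsity pays off: with α = 0.4 the fine stage processes 105 visual tokens instead of the $N_f = 192$ a full high-resolution patchification would produce.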
Model | Input Size | Feat. Dim. | Depth | Patch Size | sc | sf | α | FPS | Params | GFLOPs
SHaRPose-Small | 256×192 | 384 | 12 | 16×16 | 0.5 | 1.0 | 0.5 | 498.3 | 28.4M | 4.9
SHaRPose-Small | 384×288 | 384 | 12 | 16×16 | 0.5 | 1.0 | 0.5 | 395.3 | 48.3M | 11.0
SHaRPose-Base | 256×192 | 768 | 12 | 16×16 | 0.5 | 1.0 | 0.4 | 392.8 | 93.9M | 17.1
SHaRPose-Base | 384×288 | 768 | 12 | 16×16 | 0.5 | 1.0 | 0.3 | 196.6 | 118.1M | 32.9
Table 2: Configurations of the instantiated SHaRPose models. We provide the detailed parameters for constructing both the Base and the Small models; the specific model sizes are presented in the last columns of the table.

Then, similar to Eq.2, a transformer sharing the same parameters as the one in the coarse stage is applied to get the output of the fine stage by $X^f_K = \mathcal{V}(X^f_0)$. Finally, the keypoint tokens are fed into the shared decoder defined in Eq.3 to get the fine inferred heatmaps $H^f_i = \mathcal{D}(k^i_K)$.

Loss Function

For training the network, we impose supervision both on the output heatmaps and on the pose confidence that the quality predictor infers:
$$L = L_{heatmap} + \lambda L_{qp}, \quad (9)$$
in which $\lambda$ is a hyper-parameter balancing the loss terms. $L_{heatmap}$ is the heatmap mean square error loss, covering both the coarse stage and the fine stage:
$$L_{heatmap} = \frac{1}{M} \sum_{i=1}^{M} \left( L_{mse}(H^c_i, H^{gt}_i) + L_{mse}(H^f_i, H^{gt}_i) \right), \quad (10)$$
in which $H^{gt}_i$ is the ground-truth heatmap. $L_{qp}$ is an L2-norm loss between the quality predictor's output $Q$ and the coarse stage's ground-truth OKS, which denotes the object keypoint similarity:
$$L_{qp} = \left\| Q - OKS_{gt} \right\|^2. \quad (11)$$

Experiments

Experiment Setup

Datasets  We conduct experiments on the COCO (Lin et al. 2014) and MPII (Andriluka et al. 2014) datasets. Following the customary strategy of Top-Down methods (Xiao, Wu, and Wei 2018; Wang et al. 2021a; Newell, Yang, and Deng 2016), we utilize the COCO 2017 dataset, which comprises 200k images and 250k person instances. The dataset is divided into three subsets: train, valid, and test-dev, containing 150k, 5k, and 20k samples, respectively. We train our model on the train subset and test it on the valid and test-dev subsets.
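The training objective of Eqs. (9)-(11) above can be sketched numerically as follows. This is a placeholder illustration, not the training code: the predicted heatmaps are random perturbations of the ground truth, while the keypoint count, the 1/4-resolution heatmap size, and λ = 0.03 follow the paper.

```python
import numpy as np

# Sketch of the loss in Eqs. (9)-(11) with placeholder predictions.
M, Hh, Wh = 17, 64, 48          # keypoints; heatmaps at 1/4 of 256x192
lam = 0.03                      # λ, as set in the training details
rng = np.random.default_rng(0)

H_gt = rng.random((M, Hh, Wh))                            # ground-truth heatmaps
H_coarse = H_gt + 0.10 * rng.standard_normal(H_gt.shape)  # noisier coarse output
H_fine = H_gt + 0.05 * rng.standard_normal(H_gt.shape)    # sharper fine output

mse = lambda a, b: float(((a - b) ** 2).mean())

# Eq. (10): per-keypoint MSE of coarse and fine heatmaps, averaged over M
L_heatmap = sum(mse(H_coarse[i], H_gt[i]) + mse(H_fine[i], H_gt[i])
                for i in range(M)) / M

Q, oks_gt = 0.90, 0.85
L_qp = (Q - oks_gt) ** 2        # Eq. (11): squared error against the OKS target
L = L_heatmap + lam * L_qp      # Eq. (9)
print(L > 0)                    # -> True
```

Supervising both stages against the same ground-truth heatmaps is what lets the coarse output stay accurate enough to serve as a final prediction for easy samples.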
The MPII dataset, which comprises over 40k person instances and 25k images, is also employed for training and evaluation.

Evaluation Metrics  Following (Wang et al. 2021a; Yuan et al. 2021; Xu et al. 2022b; Li et al. 2021b), we use the standard average precision (AP) as the evaluation metric on the COCO dataset, which is calculated based on OKS. On the other hand, we use the head-normalized percentage of correct keypoints (PCKh) (Andriluka et al. 2014) on the MPII dataset and report the PCKh@0.5 score.

Implementation Details  The SHaRPose framework offers variability in three aspects: 1) the embedding size, which specifies the number of features carried by each token; 2) the parameter α, which determines the proportion of image patches utilized for generating high-resolution representations; 3) the threshold of the predicted pose quality Qthres, which controls the number of samples that enter the fine stage. In this paper, we instantiate SHaRPose at two different sizes by scaling the embedding size. Other configurations, like the depth (the number of Transformer blocks), are set the same. The detailed configurations of the instantiated SHaRPose models are presented in Table.2.

Training Details  To ensure a fair comparison, all experiments presented in this paper are conducted using the MMPose framework (SenseTime 2020) on four NVIDIA RTX 3090 GPUs. The default data pipelines of MMPose are utilized. Masked autoencoder pretraining (He et al. 2022) is used as in (Xu et al. 2022b) for the purpose of exploring the potential capabilities of pure Transformers. UDP (Huang et al. 2020) is used for post-processing. The model is trained for 210 epochs with a learning rate of 5e-4, which is decreased to 5e-5 and 5e-6 at the 170th and 200th epochs, respectively. In particular, we aim to predict the confidence values as accurately as possible with the quality predictor. Because the convergence rate of the quality predictor is much faster than that of the heatmap (Zhao et al.
2021), we set λ = 0 in the first 180 epochs and λ = 0.03 in the subsequent epochs based on empirical analysis.

Results

Comparison to state-of-the-art methods on COCO

We compare the performance and efficiency of our proposed method with several state-of-the-art (SOTA) approaches.

Validation set  As shown in Table.3, under the input resolution 256 × 192, our SHaRPose-Small model achieves an AP of 74.2, which is a significant improvement of +8.6 AP over the TokenPose-T model and a +0.4 AP improvement over ViTPose-Small, while maintaining a faster inference speed. Furthermore, our SHaRPose-Base model achieves an AP of 75.5, which is a +0.4 AP improvement over HRNet-W48 while being 1.9× faster. Notably, our model also demonstrates faster inference speed than TokenPose-L/D6, HRFormer, and ViTPose-Base, with comparable accuracy. At the higher input resolution of 384 × 288, our model's advantages become even more pronounced: the SHaRPose-Base model achieves a SOTA performance of 77.4 AP while maintaining lower GFLOPs and higher throughput compared to other methods.

Method | Input | AP | AP 50 | AP 75 | AP L | AP M | AR | FPS ↑ | GFLOPs ↓
TokenPose-T (Li et al. 2021b)† | 256×192 | 65.6 | 86.4 | 73.0 | 71.5 | 63.1 | 72.1 | 348.1 | 1.2
ViTPose-Small (Xu et al. 2022b)† | 256×192 | 73.8 | 90.3 | 81.3 | 75.8 | 67.1 | 79.1 | 360.3 | 5.7
SHaRPose-Small† | 256×192 | 74.2↑0.4 | 90.2 | 81.8 | 80.3 | 71.2 | 79.5 | 498.3↑38% | 4.9↓14%
SimpleBaseline (Xiao, Wu, and Wei 2018) | 256×192 | 73.6 | 90.4 | 81.8 | 80.1 | 70.1 | 79.1 | 195.1 | 12.8
HRNet-W48 (Wang et al. 2021a) | 256×192 | 75.1 | 90.6 | 82.2 | 81.8 | 71.5 | 80.4 | 193.5 | 15.8
HRFormer-Base (Yuan et al. 2021) | 256×192 | 75.6 | 90.8 | 82.8 | 82.6 | 71.7 | 80.8 | 122.3 | 14.7
TokenPose-L/D6 (Li et al. 2021b) | 256×192 | 75.4 | 90.0 | 81.8 | 82.4 | 71.8 | 80.4 | 348.2 | 9.9
ViTPose-Base (Xu et al. 2022b) | 256×192 | 75.8 | 90.7 | 83.2 | 78.4 | 68.7 | 81.1 | 340.2 | 18.6
SHaRPose-Base | 256×192 | 75.5 | 90.6 | 82.3 | 82.2 | 72.2 | 80.8 | 392.8↑15% | 17.1↓10%
HRNet-W48 (Wang et al.
2021a) | 384×288 | 76.3 | 90.8 | 82.9 | 83.4 | 72.3 | 81.2 | 152.3 | 35.5
ViTPose-Base (Xu et al. 2022b) | 384×288 | 76.9 | 90.9 | 83.2 | 83.9 | 73.1 | 82.1 | 143.3 | 44.1
SHaRPose-Small† | 384×288 | 75.2 | 90.8 | 83.0 | 81.2 | 72.0 | 80.9 | 395.3 | 11.0
SHaRPose-Base | 384×288 | 77.4↑0.5 | 91.0 | 84.1 | 84.2 | 73.7 | 82.4 | 196.6↑37% | 32.9↓25%
Table 3: Comparison on the COCO validation set. The same detection results with 56 AP are used for human instances. No extra training data is involved for any result. The FPS (frames per second) is evaluated under an identical environment. † denotes the small-scale models. The underlined numbers emphasize the compared results. The best results are highlighted in bold.

Methods | Input | AP | AP 50 | AP 75 | AP L | AP M | AR | FPS ↑ | GFLOPs ↓
SimpleBaseline (Xiao, Wu, and Wei 2018) | 384×288 | 73.7 | 91.9 | 81.1 | 70.3 | 80.0 | 79.0 | 153.5 | 28.7
UDP-HRNet-W48 (Huang et al. 2020) | 384×288 | 76.5 | 92.7 | 84.0 | 73.0 | 82.4 | 81.6 | 152.3 | 35.5
DARK-HRNet-W48 (Zhang et al. 2020) | 384×288 | 76.2 | 92.5 | 83.6 | 72.5 | 82.4 | 81.1 | 150.4 | 32.9
TokenPose-L/D24 (Li et al. 2021b) | 384×288 | 75.9 | 92.3 | 83.4 | 72.2 | 82.1 | 80.8 | 117.2 | 22.1
ViTPose-Base (Xu et al. 2022b) | 384×288 | 76.2 | 92.7 | 83.7 | 72.6 | 82.3 | 81.3 | 143.3 | 44.1
SHaRPose-Base | 384×288 | 76.7↑0.5 | 92.8 | 84.4 | 73.2 | 82.6 | 81.6 | 196.6↑37% | 32.9↓25%
Table 4: Comparison on the COCO test-dev set; the same detection results with 60.9 AP are used for human instances. We only report single-dataset training results at resolution 384 × 288.

Test-dev set  Table.4 demonstrates the results of the SOTA methods on COCO test-dev. SHaRPose-Base with 384×288 input achieves 76.7 AP. Compared to HRNet with UDP and DARK post-processing, our model achieves +0.2 AP and +0.5 AP higher accuracy and nearly 1.3× faster inference speed. Compared to ViTPose-Base, our model has a +0.5 AP improvement and nearly 1.4× higher throughput.

Comparison to state-of-the-art methods on MPII

The results on the MPII test set evaluated by PCKh@0.5 are displayed in Table.5. The input resolution is 256 × 256, and the ground-truth bounding boxes are used by default.
Our SHaRPose-Base model achieves a PCKh score of 91.4, outperforming the other methods while also demonstrating 2-3 times higher throughput.

Ablation Study

Influence of α  The parameter α is crucial in controlling the sparsity level of the high-resolution representation, and it impacts the computation consumed by the fine stage. As shown in Table.6, for the 256×192 input resolution, increasing α from 0 to 0.4 brings significant accuracy improvement, but only marginal gains are observed with subsequent increments. Thus, considering the balance of accuracy and efficiency, we set α = 0.4. For the 384×288 input resolution, increasing α from 0.3 to 0.5 has little effect on accuracy but significantly increases computational costs. Therefore, setting α to 0.3 is sufficient to achieve accurate results.

Model | SimpleBaseline | HRNet-W48 | TokenPose-L/D24 | OKDHP | SHaRPose-Base
Mean↑ | 89.0 | 90.1 | 90.2 | 90.6 | 91.4
FPS↑ | 66.9 | 47.1 | 65.5 | 212.4
Table 5: Comparison on the MPII val set. SHaRPose demonstrates a significant advantage.

(a) 256×192: α | 0.0 | 0.3 | 0.4 | 0.5 | 1.0; AP | 68.4 | 74.8 | 75.5 | 75.5 | 75.7; GFLOPs | 13.3 | 15.8 | 17.1 | 18.2 | 24.9
(b) 384×288: α | 0.3 | 0.5; AP | 77.4 | 77.5; GFLOPs | 32.9 | 38.9
Table 6: The effect of α at different settings.

Effect of quality predictor  To evaluate the impact of the quality predictor, we adjust the value of Qthres based on the same SHaRPose-Base model on the COCO dataset, using the ground-truth bounding boxes. Fig.6 illustrates the number of samples that terminate inference after the coarse stage and how the overall AP varies with different values of Qthres. We observe that as the value of Qthres decreases, the model tends to skip more samples in the fine stage, resulting in a decrease in AP, but AP 50 and AP 75 only change a little. Therefore, the appropriate choice of Qthres depends on the specific application scenario and the required level of accuracy. In our experiments, we set Qthres = 0.95 for comparison with other SOTA methods.

Figure 5: Visualization of keypoint-related regions. Three samples are chosen as examples. The first column gives the input image, the second column presents the accumulated attention map, and the third column shows the selected image regions.

Figure 6: AP and the ratio of dropped samples under different settings of Qthres. Each bar demonstrates the ratio of dropped samples for the given Qthres, and the red line denotes the accuracy.

Method | AP | AP 50 | FPS | GFLOPs
DynamicViT | 71.8 | 89.4 | 182.8 | 12.8
EViT | 72.7 | 90.1 | 424.1 | 12.8
SHaRPose-S | 74.2 | 90.2 | 498.3 | 4.9
SHaRPose-B | 75.5 | 90.6 | 392.8 | 17.1
Table 7: Comparison with pruning-based dynamic Transformers.

Necessity of the coarse-to-fine design  To demonstrate the necessity of the coarse-to-fine architecture for pose estimation, we analyze it from two perspectives. Firstly, we perform comparative experiments on two pruning-based dynamic Transformers, namely DynamicViT (Rao et al. 2021) and EViT (Liang et al. 2022). We introduce keypoint tokens and employ a processing pipeline consistent with SHaRPose. As shown in Table.7, although EViT exhibits higher efficiency, its accuracy is compromised. This indicates that dynamic pruning in localization tasks limits the model's ability to generate precise outcomes. Secondly, we individually remove components of our framework, as shown in Table.8.
The fine stage is indispensable for accuracy improvement, while the coarse stage, responsible for identifying keypoint-related image patches, plays a crucial role in reducing FLOPs.

Method | AP | AP 50 | FPS | GFLOPs
Coarse-None | 67.9 | 88.8 | 453.4 | 5.7
None-Fine | 74.7 | 90.6 | 302.9 | 17.5
Coarse-Coarse | 68.4 | 89.0 | 417.7 | 13.3
Coarse-NoSel-Fine | 75.7 | 89.0 | 239.5 | 24.9
Coarse-Sel-Fine | 75.5 | 90.6 | 392.8 | 17.1
Table 8: Comparison of different configurations of the proposed two-stage framework.

Visualization

Selected keypoint-related image patches  Fig.5 presents some samples to visualize the keypoint-related regions. The second column exhibits the attention map accumulated between keypoint tokens and image patches, while the third column shows the keypoint-related regions, which are responsible for generating the high-resolution representation. It can be observed that the attention mechanism is primarily focused on the human instance, which aligns with the original design objective. Moreover, the attention intensity is particularly noticeable on the head, since the COCO dataset contains more keypoints on the head.

Conclusion

In this paper, we provide an efficient pose estimation framework using only sparse high-resolution representations, named SHaRPose. Specifically, we introduce token-based keypoint representations into the coarse-to-fine framework to explicitly capture image parts that require high-resolution representations. In addition, we introduce a quality evaluation module, so that the model can quickly complete the inference of simple samples. Our quantitative experiments demonstrate the high accuracy and efficiency of our model. The visualization results also show the effectiveness of the proposed modules. This work provides directions for enhancing the computational efficiency of pose estimation methods using dynamic optimization strategies.
Acknowledgments

The authors would like to thank the editor and the anonymous reviewers for their critical and constructive comments and suggestions. This work was supported by the National Natural Science Fund of China under Grants No. 62172222, 62072354, and 62361166670, the Postdoctoral Innovative Talent Support Program of China under Grant 2020M681609, and the Fundamental Research Funds for the Central Universities under Grant QTZX23084.

References

Andriluka, M.; Pishchulin, L.; Gehler, P.; and Schiele, B. 2014. 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. In CVPR, 3686–3693.
Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.-R.; and Samek, W. 2015. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS ONE, 10.
Badrinarayanan, V.; Kendall, A.; and Cipolla, R. 2017. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12): 2481–2495.
Bolya, D.; Fu, C.-Y.; Dai, X.; Zhang, P.; Feichtenhofer, C.; and Hoffman, J. 2023. Token Merging: Your ViT but Faster. In ICLR.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-End Object Detection with Transformers. In ECCV, 213–229.
Chang, S.; Yuan, L.; Nie, X.; Huang, Z.; Zhou, Y.; Chen, Y.; Feng, J.; and Yan, S. 2020. Towards Accurate Human Pose Estimation in Videos of Crowded Scenes. In ACMMM, 4630–4634.
Chao, L.; Qiaoyong, Z.; Di, X.; and Shiliang, P. 2017. Skeleton-based action recognition with convolutional neural networks. In ICMEW, 597–600.
Chefer, H.; Gur, S.; and Wolf, L. 2021. Transformer Interpretability Beyond Attention Visualization. In CVPR, 782–791.
Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2018.
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4): 834–848.
Chen, M.; Lin, M.; Li, K.; Shen, Y.; Wu, Y.; Chao, F.; and Ji, R. 2023. CF-ViT: A General Coarse-to-Fine Method for Vision Transformer. In AAAI, volume 37, 7042–7052.
Chen, Y.; Shen, C.; Wei, X.-S.; Liu, L.; and Yang, J. 2017. Adversarial PoseNet: A Structure-Aware Convolutional Network for Human Pose Estimation. In ICCV, 1221–1230.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR.
Duan, H.; Zhao, Y.; Chen, K.; Lin, D.; and Dai, B. 2022. Revisiting Skeleton-based Action Recognition. In CVPR, 2959–2968.
Fang, H.-S.; Xie, S.; Tai, Y.-W.; and Lu, C. 2017. RMPE: Regional Multi-person Pose Estimation. In ICCV, 2353–2362.
Guo, S.; Liu, L.; Gan, Z.; Wang, Y.; Zhang, W.; Wang, C.; Jiang, G.; Zhang, W.; Yi, R.; Ma, L.; and Xu, K. 2022. ISDNet: Integrating Shallow and Deep Networks for Efficient Ultra-high Resolution Segmentation. In CVPR, 4351–4360.
Guo, W. 2020. Multi-Person Pose Estimation in Complex Physical Interactions. In ACMMM, 4752–4755.
He, K.; Chen, X.; Xie, S.; Li, Y.; Dollár, P.; and Girshick, R. 2022. Masked Autoencoders Are Scalable Vision Learners. In CVPR, 15979–15988.
Huang, J.; Zhu, Z.; Guo, F.; and Huang, G. 2020. The Devil Is in the Details: Delving Into Unbiased Data Processing for Human Pose Estimation. In CVPR, 5699–5708.
Jiang, T.; Lu, P.; Zhang, L.; Ma, N.; Han, R.; Lyu, C.; Li, Y.; and Chen, K. 2023. RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose. arXiv:2303.07399.
Kawai, R.; Yoshida, N.; and Liu, J. 2022. Action Detection System Based on Pose Information. In ACMMM Asia, 40, 1–3.
Ke, L.; Danelljan, M.; Li, X.; Tai, Y.-W.; Tang, C.-K.; and Yu, F. 2022. Mask Transfiner for High-Quality Instance Segmentation. In CVPR, 4402–4411.
Khan, S.; Naseer, M.; Hayat, M.; Zamir, S. W.; Khan, F. S.; and Shah, M. 2022. Transformers in Vision: A Survey. ACM Computing Surveys, 54(10s).
Li, K.; Wang, S.; Zhang, X.; Xu, Y.; Xu, W.; and Tu, Z. 2021a. Pose Recognition with Cascade Transformers. In CVPR, 1944–1953.
Li, L.; Zhao, L.; Xu, L.; and Xu, J. 2022. Towards High Performance One-Stage Human Pose Estimation. In ACMMM Asia, 37, 1–5.
Li, Y.; Zhang, S.; Wang, Z.; Yang, S.; Yang, W.; Xia, S.-T.; and Zhou, E. 2021b. TokenPose: Learning Keypoint Tokens for Human Pose Estimation. In ICCV, 11293–11302.
Li, Z.; Ye, J.; Song, M.; Huang, Y.; and Pan, Z. 2021c. Online Knowledge Distillation for Efficient Pose Estimation. In ICCV, 11740–11750.
Liang, Y.; Ge, C.; Tong, Z.; Song, Y.; Wang, J.; and Xie, P. 2022. EViT: Expediting Vision Transformers via Token Reorganizations. In ICLR.
Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; and Belongie, S. 2017. Feature Pyramid Networks for Object Detection. In CVPR, 936–944.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. In ECCV, 740–755.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In ICCV, 9992–10002.
Liu, Z.; Ning, J.; Cao, Y.; Wei, Y.; Zhang, Z.; Lin, S.; and Hu, H. 2022. Video Swin Transformer. In CVPR, 3192–3201.
Luo, X.; Huang, J.-B.; Szeliski, R.; Matzen, K.; and Kopf, J. 2020. Consistent Video Depth Estimation. ACM Transactions on Graphics, 39(4).
Mao, W.; Ge, Y.; Shen, C.; Tian, Z.; Wang, X.; and Wang, Z. 2021. TFPose: Direct Human Pose Estimation with Transformers. arXiv:2103.15320.
Newell, A.; Yang, K.; and Deng, J.
2016. Stacked Hourglass Networks for Human Pose Estimation. In ECCV, 483–499.
Niemirepo, T. T.; Viitanen, M.; and Vanne, J. 2020. Binocular Multi-CNN System for Real-Time 3D Pose Estimation. In ACMMM, 4553–4555.
Qiu, J.; Yan, X.; Wang, W.; Wei, W.; and Fang, K. 2022. Skeleton-Based Abnormal Behavior Detection Using Secure Partitioned Convolutional Neural Network Model. IEEE Journal of Biomedical and Health Informatics, 26(12): 5829–5840.
Ramakrishna, V.; Munoz, D.; Hebert, M.; Andrew Bagnell, J.; and Sheikh, Y. 2014. Pose Machines: Articulated Pose Estimation via Inference Machines. In ECCV, 33–47.
Rao, Y.; Zhao, W.; Liu, B.; Lu, J.; Zhou, J.; and Hsieh, C.-J. 2021. DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification. In NeurIPS, volume 34, 13937–13949.
SenseTime. 2020. OpenMMLab Pose Estimation Toolbox and Benchmark. https://github.com/open-mmlab/mmpose. Accessed: 2023-03-20.
Shen, G.; Zhang, Y.; Li, J.; Wei, M.; Wang, Q.; Chen, G.; and Heng, P.-A. 2021. Learning Regularizer for Monocular Depth Estimation with Adversarial Guidance. In ACMMM, 5222–5230.
Tang, S.; Zhang, J.; Zhu, S.; and Tan, P. 2022. QuadTree Attention for Vision Transformers. In ICLR.
Tang, Y.; Zhao, L.; Yao, Z.; Gong, C.; and Yang, J. 2021. Graph-Based Motion Prediction for Abnormal Action Detection. In ACMMM Asia, 63, 1–7.
Tompson, J. J.; Jain, A.; LeCun, Y.; and Bregler, C. 2014. Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation. In NeurIPS, volume 27, 1799–1807.
Toshev, A.; and Szegedy, C. 2014. DeepPose: Human Pose Estimation via Deep Neural Networks. In CVPR, 1653–1660.
Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; Liu, W.; and Xiao, B. 2021a. Deep High-Resolution Representation Learning for Visual Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(10): 3349–3364.
Wang, J.; and Torresani, L. 2022. Deformable Video Transformer. In CVPR, 14053–14062.
Wang, W.; Xie, E.; Li, X.; Fan, D.-P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; and Shao, L. 2021b. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions. In ICCV, 548–558.
Wei, S.-E.; Ramakrishna, V.; Kanade, T.; and Sheikh, Y. 2016. Convolutional Pose Machines. In CVPR, 4724–4732.
Xia, Z.; Pan, X.; Song, S.; Li, L. E.; and Huang, G. 2022. Vision Transformer with Deformable Attention. In CVPR, 4784–4793.
Xiao, B.; Wu, H.; and Wei, Y. 2018. Simple Baselines for Human Pose Estimation and Tracking. In ECCV, 472–487.
Xu, K.; Ye, F.; Zhong, Q.; and Xie, D. 2022a. Topology-Aware Convolutional Neural Network for Efficient Skeleton-Based Action Recognition. In AAAI, volume 36, 2866–2874.
Xu, Y.; Zhang, J.; Zhang, Q.; and Tao, D. 2022b. ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation. In NeurIPS, volume 35, 38571–38584.
Yang, S.; Quan, Z.; Nie, M.; and Yang, W. 2021. TransPose: Keypoint Localization via Transformer. In ICCV, 11782–11792.
Yu, C.; Xiao, B.; Gao, C.; Yuan, L.; Zhang, L.; Sang, N.; and Wang, J. 2021. Lite-HRNet: A Lightweight High-Resolution Network. In CVPR, 10435–10445.
Yuan, Y.; Fu, R.; Huang, L.; Lin, W.; Zhang, C.; Chen, X.; and Wang, J. 2021. HRFormer: High-Resolution Vision Transformer for Dense Predict. In NeurIPS, volume 34, 7281–7293.
Zhang, C.; He, N.; Sun, Q.; Yin, X.; and Lu, K. 2021. Human Pose Estimation Based on Attention Multi-Resolution Network. In ICMR, 682–687.
Zhang, F.; Zhu, X.; Dai, H.; Ye, M.; and Zhu, C. 2020. Distribution-Aware Coordinate Representation for Human Pose Estimation. In CVPR, 7093–7102.
Zhang, F.; Zhu, X.; and Ye, M. 2019. Fast Human Pose Estimation. In CVPR, 3517–3526.
Zhao, L.; Xu, J.; Gong, C.; Yang, J.; Zuo, W.; and Gao, X. 2021. Learning to Acquire the Quality of Human Pose Estimation. IEEE Transactions on Circuits and Systems for Video Technology, 31(4): 1555–1568.
Weakly Supervised Few-Shot Object Detection With DETR

Chenbo Zhang1*, Yinglu Zhang1*, Lu Zhang1, Jiajia Zhao2, Jihong Guan3, Shuigeng Zhou1†
1Shanghai Key Lab of Intelligent Information Processing, and School of Computer Science, Fudan University, China
2Science and Technology on Complex System Control and Intelligent Agent Cooperation Laboratory, Beijing Electro-Mechanical Engineering Institute, China
3Department of Computer Science & Technology, Tongji University, China
{cbzhang21,yingluzhang21}@m.fudan.edu.cn, {l zhang19,sgzhou}@fudan.edu.cn, zhaojiajia1982@gamil.com, jhguan@tongji.edu.cn

Abstract

In recent years, Few-shot Object Detection (FSOD) has become an increasingly important research topic in computer vision. However, existing FSOD methods require strong annotations, including category labels and bounding boxes, and their performance heavily depends on the quality of box annotations. Yet acquiring strong annotations is both expensive and time-consuming. This inspires the study of weakly supervised FSOD (WS-FSOD in short), which realizes FSOD with only image-level annotations, i.e., category labels. In this paper, we propose a new and effective weakly supervised FSOD method named WFS-DETR. Through a well-designed pretraining process, WFS-DETR first acquires general object localization and integrity judgment capabilities on large-scale pretraining data. Then, it introduces object integrity into multiple-instance learning to solve the common local-optimum problem by comprehensively exploiting both semantic and visual information. Finally, with simple fine-tuning, it transfers the knowledge learned from the base classes to the novel classes, which enables accurate detection of novel objects. Benefiting from this "pretraining-refinement" mechanism, WFS-DETR achieves good generalization on different datasets. Extensive experiments also show that the proposed method clearly outperforms the existing counterparts in the WS-FSOD task.
Introduction
Object detection is a fundamental task in computer vision and has achieved great success in many practical scenarios. Currently, deep learning-based techniques such as Faster R-CNN (Ren et al. 2015), YOLO (Redmon and Farhadi 2018), and DETR (Carion et al. 2020) have become mainstream. Typically, these methods rely on substantial amounts of well-annotated data to train models that can accurately recognize and localize objects. Nevertheless, collecting and annotating such data is extremely expensive and time-consuming, which limits their applications.
In recent years, few-shot object detection (FSOD) has emerged as a promising direction, which aims to achieve effective object detection using only a small amount of annotated data of novel classes. However, to train the FSOD model (Chen et al. 2019), we still have to collect a large amount of strongly annotated training data for the base classes, including the category and the bounding box of each object of each target class in each training image, which incurs huge annotation cost. Moreover, the performance of FSOD models heavily relies on the quality of box annotations.
*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: A conceptual comparison among general weakly supervised object detection (WSOD), few-shot object detection (FSOD), and weakly supervised few-shot object detection (WS-FSOD). Here, on the left, red texts and boxes are annotations; and on the right, yellow texts and boxes are detected results.
However, due to the complexity of images and the diversity of object morphology, it is difficult to guarantee the quality of the box annotations, which inevitably impacts the performance of the models. To alleviate the annotation problem, a few recent works have tried to incorporate weakly supervised learning into FSOD (Karlinsky et al. 2021; Shaban et al. 2022; Gao et al. 2019). StarNet (Karlinsky et al. 2021) represents the first weakly supervised FSOD (WS-FSOD) effort; it utilizes a star model to perform non-parametric geometric matching between support and query images. However, its computational complexity increases significantly when handling high-resolution images, which is a critical issue in object detection. Furthermore, the prediction boxes of StarNet are directly generated by off-the-shelf CAM algorithms (Selvaraju et al. 2017), which makes it inherently face the discriminative-region problem in weakly supervised object detection (WSOD). vMF-MIL (Shaban et al. 2022) and NOTE-RCNN (Gao et al. 2019) still need lots of strongly annotated data, so they do not strictly follow the WS-FSOD setting. Fig. 1 illustrates the differences among the three tasks: WSOD, FSOD, and WS-FSOD.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)
In this paper, we propose a novel WS-FSOD method called WFS-DETR, which strictly follows the settings of WS-FSOD: the whole training process does not require any box annotations. Compared with fully supervised FSOD methods, our method is more practical for real-world scenarios. WFS-DETR adopts a pretraining-refinement mechanism. First, an initial object localization network is trained through pretraining. Data augmentation and knowledge distillation are employed to enable DETR to acquire general object localization and integrity judgment capabilities. Then, a progressive refinement network is developed for weakly supervised training and tuning, following the Base-training + Fine-tuning paradigm in FSOD.
Specifically, the purpose of our pretraining is to equip the detector with the ability to locate foreground objects and determine their integrity. Instead of non-parametric proposal generation (e.g., random cropping or selective-search) as in previous works, we train an Attention-based Localization Network (ALN) to generate more accurate proposals. Benefiting from the long-range modeling of the vision transformer, ALN can effectively localize objects in images. Then, we utilize data augmentation to expand the diversity of the proposals generated by ALN and jointly train ALN and DETR to distill knowledge into the detector. In the refinement phase, we jointly train DETR with a multiple-instance learning (MIL) structure and perform progressive refinement. Furthermore, we design the refinement strategy by incorporating both category confidence and object evidence. This enhancement provides more accurate supervision information for the refinement process and effectively addresses the problem of discriminative regions.
In summary, the contributions of this paper are as follows: (1) We propose a novel WS-FSOD method, which is the first WS-FSOD work based on DETR. Our method can precisely detect objects of novel classes using solely image-level label supervision. (2) We develop a pretraining-refinement mechanism to address the discriminative-region problem in WS-FSOD. Our approach includes a Pretraining-Distillation Localization Learning (PDLL) strategy to enhance the model's object localization and integrity judgment capabilities. Notably, PDLL stands as the first pretraining strategy tailored exclusively for WSOD. Additionally, we introduce a Dual-factor Driven Progressive Refinement (DDPR) strategy, leveraging semantic and visual information to overcome local-optimum challenges. (3) We conduct extensive experiments on benchmark datasets, which show that the proposed method significantly outperforms the state-of-the-art methods, validating its effectiveness.
Related Work
Weakly Supervised Object Detection
Weakly supervised object detection (WSOD) tries to train an object detector using images with only image-level category labels. Existing WSOD methods can be mainly divided into class activation map (CAM) based methods (Zhou et al. 2016) and multiple-instance learning (MIL) (Maron and Lozano-Pérez 1997) based methods. Recently, MIL-based methods (Bilen and Vedaldi 2016; Tang et al. 2017, 2018; Zeng et al. 2019) have gradually become mainstream. Although previous works have made great efforts, some problems still have no satisfactory solution, such as discriminative regions and missed detections. These problems may be caused by the "enumerate-select" paradigm for localization and rudimentary refinement strategies.
Few-Shot Object Detection
Few-shot object detection (FSOD) aims to detect objects with only a few annotated instances. Currently, there are two main FSOD paradigms. Inspired by few-shot learning, most existing few-shot detection methods adopt the periodic meta-learning paradigm to transfer knowledge from base classes to novel classes (Kang et al. 2019; Yan et al. 2019; Yang and Renaud 2020; Fan et al. 2020; Hu et al. 2021; Zhang et al. 2021b,a). Recently, some approaches use a simple fine-tuning paradigm and demonstrate superior performance (Wang et al. 2020; Sun et al. 2021; Qiao et al. 2021; Fan et al. 2021; Wu et al. 2022; Pei et al. 2022).
Weakly Supervised FSOD
All existing FSOD methods demand full annotations for both base and novel classes, making data collection labor-intensive and costly. To tackle this, weakly supervised few-shot object detection (WS-FSOD) is introduced, training models with images labeled only at the category level. Despite its increased difficulty, WS-FSOD proves to be more practical. Currently, limited related works exist in this area. However, as we point out in the "Introduction" section, these works (Karlinsky et al. 2021; Shaban et al. 2022; Gao et al.
2019) either cannot handle high-resolution images or still require a large amount of strongly labeled data, which does not fully follow the WS-FSOD setting.
Different from existing works, our WFS-DETR strictly follows the WS-FSOD setting and is the first work built on Deformable DETR, with an innovative pretraining-refinement mechanism through which the model acquires precise object localization capability, instead of relying on non-parametric methods for coarse search as in previous works.
Method
Problem Formulation
Given all the classes C, which consist of base classes Cb and novel classes Cn, we have Cb ∪ Cn = C and Cb ∩ Cn = ∅. Different from few-shot object detection (FSOD), all training images of both base classes in Cb and novel classes in Cn have only image-level category labels. In the whole training set Dtrain, there are abundant base-class data Db but only a small amount of novel-class data Dn, where each class has only a dozen or a few training images. Db ∪ Dn = Dtrain and Db ∩ Dn = ∅. Db and Dn are used for base-training and fine-tuning, respectively. Additionally, box annotations are used only for evaluating the model in the testing phase.
Figure 2: The framework of WFS-DETR, where PDLL means the Pretraining-Distillation Localization Learning strategy and DDPR means the Dual-factor Driven Progressive Refinement strategy. The training process consists of a pretraining phase and a refinement phase. During pretraining, we first train an Attention-based Localization Network (ALN) and then distill its localization capability to the detector. In the refinement phase, we refine the predictions via a progressive structure containing K refinement layers by comprehensively utilizing class confidence and object evidence.
Framework
The framework of WFS-DETR is shown in Fig. 2, which is based on Deformable DETR. The training process is divided into two stages: the PDLL-guided pretraining and the DDPR-guided refinement. In pretraining, we use large-scale weakly labeled data (e.g., ImageNet (Deng et al. 2009)) to help the model obtain general object localization capability. Concretely, we first freeze the detector and train an Attention-based Localization Network (ALN). Then, we jointly train the localization network and the detector, distilling the localization capability learned by ALN to the detector. In refinement, we follow the common Base-training + Fine-tuning paradigm in FSOD. We progressively refine the detector by comprehensively leveraging the semantic distinction and foreground integrity of the predictions to obtain more accurate prediction results.
Pretraining-Distillation Localization Learning
WSOD faces two major problems: inaccurate object localization and missed detection. The scarcity of training data in WS-FSOD makes these problems even worse. These problems stem from poor object localization. Specifically, existing methods mostly follow the "enumerate-select" paradigm to locate objects, that is, first using non-parametric methods (e.g.
selective-search or edge box) to generate initial proposal boxes and then selecting a subset of these proposals as the object localization results. However, the proposals generated by non-parametric methods are usually of low quality and highly redundant. Considering that the initial proposals are critical to WS-FSOD performance, the object localization strategy in previous works actually constrains their performance. In general object detection, pretraining is widely used to improve the object localization performance of DETR detectors, but in WSOD, the absence of box annotations makes such pretraining methods (e.g., UP-DETR (Dai et al. 2021) and DetReg (Bar et al. 2022)) unsuitable. Without accurate box supervision, the localization performance of detectors is difficult to optimize. Consequently, the performance of WS-FSOD models trained with low-quality pseudo boxes and without any further optimization will be unsatisfactory.
For effective WS-FSOD, we design a Pretraining-Distillation Localization Learning (PDLL) strategy, the first pretraining approach tailored exclusively for WSOD. Initially, we train an Attention-based Localization Network (ALN) using abundant pretraining data for initial object localization. Then, through localization distillation learning, we enhance and transfer the ALN's localization capability to the detector, endowing the model with general object localization and integrity judgment capabilities.
Attention-based localization network. Some works (Gao et al. 2021; Xu et al. 2022; Gupta et al. 2022) have shown that the vision transformer is able to model the entire object well thanks to its long-range feature dependence, which is ideal for accurately locating objects in images. However, these works rely on single-scale vision transformers like DeiT (Touvron et al. 2021). As a consequence, they are not directly applicable to object detection tasks, which demand a multi-scale feature-producing backbone for identifying objects of varying sizes.
While the swin transformer is often used as a detector backbone, its window attention and sliding window mechanisms fall short of capturing comprehensive global information interactions, impeding thorough object modeling. Motivated by prior research, we introduce the Attention-based Localization Network (ALN). Comprising multiple multi-head self-attention blocks, ALN serves as a plug-in module compatible with various multi-scale ViT backbones, such as the swin transformer. This integration produces refined proposal boxes crucial for effective pretraining and ultimately enhances object detection performance.
Figure 3: The structure of ALN, which consists of K multi-head self-attention blocks, is inserted after the third stage of the detector backbone (swin transformer).
As shown in Fig. 3, ALN consists of K multi-head self-attention blocks and is inserted after the third stage of the detector backbone (e.g., swin transformer). A learnable class token $t_c$ is fed into ALN to interact with the patch tokens $t_{ns}$:

$$t^{*}_{c+n} = \mathrm{Attn}_{multi}([t_c, t_{ns}]W_Q, [t_c, t_{ns}]W_K, [t_c, t_{ns}]W_V) = A^{*}_{c+n}[t_c, t_{ns}]W_V \quad (1)$$

where $\mathrm{Attn}_{multi}$ is the standard multi-head self-attention (Vaswani et al. 2017). We take the last N columns of $t^{*}_{c+n} \in \mathbb{R}^{D\times(1+N)}$ from the last block of ALN to obtain patch tokens $t^{*}_{n} \in \mathbb{R}^{D\times N}$, and then use an FC layer parameterized by $W_C = [w_1, ..., w_C]$, $w_i \in \mathbb{R}^{D\times 1}$, to map $t^{*}_{n}$ to class-aware patch tokens. Finally, the class-aware patch tokens are reshaped to $t^{*\prime}_{n} \in \mathbb{R}^{C\times W\times H}$.
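To make this concrete, here is a minimal single-head numpy sketch of the class-token interaction in Eq. (1) and of reducing an attention map to a box proposal by thresholding and taking the minimal enclosing rectangle, as in the ALN localization result. The single head, the random weights, and the 0.5 relative threshold are illustrative assumptions of this sketch, not the paper's exact configuration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def class_token_attention(tokens, w_q, w_k):
    """Single-head sketch of Eq. (1): attention of the class token (row 0)
    over the patch tokens (rows 1..N), i.e. a class-agnostic attention vector."""
    q, k = tokens @ w_q, tokens @ w_k
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn[0, 1:]  # drop the class token's attention to itself

def attention_to_box(attn_map, thr=0.5):
    """Sketch of the localization step: keep locations above a fraction of the
    peak response and return their minimal enclosing rectangle (x1, y1, x2, y2)."""
    mask = attn_map >= thr * attn_map.max()
    ys, xs = np.nonzero(mask)
    return [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())]
```

In the real model the attention is multi-head and averaged over the K blocks before being combined with the class-aware tokens; this sketch only shows the shape of the computation.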
By optimizing ALN with $L_{ALN}$, we can assign class semantics to the patch tokens:

$$L_{ALN} = -\log\left(\frac{\exp[\mathrm{GAP}(T(t^{*}_{n})w_j)]}{\sum_{i}^{C}\exp[\mathrm{GAP}(T(t^{*}_{n})w_i)]}\right) \quad (2)$$

where $T(\cdot)$ is the matrix transpose operation. Then, we obtain the average attention map $\bar{A}^{*}_{c+n} \in \mathbb{R}^{(1+N)\times(1+N)}$ of the K blocks and get the class-agnostic attention vector by taking the last N columns of the first row in $\bar{A}^{*}_{c+n}$ and reshaping it to $\bar{A}^{*} \in \mathbb{R}^{1\times W\times H}$. The attention maps for different classes are generated by multiplying the class-agnostic $\bar{A}^{*}$ with the class-aware $t^{*\prime}_{n}$. Finally, the localization result $Result_{Loc}$ is obtained by applying threshold filtering and the minimum-rectangle algorithm to the attention map:

$$Result_{Loc} = \mathrm{MMR}(\mathrm{Thr}(\bar{A}^{*} \otimes t^{*\prime}_{n})) \quad (3)$$

Localization distillation learning. As shown in Stage I of Fig. 2, for a proposal box $b_{ori} = [x_1, y_1, x_2, y_2]$ produced by ALN, we use box augmentation (Feng, Zhong, and Huang 2021) to generate a set of augmented proposal boxes around $b_{ori}$. By increasing the diversity of the proposal boxes in both shape and position, the proposal boxes are able to cover the whole object as accurately as possible:

$$b_{aug} = [x_1 \pm \alpha_1 w,\ y_1 \pm \alpha_2 h,\ x_2 \pm \alpha_3 w,\ y_2 \pm \alpha_4 h] \quad (4)$$

where $w = x_2 - x_1$, $h = y_2 - y_1$, and each $\alpha_i$ is a random number drawn from $[0, \frac{1}{6}]$. This value range ensures that the IoU between the augmented proposal boxes and the original one is greater than 0.5 in all cases, effectively preventing the augmented proposal boxes from deviating from the objects. We denote the M augmented proposal boxes as $y = \{(b_i, o_i)\}_{i=1}^{M}$, where $b_i$ and $o_i$ are the box coordinates and the objectness score of the i-th proposal, respectively. After augmentation, y can cover different foreground objects well, and we use y as supervision to distill the object localization ability from ALN to DETR. Specifically, we first match the prediction results of DETR, $\hat{y} = \{(\hat{b}_i, \hat{o}_i)\}_{i=1}^{M}$, to y (padded with no object ∅) via the Hungarian bipartite matching algorithm (Carion et al.
2020):

$$\hat{\sigma} = \arg\min_{\sigma \in S_N} \sum_{i}^{N} L_{match}(y_i, \hat{y}_{\sigma(i)}) \quad (5)$$

where $L_{match}$ follows DETR (Carion et al. 2020). Using the best matching sequence $\hat{\sigma}$, we calculate the distillation loss $L_{dis}$ as follows:

$$L_{dis} = \sum_{i}^{N} \lambda_{obj} L_{obj}(o_i, \hat{o}_{\hat{\sigma}(i)}) + \mathbb{1}_{\{o_i \neq \emptyset\}} \lambda_{box} L_{box}(b_i, \hat{b}_{\hat{\sigma}(i)}) \quad (6)$$

where $L_{obj}$ is implemented via the binary cross-entropy loss, and $L_{box}$ is based on the Smooth L1 loss and the Distance-IoU (DIoU) loss (Zheng et al. 2020). Compared with the GIoU loss (Rezatofighi et al. 2019) used in DETR (Carion et al. 2020), the DIoU loss handles the common discriminative-region problem in WSOD better, i.e., the case where the prediction box occupies only a portion of the GT box.
Dual-Factor Driven Progressive Refinement
The multiple-instance learning (MIL) paradigm is widely used in WSOD (Bilen and Vedaldi 2016), and some WSOD works apply progressive refinement structures with MIL to optimize the detection results layer by layer (Tang et al. 2017, 2018). However, these methods are usually trapped by the discriminative-region problem because they focus merely on optimizing with classification scores while ignoring the integrity of the objects. To address this challenge, we propose a Dual-factor Driven Progressive Refinement strategy for multiple-instance learning (DDPR), which tackles this problem by taking both class confidence and object evidence into account.
Accurate supervision mining. In order to alleviate the discriminative-region problem, WSOD2 (Zeng et al. 2019) takes superpixel maps as object evidence to optimize object localization. However, there exist two main problems: (1) When applied to different datasets, superpixel maps must be regenerated, which incurs high costs. (2) WSOD2 (Zeng et al. 2019) utilizes selective-search to generate initial proposal boxes, while the principle of selective-search is merging superpixels to generate proposal boxes, which is essentially the same as superpixel maps.
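For illustration, the localization-distillation targets and loss of Eqs. (4)-(6) can be sketched with only the standard library. In this sketch, brute-force permutation search stands in for the Hungarian algorithm, a plain L1 distance stands in for both the matching cost $L_{match}$ and the Smooth L1 term, and the dict-based proposal format is an assumption of the sketch, not the paper's implementation.

```python
import itertools
import math
import random

def iou(b1, b2):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def augment_box(box, rng, alpha_max=1/6):
    """Eq. (4): jitter each coordinate by at most alpha_max of the box size."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    jitter = lambda: rng.choice([-1.0, 1.0]) * rng.uniform(0.0, alpha_max)
    return [x1 + jitter() * w, y1 + jitter() * h,
            x2 + jitter() * w, y2 + jitter() * h]

def diou_loss(b1, b2):
    """Distance-IoU loss (Zheng et al. 2020): 1 - IoU plus the squared center
    distance normalized by the squared enclosing-box diagonal."""
    rho2 = (((b1[0] + b1[2]) - (b2[0] + b2[2])) / 2) ** 2 \
         + (((b1[1] + b1[3]) - (b2[1] + b2[3])) / 2) ** 2
    diag2 = (max(b1[2], b2[2]) - min(b1[0], b2[0])) ** 2 \
          + (max(b1[3], b2[3]) - min(b1[1], b2[1])) ** 2
    return 1.0 - iou(b1, b2) + rho2 / diag2

def distill_loss(pred, targets, lam_obj=1.0, lam_box=1.0):
    """Eqs. (5)-(6): pick the permutation sigma minimizing the box cost, then
    sum objectness BCE and (L1 + DIoU) box losses over the matched pairs."""
    n = len(targets)
    l1 = lambda p, t: sum(abs(a - b) for a, b in zip(p["box"], t["box"]))
    sigma = min(itertools.permutations(range(n)),
                key=lambda s: sum(l1(pred[s[i]], targets[i]) for i in range(n)))
    eps, loss = 1e-7, 0.0
    for i, j in enumerate(sigma):
        o, o_hat = targets[i]["obj"], pred[j]["obj"]
        loss += -lam_obj * (o * math.log(o_hat + eps)
                            + (1 - o) * math.log(1 - o_hat + eps))
        loss += lam_box * (l1(pred[j], targets[i])
                           + diou_loss(pred[j]["box"], targets[i]["box"]))
    return loss
```

Brute-force matching is factorial in the number of proposals, which is why the paper uses the Hungarian algorithm; the sketch only serves to make the argmin over permutations in Eq. (5) explicit.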
As a consequence, it is difficult for superpixel maps to provide effective object evidence for the generation of accurate proposal boxes.
As shown in Stage II of Fig. 2, we introduce a novel refinement strategy that incurs no additional cost and leverages the results of pretraining to obtain accurate supervision effectively. For an image I with only an image-level category label $Y = [y_1, ..., y_C]^T \in \mathbb{R}^{C\times 1}$, where $y_c = 1$ or $0$ indicates the presence or absence of an object of class c, our basic decoder generates a set of proposal boxes $\hat{P}^0 = \{(\hat{b}^0_i, \hat{o}^0_i, \hat{c}^0_i, \hat{d}^0_i)\}_{i=1}^{N}$, where $\hat{b}^0_i \in \mathbb{R}^{1\times 4}$, $\hat{o}^0_i \in \mathbb{R}^{1\times 1}$, $\hat{c}^0_i \in \mathbb{R}^{1\times C}$, and $\hat{d}^0_i \in \mathbb{R}^{1\times C}$ indicate the box coordinates, objectness score, classification score, and detection score, respectively. As shown in Eq. (7), we obtain the image-level classification result $\hat{Y} = [\hat{y}_1, ..., \hat{y}_C]^T \in \mathbb{R}^{C\times 1}$ of image I by aggregating the classification scores $\{\hat{c}^0_i\}_{i=1}^{N}$ and detection scores $\{\hat{d}^0_i\}_{i=1}^{N}$ of the N proposal boxes:

$$\hat{Y} = \sum_{i=1}^{N}\left(\frac{e^{\hat{c}^{0}_{i}}}{\sum_{k=1}^{C} e^{\hat{c}^{0}_{ik}}} \odot \frac{e^{\hat{d}^{0}_{i}}}{\sum_{i=1}^{N} e^{\hat{d}^{0}_{ik}}}\right) \quad (7)$$

As shown in Eq. (8), the basic MIL classifier is optimized by $L_{mil}$:

$$L_{mil} = -\sum_{c=1}^{C}\{y_c \log \hat{y}_c + (1 - y_c)\log(1 - \hat{y}_c)\} \quad (8)$$

In order to produce more accurate proposal boxes, we construct K refinement layers to generate refined predictions $\hat{P}^j = \{(\hat{b}^j_i, \hat{o}^j_i, \hat{c}^j_i)\}_{i=1,j=1}^{N,K}$ from the proposal boxes $\hat{P}^0$ generated by the basic refinement decoder.
After pretraining, our detector is equipped with class-agnostic localization capability and is able to produce objectness scores to evaluate the completeness of the predicted foreground object boxes. This process incurs no additional computation cost, and the objectness score generalizes well. Therefore, we utilize the objectness score as object evidence and then comprehensively consider both the class confidence and the object evidence to select more precise supervision proposals.
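A small numpy sketch of the MIL aggregation in Eq. (7) and the image-level BCE loss in Eq. (8); the toy two-class, three-proposal scores used below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mil_image_scores(cls_scores, det_scores):
    """Eq. (7): softmax over classes for the classification stream, softmax
    over proposals for the detection stream, elementwise product, then sum
    over the N proposals. Both inputs have shape (N proposals, C classes)."""
    return (softmax(cls_scores, axis=1) * softmax(det_scores, axis=0)).sum(axis=0)

def mil_loss(y_hat, y, eps=1e-7):
    """Eq. (8): image-level binary cross-entropy against the category label."""
    return -np.sum(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))
```

Because each detection-stream softmax column sums to one over the proposals, every entry of the aggregated prediction stays in [0, 1] and can be read as a per-class image probability.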
In the (k−1)-th refinement layer, for a proposal $\hat{p}^{k-1}_i$, we first calculate its selection score $S^{k-1}_i = \hat{o}^{k-1}_i * \hat{c}^{k-1}_i$ from the class confidence and the object evidence. Then all the proposal boxes are sorted by their selection scores $\{S^{k-1}_i\}_{i=1}^{N}$. After applying NMS (Non-Maximum Suppression) to the sorted proposals, we obtain a set of supervision proposal boxes $P^{k-1}_s = \{(b^{k-1}_i, o^{k-1}_i, c^{k-1}_i)\}_{i=1}^{N_s}$. Following that, we match $P^{k-1}_s$ with the proposals $\hat{P}^k$ generated in the k-th refinement layer. For a proposal $\hat{p}^k_i$ in $\hat{P}^k$, if there exists a set of proposal boxes $\{p^{k-1}_j\}_{j=1}^{n_s}$ in $P^{k-1}_s$ where each proposal has an IoU score with $\hat{p}^k_i$ over the threshold $\phi$, then the proposal in $\{p^{k-1}_j\}_{j=1}^{n_s}$ with the highest IoU score is selected as the supervision proposal for $\hat{p}^k_i$. All proposals in $\hat{P}^k$ that successfully match proposals in $P^{k-1}_s$ compose the ROI proposal set $\hat{P}^k_r = \{(\hat{b}^k_i, \hat{o}^k_i, \hat{c}^k_i)\}_{i=1}^{N_r}$. $P^{k-1}_s$ acts as supervision for $\hat{P}^k_r$, and the refinement loss $L^k_{ref}$ is calculated between $P^{k-1}_s$ and $\hat{P}^k_r$ to refine the k-th class predictor and objectness predictor:

$$L^{k}_{ref} = -\frac{1}{|N_r|}\sum_{i=1}^{N_r}(c^{k-1}_{j} * o^{k-1}_{j})\big(CE(c^{k-1}_{j}, \hat{c}^{k}_{i}) + BCE(o^{k-1}_{j}, \hat{o}^{k}_{i})\big), \quad \text{where } match(p^{k-1}_{j}, \hat{p}^{k}_{i}) = 1 \quad (9)$$

With the supervision of $P^{k-1}_s$, the discriminative information captured by the small proposals is delivered to the overlapping large proposals, and the object evidence of the large proposals is, in turn, passed to the small ones. This bi-directional exchange of information effectively improves detection accuracy.
Experiments
Training and Inference Details
Model training. Our method follows the pretraining-refinement mechanism. In the pretraining phase, we utilize the pretraining dataset (e.g., ImageNet (Deng et al. 2009)) to train the ALN and then distill its object localization and integrity judgment capabilities into the detector.
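Stepping back to the refinement stage, the dual-factor supervision selection described above (score S = objectness × class confidence, greedy NMS, then IoU matching against the next layer's proposals, as used around Eq. (9)) can be sketched in plain Python. Representing proposals as dicts and the 0.5 thresholds are illustrative assumptions of this sketch.

```python
def iou(b1, b2):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def select_supervision(proposals, iou_thr=0.5):
    """Sort by the dual-factor score S = objectness * class confidence, then
    keep a non-overlapping subset via greedy non-maximum suppression."""
    ranked = sorted(proposals, key=lambda p: p["obj"] * p["cls"], reverse=True)
    kept = []
    for p in ranked:
        if all(iou(p["box"], q["box"]) < iou_thr for q in kept):
            kept.append(p)
    return kept

def match_supervision(supervision, next_layer_props, phi=0.5):
    """Pair each next-layer proposal with the supervision proposal of highest
    IoU, kept only when that IoU exceeds the threshold phi."""
    pairs = []
    for p in next_layer_props:
        best = max(supervision, key=lambda s: iou(p["box"], s["box"]), default=None)
        if best is not None and iou(p["box"], best["box"]) > phi:
            pairs.append((p, best))
    return pairs
```

The resulting (prediction, supervision) pairs are exactly the matched terms that the weighted refinement loss of Eq. (9) is summed over.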
The learning target is formulated as:

$$L_P = \lambda_P L_{ALN} + (1 - \lambda_P)L_{dis} \quad (10)$$

where $\lambda_P$ is the hyperparameter used to control the learning target. During the first half of pretraining, we set $\lambda_P$ to 1 to train the ALN alone. During the second half, we set $\lambda_P$ to 0.5 to jointly train ALN and DETR, distilling the knowledge learned by ALN to DETR. After pretraining, our model is equipped with general object localization and integrity judgment capabilities and can be generalized to other datasets without repeating the pretraining. In the refinement phase, we refine our model on the training dataset. The learning target is formulated as follows:

$$L_R = L_{mil} + \lambda_1 \sum_{k=1}^{K} L^{k}_{ref} + \lambda_2 L_{box} \quad (11)$$

where $\lambda_1$ and $\lambda_2$ are hyperparameters used to balance the loss terms, and we set $\lambda_1 = 1$, $\lambda_2 = 10$. Other hyperparameters of the refinement structure are set following OICR (Tang et al. 2017) (e.g., K = 3).
Model inference. We use the mean of the classification scores output by the K refinement class predictors as the class confidence result and the outputs of the box predictor as the box prediction results.
Experimental Setting
Existing benchmarks. Following the only previous WS-FSOD work, StarNet (Karlinsky et al. 2021), we take three benchmark datasets for evaluation: ImageNetLoc-FS, CUB-200 and PASCAL VOC. For ImageNetLoc-FS (Eli et al. 2019), we divide the total 331 classes into three sets: 101 base classes for base-training, 214 novel classes for fine-tuning and evaluation, and 16 classes for validation. For CUB (Wah et al. 2011), we split the 200 classes into three sets: 100 base classes for base-training, 50 novel classes for fine-tuning and evaluation, and 50 classes for validation. For PASCAL VOC (Everingham et al. 2010), we divide the total 20 classes into two sets: 15 base classes for base-training and 5 novel classes for fine-tuning and evaluation. In the base-training phase, all base data are used for training. In the fine-tuning phase, we follow the "N-way K-shot" training paradigm in FSOD.
Implementation details. We conduct experiments on the Deformable DETR (Zhu et al. 2020) detector with swin transformer-s (Liu et al. 2021) as the backbone. Please refer to our supplementary material for more details.
Comparison with Existing Methods
We compare our method with the WS-FSOD SOTA StarNet (Karlinsky et al. 2021) and other WSOD SOTAs on ImageNetLoc-FS (Eli et al. 2019), CUB-200 (Wah et al. 2011) and PASCAL VOC (Everingham et al. 2010). The box annotations in the datasets are used only for evaluation. To mitigate the influence of randomness, we conduct numerous 5-way 1/5-shot tests and compute the average as the final result. Please refer to our supplementary material for more training details.

Table 1: Comparison with SOTAs on ImageNetLoc-FS, CUB-200 and PASCAL VOC. GC = GradCAM (Selvaraju et al. 2017), SS = Selective-Search (Uijlings et al. 2013). Columns are 1-shot AP30/AP50 and 5-shot AP30/AP50.

ImageNetLoc-FS:
  MetaOpt+GC (Lee et al. 2019)    | 32.4 | 13.8 | 51.9 | 22.1
  MetaOpt+SS (Lee et al. 2019)    | 16.1 |  4.9 | 27.4 | 10.2
  PCL (Tang et al. 2018)          | 25.4 |  9.2 | 37.5 | 11.3
  CAN (Hou et al. 2019)           | 23.2 | 10.3 | 38.2 | 12.7
  WSOD2 (Zeng et al. 2019)        | 25.8 | 10.9 | 39.8 | 12.8
  StarNet (Karlinsky et al. 2021) | 50.0 | 26.4 | 63.6 | 34.9
  WFS-DETR (ours)                 | 58.4 | 45.3 | 68.3 | 52.8

CUB-200:
  MetaOpt+GC (Lee et al. 2019)    | 53.3 | 12.0 | 72.8 | 14.4
  MetaOpt+SS (Lee et al. 2019)    | 19.4 |  6.0 | 26.2 |  6.4
  PCL (Tang et al. 2018)          | 29.1 | 11.4 | 41.1 | 14.7
  CAN (Hou et al. 2019)           | 60.7 | 19.3 | 74.8 | 26.0
  WSOD2 (Zeng et al. 2019)        | 47.8 | 16.2 | 54.6 | 18.7
  StarNet (Karlinsky et al. 2021) | 77.1 | 27.2 | 86.1 | 32.7
  WFS-DETR (ours)                 | 84.4 | 42.6 | 92.5 | 53.4

PASCAL VOC:
  TFA (fully-supervised upper bound)       |  –   | 31.4 |  –   | 46.8
  StarNet (Karlinsky et al. 2021)          | 34.1 | 16.0 | 52.9 | 23.0
  WFS-DETR (ours, average over 5-way sets) | 36.2 | 23.2 | 55.3 | 31.5

As shown in Tab.
1, our WFS-DETR method outperforms all the compared methods at every shot count and on all metrics, demonstrating its effectiveness and superiority in WS-FSOD. On ImageNetLoc-FS, for 1-shot, our method surpasses StarNet (Karlinsky et al. 2021) by 8.4% and 18.9% in terms of AP30 and AP50, respectively. As the number of shots grows, our method remains advantageous: for 5-shot, it outperforms StarNet (Karlinsky et al. 2021) by 4.7% and 17.9% in AP30 and AP50, respectively. In particular, our method performs significantly better than the previous SOTA method (Karlinsky et al. 2021) in terms of AP50, which requires higher localization accuracy. The experiments indicate that our method is able to accurately detect the entire object rather than only its parts, effectively tackling the most challenging discriminative-region problem in WS-FSOD. We also compare our method with TFA (Wang et al. 2020), a representative fully supervised FSOD method, on PASCAL VOC. With only image-level category labels, our approach comes close to the upper performance bound set by fully supervised TFA while outperforming StarNet.
Ablation Study
Here we conduct ablations on ImageNetLoc-FS (Eli et al. 2019). To minimize the impact of randomness, we take the average of numerous experiments as the final result.
Effect of pretraining strategy. In this study, we investigate the impact of the pretraining strategy on model performance. As shown in Tab. 2, we compare the effects of different pretraining strategies. When the model is pretrained with pseudo boxes generated by non-parametric methods such as random crop, edge box (Zitnick and Dollár 2014), and selective-search (Uijlings et al. 2013), its performance is poor. This can be attributed to the poor quality of the boxes generated by non-parametric methods: the generated boxes bring excessive background noise into the localization pretraining, hindering detection performance.
By utilizing ALN to generate pseudo boxes for pretraining, the performance is improved significantly. This demonstrates the importance of high-quality pretraining.

Table 2: Ablation study on the pretraining strategy. Here, 'RC', 'EB', 'SS', and 'ALN' refer to random crop, edge box, selective-search, and the attention-based localization network, respectively. Columns are 1-shot AP30/AP50 and 5-shot AP30/AP50.
  pretrained with RC  | 14.3 |  5.1 | 22.6 |  6.9
  pretrained with EB  | 21.2 |  8.3 | 30.4 |  9.7
  pretrained with SS  | 19.6 |  7.4 | 31.2 | 10.2
  pretrained with ALN | 58.4 | 45.3 | 68.3 | 52.8

Table 3: Ablation study on the refinement strategy. Here, 'CC' and 'OE' mean class confidence and object evidence. Columns are 1-shot AP30/AP50 and 5-shot AP30/AP50.
  neither  | 47.3 | 28.9 | 62.4 | 37.2
  CC only  | 53.4 | 40.2 | 64.7 | 49.6
  OE only  | 51.2 | 38.3 | 62.8 | 46.1
  CC + OE  | 58.4 | 45.3 | 68.3 | 52.8

Effect of pretraining proportion. Here, we examine how the size of the pretraining dataset impacts model performance, considering two factors: the number of images and the diversity of categories. As illustrated in Fig. 4, we divide the pretraining dataset into different groups based on image count and category variety. From the image standpoint, different groups share the same total categories but differ in image quantities. Regarding categories, different groups have varying category numbers while maintaining a uniform number of images per category. Notably, when the pretraining dataset reaches 40% of the total, performance is already close to the maximum. When a subset Cs of the categories C is pretrained with all its images, performance increases with the ratio |Cs|/|C| until 80%, where it peaks. This highlights the greater impact of training categories over the number of images and encourages efficient pretraining with an image subset that covers all the original categories in C.
Effect of DDPR. As shown in Tab. 3, we explore the effect of the two factors: class confidence (CC) and object evidence (OE).
The detector trained without refinement acts as the baseline. The results in Tab. 3 show that the improvements brought by individually using the class confidence or the object evidence for refinement are close. By comprehensively considering the optimization of both classification and localization, the cooperation of the two factors enables effective refinement and significantly improves model performance.
Figure 4: Ablation study on the size of the pretraining dataset. To strictly follow the few-shot setting, we remove all categories contained in the training dataset.
Figure 5: Comparison of WS-FSOD results. (a) ground truth. (b) OICR. (c) StarNet. (d) WFS-DETR. The GT boxes are in blue, and the prediction boxes are in red.
Visualization results. Examples of the detection results are shown in Fig. 5. Compared with other methods (Tang et al. 2017; Karlinsky et al. 2021), WFS-DETR locates objects more accurately and solves the problem of discriminative regions, which proves the effectiveness of our method.
Conclusion
In summary, we propose WFS-DETR, the first WS-FSOD work based on DETR, which leverages a pretraining-refinement mechanism to address the problem of discriminative regions. We enhance the detector's robustness in object localization and integrity judgment using the vision transformer (ViT) with pretraining and knowledge distillation, and we refine the model progressively by integrating object integrity into the multiple-instance learning (MIL) structure. Experiments on WS-FSOD benchmark datasets show that WFS-DETR achieves state-of-the-art performance, demonstrating the effectiveness of our approach.
Acknowledgements

Jihong Guan was partially supported by the National Key Research and Development Program of China (grant No. 2021YFC3300300) and the National Natural Science Foundation of China under Grant U1936205.

References

Bar, A.; Wang, X.; Kantorov, V.; Reed, C. J.; Herzig, R.; Chechik, G.; Rohrbach, A.; Darrell, T.; and Globerson, A. 2022. DETReg: Unsupervised pretraining with region priors for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14605–14615.
Bilen, H.; and Vedaldi, A. 2016. Weakly supervised deep detection networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2846–2854.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I, 213–229. Springer.
Chen, W.-Y.; Liu, Y.-C.; Kira, Z.; Wang, Y.-C. F.; and Huang, J.-B. 2019. A closer look at few-shot classification. arXiv preprint arXiv:1904.04232.
Dai, Z.; Cai, B.; Lin, Y.; and Chen, J. 2021. UP-DETR: Unsupervised pre-training for object detection with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1601–1610.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255. IEEE.
Schwartz, E.; Karlinsky, L.; Shtok, J.; Harary, S.; Marder, M.; Pankanti, S.; Feris, R. S.; Kumar, A.; Giryes, R.; and Bronstein, A. M. 2019. RepMet: Representative-based metric learning for classification and one-shot object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5197–5206.
Everingham, M.; Van Gool, L.; Williams, C.
K.; Winn, J.; and Zisserman, A. 2010. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88: 303–338.
Fan, Q.; Zhuo, W.; Tang, C.-K.; and Tai, Y.-W. 2020. Few-shot object detection with attention-RPN and multi-relation detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4013–4022.
Fan, Z.; Ma, Y.; Li, Z.; and Sun, J. 2021. Generalized few-shot object detection without forgetting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4527–4536.
Feng, C.; Zhong, Y.; and Huang, W. 2021. Exploring classification equilibrium in long-tailed object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3417–3426.
Gao, J.; Wang, J.; Dai, S.; Li, L.-J.; and Nevatia, R. 2019. NOTE-RCNN: Noise tolerant ensemble RCNN for semi-supervised object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9508–9517.
Gao, W.; Wan, F.; Pan, X.; Peng, Z.; Tian, Q.; Han, Z.; Zhou, B.; and Ye, Q. 2021. TS-CAM: Token semantic coupled attention map for weakly supervised object localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2886–2895.
Gupta, S.; Lakhotia, S.; Rawat, A.; and Tallamraju, R. 2022. ViTOL: Vision transformer for weakly supervised object localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4101–4110.
Hou, R.; Chang, H.; Ma, B.; Shan, S.; and Chen, X. 2019. Cross attention network for few-shot classification. In Proceedings of the Conference and Workshop on Neural Information Processing Systems, 4005–4016.
Hu, H.; Bai, S.; Li, A.; Cui, J.; and Wang, L. 2021. Dense relation distillation with context-aware aggregation for few-shot object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10185–10194.
Kang, B.; Liu, Z.; Wang, X.; Yu, F.; Feng, J.; and Darrell, T. 2019.
Few-shot object detection via feature reweighting. In Proceedings of the IEEE International Conference on Computer Vision, 8420–8429.
Karlinsky, L.; Shtok, J.; Alfassy, A.; Lichtenstein, M.; Harary, S.; Schwartz, E.; Doveh, S.; Sattigeri, P.; Feris, R.; Bronstein, A.; et al. 2021. StarNet: Towards weakly supervised few-shot object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 1743–1753.
Lee, K.; Maji, S.; Ravichandran, A.; and Soatto, S. 2019. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10657–10665.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10012–10022.
Maron, O.; and Lozano-Pérez, T. 1997. A framework for multiple-instance learning. Advances in Neural Information Processing Systems, 10.
Pei, W.; Wu, S.; Mei, D.; Chen, F.; Tian, J.; and Lu, G. 2022. Few-shot object detection by knowledge distillation using bag-of-visual-words representations. In Proceedings of the European Conference on Computer Vision, 283–299.
Qiao, L.; Zhao, Y.; Li, Z.; Qiu, X.; Wu, J.; and Zhang, C. 2021. DeFRCN: Decoupled Faster R-CNN for few-shot object detection. In Proceedings of the IEEE International Conference on Computer Vision, 8661–8670.
Redmon, J.; and Farhadi, A. 2018. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 28.
Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; and Savarese, S. 2019. Generalized intersection over union: A metric and a loss for bounding box regression.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 658–666.
Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, 618–626.
Shaban, A.; Rahimi, A.; Ajanthan, T.; Boots, B.; and Hartley, R. 2022. Few-shot weakly-supervised object detection via directional statistics. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 3920–3929.
Sun, B.; Li, B.; Cai, S.; Yuan, Y.; and Zhang, C. 2021. FSCE: Few-shot object detection via contrastive proposal encoding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7352–7362.
Tang, P.; Wang, X.; Bai, S.; Shen, W.; Bai, X.; Liu, W.; and Yuille, A. 2018. PCL: Proposal cluster learning for weakly supervised object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(1): 176–191.
Tang, P.; Wang, X.; Bai, X.; and Liu, W. 2017. Multiple instance detection network with online instance classifier refinement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2843–2851.
Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; and Jégou, H. 2021. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, 10347–10357. PMLR.
Uijlings, J. R.; Van De Sande, K. E.; Gevers, T.; and Smeulders, A. W. 2013. Selective search for object recognition. International Journal of Computer Vision, 104: 154–171.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wah, C.; Branson, S.; Welinder, P.; Perona, P.; and Belongie, S. 2011. The Caltech-UCSD Birds-200-2011 Dataset, 1–15.
Wang, X.; Huang, T.
E.; Gonzalez, J.; Darrell, T.; and Yu, F. 2020. Frustratingly simple few-shot object detection. In Proceedings of the International Conference on Machine Learning, 9919–9928.
Wu, S.; Pei, W.; Mei, D.; Chen, F.; Tian, J.; and Lu, G. 2022. Multi-faceted distillation of base-novel commonality for few-shot object detection. In Proceedings of the European Conference on Computer Vision, 578–594.
Xu, J.; Hou, J.; Zhang, Y.; Feng, R.; Zhao, R.-W.; Zhang, T.; Lu, X.; and Gao, S. 2022. CREAM: Weakly supervised object localization via class re-activation mapping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9437–9446.
Yan, X.; Chen, Z.; Xu, A.; Wang, X.; Liang, X.; and Lin, L. 2019. Meta R-CNN: Towards general solver for instance-level low-shot learning. In Proceedings of the IEEE International Conference on Computer Vision, 9577–9586.
Xiao, Y.; and Marlet, R. 2020. Few-shot object detection and viewpoint estimation for objects in the wild. In Proceedings of the European Conference on Computer Vision.
Zeng, Z.; Liu, B.; Fu, J.; Chao, H.; and Zhang, L. 2019. WSOD2: Learning bottom-up and top-down objectness distillation for weakly-supervised object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8292–8300.
Zhang, G.; Luo, Z.; Cui, K.; Lu, S.; and Xing, E. P. 2021a. Meta-DETR: Image-level few-shot detection with inter-class correlation exploitation. arXiv:2103.11731.
Zhang, L.; Zhou, S.; Guan, J.; and Zhang, J. 2021b. Accurate few-shot object detection with support-query mutual guidance and hybrid loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14424–14432.
Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; and Ren, D. 2020. Distance-IoU loss: Faster and better learning for bounding box regression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 12993–13000.
Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; and Torralba, A. 2016.
Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2921–2929.
Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable DETR: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159.
Zitnick, C. L.; and Dollár, P. 2014. Edge boxes: Locating object proposals from edges. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, 391–405. Springer.
S2WAT: Image Style Transfer via Hierarchical Vision Transformer Using Strips Window Attention

Chiyu Zhang1,2, Xiaogang Xu3,4,*, Lei Wang1, Zaiyan Dai1, Jun Yang1,5,*
1Sichuan Normal University 2Nanjing University of Aeronautics and Astronautics 3Zhejiang Lab 4Zhejiang University 5Visual Computing and Virtual Reality Key Laboratory of Sichuan Province
{alienzhang19961005, xiaogangxu00, londmi9, zaiyan.dai}@gmail.com, jkxy yjun@sicnu.edu.cn

Abstract

Transformer's recent integration into style transfer leverages its proficiency in establishing long-range dependencies, albeit at the expense of attenuated local modeling. This paper introduces the Strips Window Attention Transformer (S2WAT), a novel hierarchical vision transformer designed for style transfer. S2WAT employs attention computation in diverse window shapes to capture both short- and long-range dependencies. The resulting dependencies are merged with the "Attn Merge" strategy, which adaptively determines spatial weights based on their relevance to the target. Extensive experiments on representative datasets show the proposed method's effectiveness compared to state-of-the-art (SOTA) transformer-based and other approaches. The code and pre-trained models are available at https://github.com/AlienZhang1996/S2WAT.

Introduction

Background. Image style transfer imparts artistic characteristics from a style image to a content image, evolving from traditional (Efros and Freeman 2001) to iterative (Gatys, Ecker, and Bethge 2015, 2016) and feed-forward methods (Johnson, Alahi, and Fei-Fei 2016; Chen et al. 2017). Handling multiple styles concurrently remains a challenge, addressed by Universal Style Transfer (UST) (Park and Lee 2019; Kong et al. 2023; Li et al. 2022). This sparks innovative approaches like attention mechanisms for feature stylization (Yao et al. 2019; Deng et al. 2020; Chen et al. 2021), the Flow-based method (An et al. 2021) for content leakage, and Stable Diffusion Models (SDM) for creative outcomes (Zhang et al.
2023). New neural architectures, notably the transformer, show remarkable potential. (Deng et al. 2022) introduces StyTr2, leveraging transformers for SOTA performance. However, StyTr2's encoder risks losing information due to one-time downsampling, and its global MSA (multi-head self-attention) impacts local details.

*Corresponding authors
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Illustration of the locality problem. (a) Content. (b) Style. (c) Results of the Swin-based encoder. (d) Results from our S2WAT.

Challenge. To enhance the transformer's local modeling capability, recent advancements propose the use of window-based attention computation, exemplified by hierarchical structures like the Swin Transformer (Liu et al. 2021). However, applying window-based transformers directly for feature extraction in style transfer can lead to grid-like patterns, as depicted in Fig. 1 (c). This arises from the localized nature of window attention, termed the locality problem. While window shifting can capture long-range dependencies (Liu et al. 2021), it necessitates deep layer stacks, introducing substantial model complexity for style transfer, particularly with high-resolution samples.

Motivation and Technical Novelty. Diverging from current transformer-based approaches, we introduce a novel hierarchical transformer framework for image style transfer, referred to as S2WAT (Strips Window Attention Transformer). This structure meticulously handles both local and global feature extraction, inheriting the efficiency of window-based attention. In detail, we introduce a distinct attention mechanism (Strips Window Attention, SpW Attention) that amalgamates outputs from multiple window attentions of varying shapes.
These diverse window shapes enhance the equilibrium between modeling short- and long-range dependencies, and their integration is facilitated through our devised "Attn Merge" technique. In this paper, we formulate the SpW Attention in a simple yet effective compound mode, which encompasses three window types: horizontal strip-like, vertical strip-like, and square windows. The attention computations derived from strip windows emphasize long-range modeling for extracting non-local features, while the square window attention focuses on short-range modeling for capturing local features. Furthermore, the "Attn Merge" method combines attention outputs from various windows by computing spatial correlations between them and the input. These calculated correlation scores serve as merging weights. In contrast to static merge strategies like summation and concatenation, "Attn Merge" dynamically determines the significance of different window attentions, thus enhancing the transfer effect.

Contributions. Extensive quantitative and qualitative experiments are conducted to prove the effectiveness of the proposed framework, including a large-scale user study. The main contributions of our work include:
• We introduce a pioneering image style transfer framework, S2WAT, founded on a hierarchical transformer. This framework adeptly undertakes both short- and long-range modeling concurrently, effectively mitigating the challenge of locality issues.
• We devise a novel attention computation within the transformer for style transfer, termed SpW Attention. This mechanism intelligently merges outputs from diverse window attentions using the "Attn Merge" approach.
• We extensively evaluate our proposed S2WAT on well-established public datasets, demonstrating its state-of-the-art performance for the style transfer task.

Related Work

Image Style Transfer.
Style transfer methods can fall into single-style (Ulyanov, Vedaldi, and Lempitsky 2016), multiple-style (Chen et al. 2017), and arbitrary-style (UST) (Zhang et al. 2022; Kong et al. 2023; Ma et al. 2023) categories based on their generalization capabilities. Besides models based on CNNs, recent works include Flow-based ArtFlow (An et al. 2021), transformer-based StyTr2 (Deng et al. 2022), and SDM-based InST (Zhang et al. 2023). ArtFlow, with Projection Flow Networks (PFN), achieves content-unbiased results, while IEST (Chen et al. 2021) and CAST (Zhang et al. 2022) use contrastive learning for appealing effects. InST achieves creativity through SDM. Models like (Wu et al. 2021b; Zhu et al. 2023; Hong et al. 2023) use transformers to fuse image features, and (Liu et al. 2022; Bai et al. 2023) encode text prompts for text-driven style transfer. StyTr2 leverages transformers as the backbone for pleasing outcomes. Yet, hierarchical transformers remain unexplored in style transfer.

Hierarchical Vision Transformer. Lately, there has been a resurgence of interest in hierarchical architectures within the realm of transformers. Examples include LeViT (Graham et al. 2021) & CvT (Wu et al. 2021a), which employ global MSA; PVT (Wang et al. 2021) & MViT (Fan et al. 2021), which compress the resolution of K & V. However, in these approaches, local information is not adequately modeled. While Swin effectively captures local information through shifted windows, it still gives rise to the locality problem when applied to style transfer (see Fig. 1). Intuitive attempts, such as inserting global MSA (see Section Pre-Analysis) or introducing Mix-FFN (Xie et al. 2021) by convolutions (see appendix), are powerless against the locality problem. In the context of style transfer, a promising avenue involves advancing further with a new transformer architecture that encompasses both short- and long-range dependency awareness and possesses the capability to mitigate the locality problem.
Differences with Other Methods

While the attention mechanism in certain prior methods may share similarities with the proposed SpW Attention, several key distinctions exist. 1) The fusion strategy stands out: our proposed "Attn Merge" demonstrates remarkable superiority in image style transfer. 2) In our approach, all three window shapes shift based on the computation point, and their sizes dynamically adapt to variations in input sizes. Detailed differentiations from previous methods, such as CSWin, Longformer, and iLAT, have been outlined in the Appendix.

Figure 2: Results of the Swin-based encoder experiments. 7-7-7 means the Swin used has 3 stages (each stage with 2 layers) and 7 is the window size of each layer. 7-7-(7-224) denotes that the window size of the last layer in the last stage is 224, which represents global MSA. (a) Results of experiments inserting global MSA in certain layers. (b) Results of experiments changing the window size.

Pre-Analysis

Our preliminary analysis aims to unveil the underlying causes of the grid-like outcomes (the locality problem) that arise when directly employing Swin for style transfer. Our hypothesis points towards the localized nature of window attention as the primary factor. To validate this hypothesis, we undertake experiments across four distinct facets, as discussed in this Section. The details of the models tested in this part can be found in the Appendix.

Global MSA for Locality Problem

The locality problem should be relieved or excluded when applying global MSA instead of window or shifted window attention, if the locality of window attention is the culprit. In the Swin-based encoder, we substitute the last one or two window attentions with global MSA, configuring the window size for target layers at 224 (matching input resolution). Fig.
2 (a) presents the experiment results, highlighting grid-like textures at a window size of 7 (column 3) and block-like formations when the last window attention is swapped with global MSA (column 4). While replacing the last two window attentions with global MSA effectively alleviates grid-like textures, complete exclusion remains a challenge. This series of experiments substantiates that the locality problem indeed stems from the characteristics of window attention.

Figure 3: The last feature maps from the Swin-based encoder.

Figure 4: Attention maps (row 1) and similarity maps (rows 2-6) of the five points. Attention and similarity maps differ in shape; they are scaled for easy observation. hi denotes the i-th attention head. S/A.M. is short for "similarity/attention maps" and L/R/T/D/C for "left/right/top/down/center".

Influence of Window Size for Locality Problem

The window size in window attention, akin to the receptive field in a CNN, delineates the computational scope. To examine the impact of window size, assuming the locality of window attention causes the locality problem, we investigate three scenarios: window sizes of 4, 7, and 14 for the last stage. The outcomes of these experiments are depicted in Fig. 2 (b). Notably, relatively small blocks emerge with a window size of 4 (column 4), while a shifted window's rough outline materializes with a window size of 14 (column 5). This series of experiments underscores the pivotal role of window size in the locality problem.

Locality Phenomenon in Feature Maps

The previous parts discuss changes in external factors; we now examine internal factors.
Since the basis of the transformer-based transfer module is the similarity between content and style features, the content features should leave clues if the stylized images are grid-like. For this reason, we inspect all the feature maps from the last layer of the encoder and list some of them (see Fig. 3), which provide convincing evidence that features from window attention are strongly localized.

Locality Phenomenon in Attention Maps

To highlight the adverse impact of content feature locality on stylization, we analyze attention maps from the first inter-attention (Cheng, Dong, and Lapata 2016) in the transfer module (see Fig. 4). Five points, representing corners (p1: top-left in red, p2: top-right in green, p3: bottom-left in blue, p4: bottom-right in black), and the central point (p5: white) are selected from style features to gauge their similarity with content features. These points, extracted from specific columns of attention maps and reshaped into squares, mirror content feature shapes. The similarity map of p1 reveals pronounced responses aligned with red blocks in the stylized image. Conversely, p2, p3, and p5 exhibit robust responses in areas devoid of red blocks. As for p4's similarity map, responses are distributed widely. These outcomes underline the propagation of window attention's locality from content features within the encoder to the attention maps of the transfer module. This influence significantly disrupts the stylization process, ultimately culminating in the locality problem. To address this issue, we present the SpW Attention and S2WAT solutions.

Method

Fig. 5 (c) presents the workflow of the proposed S2WAT.

Strips Window Attention

As illustrated in Fig. 5 (b), SpW Attention comprises two distinct phases: a window attention phase and a fusion phase.

Window Attention.
Assuming input features possess a shape of C × H × W and n denotes the strip width, the first phase involves three distinct window attentions: a horizontal strip-like window attention with a window size of n × W, a vertical strip-like window attention with a window size of H × n, and a square window attention with a window size of M × M (where M = 2n). A single strip-like window attention captures local information along one axis while accounting for long-range dependencies along the other. In contrast, the square window attention focuses on the surrounding information. Combining the outputs of these window attentions results in outputs that consider both local information and long-range dependencies. Illustrated in Fig. 6, the merged approach gathers information from a broader range of targets, striking a favorable balance between computational complexity and the ability to sense global information. In computing square window attention, we follow (Liu et al. 2021) to include a relative position bias B ∈ R^{M²×M²} for each head in computing the attention map, as

W-MSA_{M×M}(Q, K, V) = Softmax(QKᵀ/√d + B)V, (1)

where Q, K, V ∈ R^{M²×d} are the query, key, and value matrices; d is the dimension of the query/key, M² is the number of patches in the window, and W-MSA_{M×M} denotes multi-head self-attention using a window of shape M × M. We exclusively apply relative position bias to square window attention, as introducing it to strip-like window attention did not yield discernible enhancements.

Attn Merge. Following the completion of the window attention phase, a fusion module named "Attn Merge" is engaged to consolidate the outcomes with the inputs. Illustrated in Fig. 7, "Attn Merge" comprises three core steps: first, tensor stacking; second, similarity computation between the first tensor and the rest at every spatial location; third, weighted summation based on similarity.
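As a concrete illustration, the square-window attention of Eq. (1) and the three "Attn Merge" steps just described can be sketched in NumPy. This is a minimal single-head sketch, not the actual implementation; the softmax weighting over branches follows the workflow shown in Fig. 7:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(Q, K, V, B):
    """Single-head W-MSA inside one M x M window (Eq. 1).
    Q, K, V: (M*M, d) patch features; B: (M*M, M*M) relative position bias."""
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d) + B) @ V

def attn_merge(x, a, b, c):
    """'Attn Merge': stack the input and the three window-attention outputs,
    weight each branch by its similarity to the input at every spatial
    location, and sum. x, a, b, c: (n, d)."""
    Y = np.stack([x, a, b, c], axis=1)       # step 1: stack, (n, 4, d)
    sim = np.einsum('nd,nkd->nk', x, Y)      # step 2: per-location dot products
    w = softmax(sim)[:, :, None]             # branch weights sum to 1
    return (w * Y).sum(axis=1)               # step 3: weighted sum, (n, d)
```

When all branches carry the same features, the four weights are equal and the merge returns the input unchanged, which makes the adaptive weighting easy to sanity-check.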
The computational efficiency of "Attn Merge" is noteworthy, as

Y = Stack(x, a, b, c), Y ∈ R^{n×4×d},
x′ = Unsqueeze(x), x′ ∈ R^{n×1×d},
Z = x′YᵀY, Z ∈ R^{n×1×d},
z = Squeeze(Z), z ∈ R^{n×d}, (2)

where x, a, b, c ∈ R^{n×d} are the input tensors and z is the output; Stack denotes the operation that collects tensors in a new dimension, and Unsqueeze/Squeeze adds or removes a dimension of a tensor.

Figure 5: Overall pipeline of the proposed S2WAT. Given a content image Ic and a style image Is, the encoder produces corresponding features fc and fs. These features undergo style transfer from fs to fc within the transfer module, yielding stylized features fcs. Subsequently, the stylized features are decoded in the decoder to generate the stylized image Ics.

Figure 6: Receptive field of Strips Window Attention. A single strip-like window attention or square window attention can only glean information from limited targets in the image, while the merged one enlarges the receptive region in multiple directions. S.W. denotes "strip window".

Strips Window Attention Block.
We now provide an overview of the comprehensive workflow of the SpW Attention block. The structure of the SpW Attention block mirrors that of a standard transformer block, except for the substitution of MSA with a SpW Attention (SpW-MSA) module. As depicted in Fig. 5 (b), a SpW Attention block comprises a SpW-MSA module, succeeded by a two-layer MLP featuring GELU as the non-linear activation in between. Preceding each SpW-MSA module and MLP, a LayerNorm (LN) operation is applied, and a residual connection is integrated after each module.

Figure 7: Workflow of "Attn Merge". W./A. denotes "Window/Attention".

The computation process of a SpW Attention block unfolds as follows:

ẑ^l_{n×W} = W-MSA_{n×W}(LN(z^{l−1})),
ẑ^l_{H×n} = W-MSA_{H×n}(LN(z^{l−1})),
ẑ^l_{2n×2n} = W-MSA_{2n×2n}(LN(z^{l−1})),
z̃^l = A(LN(z^{l−1}), ẑ^l_{n×W}, ẑ^l_{H×n}, ẑ^l_{2n×2n}) + z^{l−1},
z^l = MLP(LN(z̃^l)) + z̃^l, (3)

where "A" means "Attn Merge"; z^l, z̃^l, and ẑ^l denote the outputs of the MLP, "Attn Merge", and W-MSA for block l, respectively; W-MSA_{n×m} denotes multi-head self-attention using a window of shape n × m. As shown in (3), the SpW Attention block primarily consists of two parts: SpW Attention (comprising W-MSA and "Attn Merge") and an MLP.

Computational Complexity. To make the computation cost of SpW Attention clear, we compare the computational complexity of MSA, W-MSA, and the proposed SpW-MSA. Supposing the window size of W-MSA and the strip width of SpW-MSA are both equal to M, and C is the dimension of the inputs, the computational complexities of a global MSA, a square-window one, and a Strips Window one on an image of h × w patches are:

Ω(MSA) = 2(wh)²C + 4whC², (4)
Ω(W-MSA) = 2M²whC + 4whC², (5)
Ω(SpW-MSA) = 2M(w²h + wh² + 4Mwh)C + 12whC² + 8whC.
(6)

As shown in Eqs. (4)-(6), MSA is quadratic in the patch number hw, and W-MSA is linear when M is fixed; the proposed SpW-MSA lies in between.

Overall Architecture

In contrast to StyTr2 (Deng et al. 2022), which employs separate encoders for different input domains, we adhere to the conventional encoder-transfer-decoder design of UST. This architecture encodes content and style images using a single encoder. An overview is depicted in Fig. 5.

Encoder. Like Swin, S2WAT's encoder initially divides content and style images into non-overlapping patches using a patch partition module. These patches serve as "tokens" in transformers. We configure the patch size as 2 × 2, resulting in a patch dimension of 2 × 2 × 3 = 12. Subsequently, a linear embedding layer transforms the patches into a user-defined dimension (C). After embedding, the patches proceed through a series of consecutive SpW Attention blocks, nestled between padding and un-padding operations. Patches are padded to achieve divisibility by twice the strip width and cropped (un-padded) after the SpW Attention blocks, preserving the patch count. Notably, patch padding employs reflection to mitigate potential light-edge artifacts that can arise when using constant 0 padding. These SpW Attention blocks uphold the patch count (H/2 × W/2) and, in conjunction with the patch embedding layer and padding/un-padding operations, constitute "Stage 1". To achieve multi-scale features, gradual reduction of the patch count is necessary as the network deepens. Swin introduces a patch merging layer as a down-sample module, extracting elements with a two-step interval along the horizontal and vertical axes. By concatenating 2 × 2 groups of these features in the channel dimension and reducing the channels from 4C to 2C through linear projection, a 2x downsampling result is obtained.
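The patch merging step just described (gather each 2 × 2 group of patches, concatenate along channels to 4C, then project 4C → 2C) can be sketched as follows. A hedged NumPy sketch, with `W_proj` standing in for the learned linear projection:

```python
import numpy as np

def patch_merging(x, W_proj):
    """Swin-style patch merging between stages: halve the spatial
    resolution and double the channel dimension.

    x:      (H, W, C) patch features, with H and W even.
    W_proj: (4C, 2C) linear projection applied after concatenation.
    """
    H, W, C = x.shape
    x0 = x[0::2, 0::2]   # top-left of each 2x2 group
    x1 = x[1::2, 0::2]   # bottom-left
    x2 = x[0::2, 1::2]   # top-right
    x3 = x[1::2, 1::2]   # bottom-right
    merged = np.concatenate([x0, x1, x2, x3], axis=-1)  # (H/2, W/2, 4C)
    return merged @ W_proj                              # (H/2, W/2, 2C)
```

Applied repeatedly between stages, this is what turns the H/2 × W/2 tokens of Stage 1 into the H/4 × W/4 and H/8 × W/8 hierarchical features of Stages 2 and 3.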
Subsequent application of SpW Attention blocks, flanked by padding and un-padding operations, transforms the features while preserving a resolution of H/4 × W/4. This combined process is designated as "Stage 2". This sequence is reiterated for "Stage 3", yielding an output resolution of H/8 × W/8. Consequently, the encoder's hierarchical features in S2WAT can readily be employed with techniques like FPN or U-Net.

Transfer Module. A multi-layer transformer decoder serves as the transfer module, similar to StyTr2 (Deng et al. 2022). In our implementation, we maintain a close resemblance to the original transformer decoder (Vaswani et al. 2017), with two key distinctions from StyTr2: a) the initial attention module of each transformer decoder layer is MSA, whereas StyTr2 employs MHA (multi-head attention); b) LayerNorm precedes the attention module and MLP, rather than following them. The structure is presented in Fig. 5 (e) and more details can be found in the code.

Decoder. In line with prior research (Huang and Belongie 2017; Park and Lee 2019; Deng et al. 2021), we utilize a mirrored VGG for decoding stylized features. Detailed implementations are available in the code.

Network Optimization

Similar to (Huang and Belongie 2017), we formulate two distinct perceptual losses for gauging the content dissimilarity between stylized images Ics and content images Ic, along with the style dissimilarity between stylized images Ics and style images Is. The content perceptual loss is defined as:

L_content = Σ_{l∈C} ∥ϕ̄_l(Ics) − ϕ̄_l(Ic)∥₂, (7)

where the overline denotes mean-variance channel-wise normalization; ϕ_l(·) represents extracting the features of layer l from a pre-trained VGG19 model; C is a set consisting of relu4_1 and relu5_1 in the VGG19.
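Eq. (7), together with the statistics-matching style term defined next, can be sketched as follows. This is a simplified per-layer sketch on toy feature maps rather than real VGG19 activations, and it uses a mean-squared variant of the distance terms:

```python
import numpy as np

def mean_std(f, eps=1e-5):
    """Channel-wise mean and std over spatial positions; f: (C, H, W)."""
    mu = f.mean(axis=(1, 2), keepdims=True)
    sigma = np.sqrt(f.var(axis=(1, 2), keepdims=True) + eps)
    return mu, sigma

def normalize(f):
    """Mean-variance channel-wise normalization (the overline in Eq. 7)."""
    mu, sigma = mean_std(f)
    return (f - mu) / sigma

def content_loss(f_cs, f_c):
    """Per-layer content term: distance between normalized features."""
    return np.mean((normalize(f_cs) - normalize(f_c)) ** 2)

def style_loss(f_cs, f_s):
    """Per-layer style term: match channel-wise mean and std (Eq. 8)."""
    mu1, s1 = mean_std(f_cs)
    mu2, s2 = mean_std(f_s)
    return np.mean((mu1 - mu2) ** 2) + np.mean((s1 - s2) ** 2)
```

In the full objective, these per-layer terms are summed over the relu4_1/relu5_1 layers for content and relu2_1 through relu5_1 for style.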
The style perceptual loss is defined as:

$L_{style} = \sum_{l \in S} \left\| \mu(\phi_l(I_{cs})) - \mu(\phi_l(I_s)) \right\|_2 + \left\| \sigma(\phi_l(I_{cs})) - \sigma(\phi_l(I_s)) \right\|_2, \quad (8)$

where $\mu(\cdot)$ and $\sigma(\cdot)$ denote the mean and variance of the features, respectively, and $S$ is the set consisting of relu2_1, relu3_1, relu4_1 and relu5_1 in the VGG19. We also adopt identity losses (Park and Lee 2019) to further maintain the structure of the content image and the style characteristics of the style image. The two identity losses are defined as:

$L_{id1} = \|I_{cc} - I_c\|_2 + \|I_{ss} - I_s\|_2, \quad (9)$

$L_{id2} = \sum_{l \in N} \|\phi_l(I_{cc}) - \phi_l(I_c)\|_2 + \|\phi_l(I_{ss}) - \phi_l(I_s)\|_2, \quad (10)$

where $I_{cc}$ (or $I_{ss}$) denotes the output image stylized from two identical content (or style) images, and $N$ is the set consisting of relu2_1, relu3_1, relu4_1 and relu5_1 in the VGG19. Finally, our network is trained by minimizing the total loss:

$L_{total} = \lambda_c L_{content} + \lambda_s L_{style} + \lambda_{id1} L_{id1} + \lambda_{id2} L_{id2}, \quad (11)$

where $\lambda_c$, $\lambda_s$, $\lambda_{id1}$, and $\lambda_{id2}$ are the weights of the different losses. We set them to 2, 3, 50, and 1, respectively, to relieve the impact of magnitude differences.

Experiments
MS-COCO (Lin et al. 2014) and WikiArt (Phillips and Mackintosh 2011) are used as the content dataset and the style dataset, respectively. Other implementation details are available in the Appendix and the code. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7028

Figure 8: Visual comparison of the results from SOTA methods and visualization of content leak. A.F. denotes "ArtFlow".

Style Transfer Results
In this section, we compare the results of the proposed S2WAT with previous SOTAs, including AdaIN (Huang and Belongie 2017), WCT (Li et al. 2017), SANet (Park and Lee 2019), MCC (Deng et al. 2021), ArtFlow (An et al. 2021), IEST (Chen et al. 2021), CAST (Zhang et al. 2022), StyTr2 (Deng et al.
2022) and InST (Zhang et al. 2023).
Qualitative Comparison. In Fig. 8 (a), we present visual outcomes of the compared algorithms. AdaIN, relying on mean and variance alignment, fails to capture intricate style patterns. While WCT achieves multi-level stylization, it compromises content details. SANet, leveraging attention mechanisms, enhances style capture but may sacrifice content details. MCC, lacking non-linear operations, faces overflow issues. The flow-based ArtFlow produces content-unbiased outcomes but may exhibit undesired patterns at borders. CAST retains content structure through contrastive methods but may compromise style. InST's diffusion models yield creative results but occasionally sacrifice consistency. StyTr2 and the proposed S2WAT strike a superior balance, with S2WAT excelling in preserving content details (e.g., numbers on the train, the woman's glossy lips, and letters on billboards), as highlighted in the dashed boxes in Fig. 8 (a). Additional results are available in the Appendix.
Quantitative Comparison. In this section, we follow a methodology akin to (Huang and Belongie 2017; An et al. 2021; Deng et al. 2022), utilizing losses as indirect metrics. Style, content, and identity losses serve as metrics evaluating style quality, content quality, and input information retention, respectively. Additionally, inspired by (An et al. 2021), the Structural Similarity Index (SSIM) is included to gauge structure preservation. As shown in Table 1, S2WAT achieves the lowest content and identity losses, while SANet exhibits the lowest style loss. StyTr2 and S2WAT show comparable loss performance, emphasizing style and content, respectively. Owing to its content-unbiased design, ArtFlow registers identity losses of exactly 0. Although ArtFlow is unbiased, S2WAT outperforms it in style loss and SSIM, and S2WAT attains the highest SSIM overall, indicating superior retention of content structure.
It excels in preserving both content input structures and artistic style characteristics simultaneously.

Content Leak
The content leak problem occurs when the same style image is applied to a content image repeatedly, especially if the model struggles to preserve content details impeccably. Following (An et al. 2021; Deng et al. 2022), we investigate content leakage in the stylization process, focusing on S2WAT and comparing it to ArtFlow, StyTr2, CNN-based, and SDM-based methods. Our experiments, detailed in Fig. 8 (b), reveal that S2WAT and StyTr2, both transformer-based, exhibit minimal content detail loss over 20 iterations, surpassing the CNN and SDM methods known for noticeable blurriness. While CAST alleviates content leak partially, the stylized effect remains suboptimal. InST occasionally underperforms, especially when the styles of the content and style inputs differ significantly, potentially due to overfitting in the Textual Inversion module during single-image training. In summary, S2WAT effectively mitigates the content leak issue. More details are available in the Appendix.

| Method | Ours | InST | StyTr2 | CAST | IEST | ArtFlow-AdaIN | ArtFlow-WCT | MCC | SANet | WCT | AdaIN |
| Content Loss ↓ | 1.66 | 3.73 | 1.83 | 2.07 | 1.81 | 1.93 | 1.73 | 1.92 | 2.16 | 2.56 | 1.71 |
| Style Loss ↓ | 1.74 | 29.98 | 1.52 | 4.33 | 2.72 | 1.90 | 1.89 | 1.70 | 1.11 | 2.23 | 3.50 |
| Identity Loss 1 ↓ | 0.16 | 0.71 | 0.26 | 1.94 | 0.91 | 0.00 | 0.00 | 1.07 | 0.81 | 3.01 | 2.54 |
| Identity Loss 2 ↓ | 1.38 | 134.23 | 3.10 | 18.72 | 7.16 | 0.00 | 0.00 | 7.72 | 6.03 | 21.88 | 17.97 |
| SSIM ↑ | 0.651 | 0.401 | 0.605 | 0.619 | 0.551 | 0.578 | 0.612 | 0.578 | 0.448 | 0.364 | 0.539 |

Table 1: Quantitative evaluation results of different style transfer methods. The losses above are average values over 400 random samples, while SSIM values are averaged over 100 images. For a fair comparison, relu1_1 is taken into consideration in computing the style loss and identity loss 2, although it is not used in the training of S2WAT.
The optimal results are highlighted in bold, the second-best results are underlined, and instances with a value of 0 are derived from unbiased methods.

Figure 9: Visual comparisons when utilizing different fusion strategies for attention outputs from multiple windows.

Ablation Study
Attn Merge. To showcase the effectiveness and superiority of "Attn Merge", we undertake experiments in which "Attn Merge" is replaced by fusion strategies such as the concatenation operation (as employed by CSWin) or the sum operation. The outcomes are depicted in Fig. 9. Stylized images generated with the sum operation are extensively corrupted, indicating a failure of model optimization. Outputs obtained through concatenation, on the other hand, relinquish a substantial portion of the information from the input images, particularly the style images. An intuitive rationale for this phenomenon lies in the optimization challenges posed by straightforward fusion operations; comprehensive explanations are available in the Appendix. The proposed "Attn Merge", however, facilitates smooth information transmission, allowing the model to train normally.
Strips Window. To verify the need to fuse outputs from window attention of various sizes, we carry out experiments employing window attention with distinct window sizes independently. As illustrated in Fig. 10, using horizontal or vertical strip-like windows in isolation yields the corresponding striped patterns, and applying square windows alone results in grid-like patterns. However, incorporating "Attn Merge" to fuse the outcomes leads to pleasing stylized images, surpassing the results obtained from any single window attention. Further details regarding the ablation study for Swin and Swin with Mix-FFN can be found in the Appendix.

User Study
To compare the visual stylization effects of S2WAT with the aforementioned SOTAs, such as StyTr2, ArtFlow, MCC, and SANet, user studies were conducted.
Using a widely-employed online questionnaire platform, we created a dataset comprising 528 stylized images from 24 content images and 22 style images. Participants, briefed on image style transfer and provided with evaluation criteria, assessed 31 randomly selected content and style combinations. The criteria emphasized preserving content details and embodying artistic attributes. With 3002 valid votes from 72 participants of diverse backgrounds, including high school students and professionals in computer science, art, and photography, our method achieved a marginal victory in the user study, as reflected in Table 2. Additional details, including an example questionnaire page, can be found in the Appendix.

Figure 10: Visual comparisons for the ablation study when employing different window attention mechanisms.

| Method | Ours | StyTr2 | ArtFlow | MCC | SANet |
| Percent (%) | 25.4 | 23.6 | 13.3 | 19.4 | 18.3 |

Table 2: Percentage of votes in the user study.

Conclusion
In this paper, we introduce S2WAT, an image style transfer framework founded upon a hierarchical vision transformer architecture. S2WAT's strength lies in its capacity to capture local and global information simultaneously through SpW Attention. The SpW Attention mechanism, featuring window attention of diverse shapes, strikes a balance between short- and long-range dependency modeling, further enhanced by "Attn Merge", an adaptive merging technique that gauges the significance of the various window attentions based on their similarity to the target. Furthermore, S2WAT mitigates the content leak problem, yielding stylized images with vibrant style attributes and intricate content details.
Acknowledgements
This work was supported by the National Key R&D Program of China (2021YFB3100700), the National Natural Science Foundation of China (62076125, U20B2049, U22B2029, 62272228), and Shenzhen Science and Technology Program (JCYJ20210324134408023, JCYJ20210324134810028).

References
An, J.; Huang, S.; Song, Y.; Dou, D.; Liu, W.; and Luo, J. 2021. Artflow: Unbiased image style transfer via reversible neural flows. In CVPR.
Bai, Y.; Liu, J.; Dong, C.; and Yuan, C. 2023. ITstyler: Image-optimized Text-based Style Transfer. arXiv.
Chen, D.; Yuan, L.; Liao, J.; Yu, N.; and Hua, G. 2017. Stylebank: An explicit representation for neural image style transfer. In CVPR.
Chen, H.; Wang, Z.; Zhang, H.; Zuo, Z.; Li, A.; Xing, W.; Lu, D.; et al. 2021. Artistic style transfer with internal-external learning and contrastive learning. In NeurIPS.
Cheng, J.; Dong, L.; and Lapata, M. 2016. Long short-term memory-networks for machine reading. In EMNLP.
Deng, Y.; Tang, F.; Dong, W.; Huang, H.; Ma, C.; and Xu, C. 2021. Arbitrary video style transfer via multi-channel correlation. In AAAI.
Deng, Y.; Tang, F.; Dong, W.; Ma, C.; Pan, X.; Wang, L.; and Xu, C. 2022. StyTr2: Image Style Transfer with Transformers. In CVPR.
Deng, Y.; Tang, F.; Dong, W.; Sun, W.; Huang, F.; and Xu, C. 2020. Arbitrary style transfer via multi-adaptation network. In ACM MM.
Efros, A. A.; and Freeman, W. T. 2001. Image quilting for texture synthesis and transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques.
Fan, H.; Xiong, B.; Mangalam, K.; Li, Y.; Yan, Z.; Malik, J.; and Feichtenhofer, C. 2021. Multiscale vision transformers. In ICCV.
Gatys, L.; Ecker, A. S.; and Bethge, M. 2015. Texture synthesis using convolutional neural networks. In NeurIPS.
Gatys, L. A.; Ecker, A. S.; and Bethge, M. 2016. A neural algorithm of artistic style. In Vision Sciences Society.
Graham, B.; El-Nouby, A.; Touvron, H.; Stock, P.; Joulin, A.; Jégou, H.; and Douze, M. 2021. Levit: a vision transformer in convnet's clothing for faster inference. In ICCV.
Hong, K.; Jeon, S.; Lee, J.; Ahn, N.; Kim, K.; Lee, P.; Kim, D.; Uh, Y.; and Byun, H. 2023. AesPA-Net: Aesthetic Pattern-Aware Style Transfer Networks. In ICCV.
Huang, X.; and Belongie, S. 2017. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV.
Johnson, J.; Alahi, A.; and Fei-Fei, L. 2016. Perceptual losses for real-time style transfer and super-resolution. In ECCV.
Kong, X.; Deng, Y.; Tang, F.; Dong, W.; Ma, C.; Chen, Y.; He, Z.; and Xu, C. 2023. Exploring the Temporal Consistency of Arbitrary Style Transfer: A Channelwise Perspective. IEEE Transactions on Neural Networks and Learning Systems.
Li, G.; Cheng, B.; Cheng, L.; Xu, C.; Sun, X.; Ren, P.; Yang, Y.; and Chen, Q. 2022. Arbitrary Style Transfer with Semantic Content Enhancement. In The 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry.
Li, Y.; Fang, C.; Yang, J.; Wang, Z.; Lu, X.; and Yang, M.-H. 2017. Universal style transfer via feature transforms. In NeurIPS.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In ECCV.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV.
Liu, Z.-S.; Wang, L.-W.; Siu, W.-C.; and Kalogeiton, V. 2022. Name your style: An arbitrary artist-aware image style transfer. arXiv.
Ma, Y.; Zhao, C.; Li, X.; and Basu, A. 2023. RAST: Restorable Arbitrary Style Transfer via Multi-Restoration. In WACV.
Park, D. Y.; and Lee, K. H. 2019. Arbitrary style transfer with style-attentional networks. In CVPR.
Phillips, F.; and Mackintosh, B. 2011. Wiki Art Gallery, Inc.: A case for critical thinking.
Issues in Accounting Education.
Ulyanov, D.; Vedaldi, A.; and Lempitsky, V. 2016. Instance normalization: The missing ingredient for fast stylization. arXiv.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In NeurIPS.
Wang, W.; Xie, E.; Li, X.; Fan, D.-P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; and Shao, L. 2021. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In ICCV.
Wu, H.; Xiao, B.; Codella, N.; Liu, M.; Dai, X.; Yuan, L.; and Zhang, L. 2021a. Cvt: Introducing convolutions to vision transformers. In ICCV.
Wu, X.; Hu, Z.; Sheng, L.; and Xu, D. 2021b. Styleformer: Real-time arbitrary style transfer via parametric style composition. In ICCV.
Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and efficient design for semantic segmentation with transformers. In NeurIPS.
Yao, Y.; Ren, J.; Xie, X.; Liu, W.; Liu, Y.-J.; and Wang, J. 2019. Attention-aware multi-stroke style transfer. In CVPR.
Zhang, Y.; Huang, N.; Tang, F.; Huang, H.; Ma, C.; Dong, W.; and Xu, C. 2023. Inversion-based style transfer with diffusion models. In CVPR, 10146–10156.
Zhang, Y.; Tang, F.; Dong, W.; Huang, H.; Ma, C.; Lee, T.-Y.; and Xu, C. 2022. Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning. In SIGGRAPH.
Zhu, M.; He, X.; Wang, N.; Wang, X.; and Gao, X. 2023. All-to-key Attention for Arbitrary Style Transfer. In ICCV.
Synergistic Multiscale Detail Refinement via Intrinsic Supervision for Underwater Image Enhancement
Dehuan Zhang1*, Jingchun Zhou1*†, Chunle Guo2, Weishi Zhang1, Chongyi Li2
1 College of Information Science and Technology, Dalian Maritime University
2 VCIP, CS, Nankai University
zhangdehuan97@gmail.com, zhoujingchun03@gmail.com, guochunle@nankai.edu.cn, teesiv@dlmu.edu.cn, lichongyi@nankai.edu.cn

Abstract
Visually restoring underwater scenes primarily involves mitigating interference from the underwater medium. Existing methods ignore the inherent scale-related characteristics of underwater scenes. We therefore present synergistic multiscale detail refinement via intrinsic supervision (SMDR-IS) for enhancing underwater scene details; the framework contains multiple stages. The low-degradation stages derived from the original images furnish the original stage with multi-scale details through feature propagation using the Adaptive Selective Intrinsic Supervised Feature (ASISF) module. By using intrinsic supervision, the ASISF module can precisely control and guide feature transmission across the multi-degradation stages, enhancing multi-scale detail refinement and minimizing interference from irrelevant information in the low-degradation stages. Within the multi-degradation encoder-decoder framework of SMDR-IS, we introduce the Bifocal Intrinsic-Context Attention Module (BICA). Based on intrinsic supervision principles, BICA efficiently exploits multi-scale scene information in images. BICA directs higher-resolution spaces by tapping into the insights of lower-resolution ones, underscoring the pivotal role of spatial contextual relationships in underwater image restoration. Throughout training, the inclusion of a multi-degradation loss function enhances the network, allowing it to adeptly extract information across diverse scales. When benchmarked against state-of-the-art methods, SMDR-IS consistently showcases superior performance.
The code is publicly available at: https://github.com/zhoujingchun03/SMDR-IS

*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Illustration of motivation. The figure showcases similar scene information extracted from multi-resolution images alongside transmission. Degradation patterns, consistent across different positions, are evident in both the original and the downscaled images.

Introduction
In the complex dynamics of underwater environments, the quality of optical images is primarily determined by the influence of dissolved and suspended substances on light absorption and scattering (Guo et al. 2023). Absorption effects lead to challenges like reduced imaging distance and color distortion, while scattering effects diminish image contrast and detail. Our goal is to enhance the quality of underwater optical images, providing robust solutions for applications such as underwater exploration, marine biology research, and surveillance (Liu et al. 2022b). Image enhancement techniques empower researchers and practitioners to more effectively interpret and analyze underwater image data (Kang et al. 2022). Enhancing low-quality images poses significant challenges to the field of computer vision (Ma et al. 2023; Khan et al. 2023; Zhou et al. 2024). These challenges arise primarily from the scattering and blurring effects unique to aquatic environments, which inherently manifest in a multi-scale manner (Zhou et al. 2023b). Particulate matter and water turbulence at different scales affect different parts of an image differently, leading to the degradation of multi-scale correlated features. The underwater image formation model (UIFM) (Zhou et al.
2023e) is represented as:

$I = J \times t + A(1 - t), \quad (1)$

where I represents the observed underwater image, J is the clear scene image, t represents the transmission related to depth, and A denotes the atmospheric light. The core objective of underwater image enhancement (UIE) is to accurately estimate J from Eq. (1) (Zhou et al. 2023a). This is crucial because degradation levels in underwater images can differ across pixels due to variations in distance. There are many existing enhancement techniques (Zhuang, Li, and Wu 2021), which extract features either globally or locally (Liu et al. 2022a; Liu et al. 2023). These methods often leverage filters (Zhou et al. 2023c) and color correction techniques (Zhou et al. 2022) to improve image quality. As illustrated in Figure 1, similar degradation scenarios exist at different resolution stages. Therefore, the inherent scale-related characteristics of multi-scale underwater scenes (Jiang et al. 2020; Hou et al. 2023) demand attention. By recognizing inherent scale-related degradation patterns, we can gain a richer understanding of the structure of the scene. In light of this, we propose Synergistic Multiscale Detail Refinement via Intrinsic Supervision for Underwater Image Enhancement (SMDR-IS). SMDR-IS uniquely leverages low-resolution images as auxiliary inputs, providing additional insights into scene degradation. SMDR-IS simultaneously harnesses multiple degradations to mitigate the loss of scene features resulting from repeated feature extraction. While feature extraction at the original resolution produces high-level features, it often captures finer details at the expense of larger-scale scene information. The low-degradation stages serve to counteract the excessive abstraction caused by redundant extractions, ensuring that the intrinsic relationship between the features and the original image remains intact.
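Eq. (1) can be sketched per pixel as below; this is a minimal illustration using the standard convention that the observed image is the transmission-weighted blend of clear radiance and atmospheric light, and that inverting the model (given t and A) recovers the clear image, which is the idealized goal of UIE:

```python
# Minimal sketch of the underwater image formation model in Eq. (1),
# applied per pixel: observed = clear * t + A * (1 - t).
# Real UIE methods must estimate t and A from the image itself; here both
# are given, so the inversion is exact.

def degrade(J, t, A):
    """Apply Eq. (1) to a flat list of clear intensities J in [0, 1]."""
    return [j * t + A * (1.0 - t) for j in J]

def restore(I, t, A, eps=1e-6):
    """Invert Eq. (1): J = (I - A * (1 - t)) / t."""
    return [(i - A * (1.0 - t)) / max(t, eps) for i in I]

J = [0.1, 0.5, 0.9]           # clear scene radiance
I = degrade(J, t=0.6, A=0.8)  # attenuated + backscattered observation
J_hat = restore(I, t=0.6, A=0.8)
assert all(abs(a - b) < 1e-9 for a, b in zip(J, J_hat))
```

Note how low transmission t pulls every pixel toward A, which is exactly the contrast loss and color cast the paper describes.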
By integrating low-degradation images and features from the initial layers, both credibility and applicability are enhanced. Furthermore, the incorporation of ASISF guarantees that irrelevant information is excluded from the original resolution. The contributions are summarized as follows:
(1) Addressing the limitations of scale-related features in current underwater image enhancement methods, which lead to incomplete restoration of scene details, we propose synergistic multiscale detail refinement via intrinsic supervision for underwater image enhancement.
(2) We design a new attention module, namely Bifocal Intrinsic-Context Attention, based on a dual-path approach, ensuring that both feature enhancement and contextual semantic information are addressed. Additionally, we integrate resolution-guided supervision to boost computational efficiency without sacrificing detail enhancement quality.
(3) With the aim of refining low-resolution feature information, we introduce the Adaptive Selective Intrinsic Supervised Feature (ASISF) module. ASISF regulates feature propagation, enhancing image quality and avoiding the blurring caused by naively overlaying multi-scale scene details.
(4) The integration of a multi-degradation loss function provides constraints and optimization for learning features at each stage. This approach empowers the network to effectively exploit information at various scales, thereby improving detail and structure recovery.

Related Work
UIE techniques have become increasingly prevalent in the domains of computer vision and image processing (Zhuang et al. 2022). Broadly speaking, deep learning-based UIE methods can be divided into prior-based methods and end-to-end methods.
Prior-based Image Enhancement. Prior-based methods rely on explicit degradation models or pre-existing knowledge to calibrate model parameters. For instance, UColor (Li et al.
2021), which is guided by transmission, employs a transmission-driven image enhancement network using GDCP (Peng, Cao, and Cosman 2018). (Zhou et al. 2023d) uses an augmented U-Net to fuse inputs such as the original, color-corrected, and contrast-enhanced images, to effectively leverage features. The unsupervised underwater image restoration method (USUIR) (Fu et al. 2022) designs a transmission subnet, a global background subnet, and a scene radiance subnet to estimate the parameters of the UIFM, facilitating photo-realistic image restoration. Zhang et al. (Zhang et al. 2023) proposed Rex-Net, which applies the Retinex theory to enhance underwater images. (Mu et al. 2023) employs physical knowledge to design an Adaptive Transmission-Guided Module that guides the network. While these methods have shown promising results, the intricate and unpredictable nature of underwater environments sometimes undermines the efficacy of the proposed priors, potentially limiting model adaptability.
End-to-end Image Enhancement. End-to-end UIE techniques circumvent the need for explicit degradation models or prior knowledge. UIEC2 (Wang et al. 2021) enhances images in the RGB and HSV color spaces, employing RGB pixel-level blocks for color restoration and adjusting saturation using curves in the HSV domain. In (Zhou, Zhang, and Zhang 2023), an enhancement method is devised based on cross-view images, which employs feature alignment to fuse scene information from multiple perspectives. The image-aware adversarial fusion network of (Jiang et al. 2022b), rooted in object detection, integrates a multi-scale dense enhancement subnet to bolster visual results. (Khan et al. 2023) proposed a lightweight transformer method (UIEPTA), which presents a gray-scale attention model to guide the network in extracting non-contaminated features.
Notwithstanding their merits, many existing end-to-end methods tend to overlook the full potential of scale-related features in image scenes. This oversight often results in difficulties in precise scene detail recovery. To address this gap, we introduce SMDR-IS, emphasizing synergistic multiscale detail refinement and intrinsic supervision.

Methodology
We propose Synergistic Multiscale Detail Refinement via Intrinsic Supervision for Underwater Image Enhancement (SMDR-IS), as depicted in Figure 2. SMDR-IS comprises a multi-degradation encoder and decoder.

Figure 2: The overview of SMDR-IS. In the decoder, $E_i^1$ is the same as in the encoder. $FE_i$, $i \in \{or, 2D, 4D, 8D\}$, is Feature Extraction (Conv). $E_i^1$ and $D_i^1$ denote the Bifocal Intrinsic-Context Attention (BICA). $E_i^j$, $j \in [2, 4]$, is Downsampling+BICA; $D_i^j$ is Upsampling+BICA. S denotes the Adaptive Selective Intrinsic Supervised Feature Module (ASISF). $FR_i$ is Feature Restoration (Conv).

Multi-Degradation Encoder
Our proposed model integrates four stages of multi-resolution image inputs, equipping the network and the related features with scene information of the input image at diverse scales. Initially, we employ downsampling on the original image to derive underwater scenes at different scales. Since prevailing UNet-based underwater enhancement methods (Li et al. 2021) typically utilize three downsampling steps, we adopt three downscaled stages corresponding to each stage of the UNet. This captures a richer array of scale scene information. To harness the scale-correlated attributes of image scenes, the multi-degradation encoder of SMDR-IS is supplemented by three stages for lower-resolution images. Each encoder stage incorporates the Bifocal Intrinsic-Context Attention Module (BICA) to extract feature details. BICA adeptly fuses global and local features, optimizing image enhancement.
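The construction of the four multi-resolution input stages described above can be sketched in plain Python; 2×2 average pooling is an assumption here, since the paper does not name the downsampling operator:

```python
# Hedged sketch of building the four input stages (original, 2x-, 4x-,
# and 8x-downsampled) consumed by the multi-degradation encoder.
# 2x2 average pooling is an illustrative choice, not the paper's spec.

def downsample2x(img):
    """2x2 average pooling on a single-channel image given as [H][W]."""
    H, W = len(img), len(img[0])
    return [
        [(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) / 4.0
         for j in range(0, W, 2)]
        for i in range(0, H, 2)
    ]

def build_stages(img, num_stages=4):
    """Return [original, 2x-down, 4x-down, 8x-down] versions of the input."""
    stages = [img]
    for _ in range(num_stages - 1):
        stages.append(downsample2x(stages[-1]))
    return stages

img = [[float(i + j) for j in range(8)] for i in range(8)]
stages = build_stages(img)
assert [len(s) for s in stages] == [8, 4, 2, 1]
```

The three downscaled stages mirror the three downsampling steps of the UNet backbone mentioned in the text, one auxiliary input per encoder stage.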
With the aid of resolution-guided supervision, BICA strikes a harmonious balance between computational efficiency and detail enhancement. To bolster the utility of low-resolution features, we propose the Adaptive Selective Intrinsic Supervised Feature (ASISF) module. This module governs feature propagation, elevating image enhancement outcomes and curbing the information blurring that may arise from superimposing multi-scale scene details.
Bifocal Intrinsic-Context Attention. The Bifocal Intrinsic-Context Attention module (BICA), as depicted in Figure 3, consists of two distinct branches. The first branch accounts for the significant influence of neighboring pixel regions in image restoration. This branch contains the Comprehensive Feature Attention (CFA) module (detailed in the supplementary material), followed by the Resolution-Guided Intrinsic Attention (ReGIA) module. The CFA module, as shown in Figure 3 (b), extracts features from pixels and channels through spatial attention and channel attention, respectively. The features from CFA are fed to the ReGIA module, whose function is to broaden the receptive field while maintaining computational efficiency. As shown in Figure 3 (c), by using low-resolution spatial intrinsic supervision, ReGIA further enriches the features by effectively capturing multi-scale scene details. In the second branch, acknowledging the pivotal role of spatial contextual relationships in underwater image restoration, we design the Hierarchical Context-Aware Feature Extraction (HCAFE) module (detailed in the supplementary material). This module functions within the original feature domain, extracting image representation features through hierarchical context attention. BICA thus leverages a resolution-guided supervision approach, achieving superior computational efficiency without sacrificing detail quality.
In short, BICA employs a dual-path attention mechanism that comprehensively addresses both the influence of neighboring pixel regions and spatial contextual relationships.
Resolution-Guided Intrinsic Attention Module. Underwater image optimization is heavily influenced by the contextual details of adjacent pixels. However, to limit the computational cost of the network, enhancement methods often utilize compact 3 × 3 convolutional kernels for feature extraction. Although 3 × 3 kernels are computationally efficient, their small receptive field hinders the network from capturing extensive contextual features. To address this limitation and broaden the receptive field, we propose the Resolution-Guided Intrinsic Attention Module (ReGIA), which enhances the preliminary features extracted from the input image by CFA. ReGIA is tailored to discern feature weights in a lower-resolution latent space with a large receptive field. Guided by the lower-resolution latent space, ReGIA not only enhances the correlation of features in the original domain but also improves the computational efficiency of the network.

Figure 3: (a) Architecture of BICA; LN represents Layer Normalization. (b) CFA; CA and SA represent Channel Attention and Spatial Attention, respectively. (c) ReGIA; "Down" and "Up" denote downsampling and upsampling, respectively. In the low resolution, a 1×1 location corresponds to a 6×6 spatial feature in the original resolution. (d) ASISF, highlighting that the channels of the input and reference features can differ.

Adaptive Selective Intrinsic Supervised Feature Module. When integrating feature information from the low-resolution encoder and decoder into the original-resolution branch, it is imperative to mitigate interference from non-essential information during image restoration (Zamir et al. 2021).
To this end, we introduce the Adaptive Selective Intrinsic Supervised Feature Module (ASISF). This module adopts an intrinsic-supervised approach for feature selection and constraint, as depicted in Figure 3 (d). ASISF is designed to retain only the most pertinent features to supplement image feature extraction and reconstruction at the original-resolution stage. When learning features from the low-resolution encoder, the features from the original resolution act as a reference. This strategy ensures that both the encoder and decoder components of the multi-degradation framework align with the genuine degradation traits inherent in underwater images. This is particularly significant in the context of image restoration, where the presence of noise, distortions, and varying levels of degradation can complicate the restoration process. By employing intrinsic supervision, ASISF effectively identifies and retains features meaningful for the restoration task, thereby improving the precision and efficiency of the restoration process. This adaptive feature selection is pivotal for the effective learning and generalization of SMDR-IS, ultimately enhancing image restoration by supplementing inherent scale-related degradation patterns at multiple resolutions.

Multi-Degradation Decoder
Unlike the traditional UNet, the multi-degradation decoder in SMDR-IS is uniquely designed to complement its encoder counterpart. Low-degradation information captures the inherent scale-related characteristics of input images. By integrating this information, the decoder gains a deeper insight into these degradation patterns and the multiple scenes. The fusion of low-degradation and residual information from the encoder equips the decoder with a holistic grasp of the complicated image degradation. This comprehensive understanding empowers the network to produce restoration results that are both precise and context-rich.
Essentially, the low-degradation information acts as a supplementary branch, enhancing the decoder's restoration capabilities. Consequently, the output images not only exhibit crisp details but also accurately recover information from multiple scenes. It is worth mentioning that within the FRiD, the ASISF performs intrinsic-supervised feature selection on the output features when feeding data to the original-degradation encoder. This ensures that only the most relevant features are propagated through the image processing pipeline. In essence, the multi-degradation decoder of SMDR-IS uses low-degradation cues to ingeniously supplement inherent scale-related characteristics, thereby notably improving the fidelity of the restored images.

Multi-Degradation Loss Function
To enhance the precision of the restoration process, SMDR-IS employs a multi-degradation loss function to govern the different modules of the network; it compares the multi-resolution enhanced images of SMDR-IS with the multi-resolution ground truth, so as to instill multi-resolution supervision.

L_1 = \sum_{i=1}^{N} |y_i - \hat{y}_i|   (2)

L_{pre} = \|\phi_i(y) - \phi_i(\hat{y})\|_2^2   (3)

L_{mse} = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2   (4)

L^j = L^j_1 + 0.2 \times L^j_{pre} + L^j_{mse}   (5)

where N is the total number of pixels in the image, y_i denotes the ground-truth pixel value, and \hat{y}_i represents the predicted pixel value. \phi_i denotes the i-th layer's feature maps of the pre-trained VGG network. L^j represents the loss function of the j-th stage, where j ranges from 1 to 4, the number of stages in SMDR-IS. The overall loss function L for training SMDR-IS can be written as follows:

L = \sum_{j=1}^{4} L^j   (6)

Figure 4: Qualitative comparison between state-of-the-art methods and SMDR-IS on various datasets.

Experiments
Datasets
We trained SMDR-IS on the UIEB dataset, which consists of 800 training images and 90 paired testing images.
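The multi-degradation loss of Eqs. (2)-(6) can be sketched numerically as follows. The VGG extractor \phi is replaced here by a simple 2×2 average pooling purely as a stand-in, and each array element is treated as a pixel value; this is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def phi(x):
    # Stand-in for the pre-trained VGG feature extractor phi in Eq. (3):
    # a 2x2 average pooling over a (C, H, W) array, used only for illustration.
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def stage_loss(y, y_hat):
    """L^j = L1 + 0.2 * Lpre + Lmse for one stage (Eqs. 2-5)."""
    l1 = np.abs(y - y_hat).sum()               # Eq. (2): sum of absolute errors
    lpre = ((phi(y) - phi(y_hat)) ** 2).sum()  # Eq. (3): squared l2 feature distance
    lmse = ((y - y_hat) ** 2).mean()           # Eq. (4): mean squared error
    return l1 + 0.2 * lpre + lmse

def total_loss(stage_gts, stage_outs):
    """L = sum of the stage losses over the four stages (Eq. 6)."""
    return sum(stage_loss(y, yh) for y, yh in zip(stage_gts, stage_outs))
```

In training, `stage_gts` would hold the ground truth downsampled to each stage's resolution and `stage_outs` the corresponding stage outputs of the network.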
To assess the robustness of SMDR-IS, we further conducted evaluations on various datasets, i.e., UIEB, U45, and LSUI.

Implementation Details
Our method was implemented in PyTorch, with an NVIDIA Tesla V100 GPU, an Intel(R) Xeon(R) Silver 4114 CPU, and 32 GB RAM. For training, images were randomly cropped to a resolution of 256×256, and we used a batch size of 44 and a learning rate of 0.0002. To ensure that SMDR-IS can generate outputs consistent with the original image size during testing, we adopted border padding techniques.

Comparison Results
In this section, we adopted both objective assessments (UIQM, UCIQE (Jiang et al. 2022a), CCF (Wang et al. 2018), CEIQ (Fang et al. 2014), VSI (Zhang, Shen, and Li 2014), PSNR (Korhonen and You 2012), MSE, SSIM (Wang et al. 2004), FSIM, and FSIMc (Zhang et al. 2011)) and subjective evaluations for a comprehensive analysis. We conducted experiments to compare SMDR-IS with state-of-the-art methods to underscore its effectiveness. These include traditional methods, such as ULAP (Song et al. 2018), IBLA (Peng and Cosman 2017), and GDCP (Peng, Cao, and Cosman 2018), as well as deep learning methods, such as WaterNet (Li et al. 2019), FUnIE-GAN (Islam, Xia, and Sattar 2020), UWCNN (Li, Anwar, and Porikli 2020), UColor (Li et al. 2021), UDA (Shen et al. 2023), and U-shape (Peng, Zhu, and Bian 2023). The visual results are illustrated in Figure 4, and the metrics are shown in Table 1. To avoid any influence of color on the metrics (e.g., the impact observed with ULAP in Figure 4), we excluded the color component from CCF. As shown in Figure 4, SMDR-IS performs better than the other methods in terms of visual results and performance metrics.

Table 1: Quantitative comparison between state-of-the-art methods and SMDR-IS on different testing datasets.
Methods: ULAP | IBLA | GDCP | WaterNet | FUnIE-GAN | UWCNN | UColor | UDA | U-shape | SMDR-IS

UIEB Val
PSNR ↑  | 15.913 17.988 13.386 17.349 17.114 17.985 20.962 21.484 20.462 23.710
MSE ↓   | 0.174 0.143 0.228 0.144 0.150 0.134 0.097 0.096 0.100 0.075
SSIM ↑  | 0.745 0.805 0.747 0.813 0.701 0.844 0.863 0.873 0.792 0.922
VSI ↑   | 0.947 0.958 0.943 0.966 0.941 0.966 0.971 0.970 0.959 0.983
FSIM ↑  | 0.915 0.928 0.901 0.908 0.891 0.923 0.931 0.932 0.892 0.967
FSIMc ↑ | 0.878 0.899 0.865 0.899 0.858 0.903 0.920 0.918 0.883 0.957
UIQM ↑  | 2.259 2.490 2.670 2.917 3.092 3.011 3.049 2.897 3.131 3.015
UCIQE ↑ | 0.604 0.606 0.592 0.606 0.564 0.554 0.591 0.612 0.576 0.607
CCF ↑   | 24.145 23.841 23.026 20.042 20.416 20.360 21.827 26.220 21.966 26.012
CEIQ ↑  | 3.209 3.283 3.208 3.101 3.307 3.090 3.209 3.372 3.235 3.369
ALL ↑   | 49.441 51.656 46.109 47.455 47.733 48.502 53.226 58.374 52.997 60.466

U45
UIQM ↑  | 2.282 2.388 2.275 2.957 2.495 3.064 3.148 2.878 3.175 3.121
UCIQE ↑ | 0.588 0.595 0.597 0.601 0.545 0.554 0.586 0.607 0.571 0.605
CCF ↑   | 22.069 21.598 22.736 20.391 12.931 21.418 22.100 25.449 21.284 25.489
CEIQ ↑  | 3.192 3.249 3.191 3.186 2.785 3.213 3.283 3.392 3.254 3.397
ALL ↑   | 28.131 27.830 28.799 27.135 18.757 28.248 29.117 32.326 28.285 32.612

LSUI
PSNR ↑  | 17.677 17.555 13.284 19.990 21.129 20.368 21.786 20.288 20.491 21.984
MSE ↓   | 0.142 0.149 0.229 0.109 0.101 0.102 0.087 0.104 0.102 0.089
SSIM ↑  | 0.760 0.784 0.710 0.839 0.778 0.851 0.848 0.842 0.775 0.870
VSI ↑   | 0.959 0.957 0.935 0.971 0.962 0.970 0.973 0.968 0.960 0.975
FSIM ↑  | 0.925 0.925 0.886 0.931 0.920 0.940 0.940 0.931 0.900 0.948
FSIMc ↑ | 0.894 0.894 0.850 0.918 0.900 0.921 0.928 0.912 0.888 0.933
UIQM ↑  | 2.324 2.567 2.572 2.865 2.895 2.976 2.984 2.833 3.038 2.917
UCIQE ↑ | 0.611 0.616 0.610 0.605 0.585 0.561 0.593 0.617 0.576 0.602
CCF ↑   | 24.004 24.135 24.304 20.531 21.484 21.254 22.331 25.942 21.849 24.727
CEIQ ↑  | 3.160 3.282 3.227 3.123 3.195 3.147 3.259 3.361 3.246 3.323
ALL ↑   | 51.457 51.864 47.606 50.882 52.949 52.091 54.729 56.799 52.826 57.368
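The border padding mentioned in the implementation details can be sketched as follows. The padding mode and divisibility factor are our assumptions (reflection padding up to a multiple of 8, matching three 2× downsamplings in a four-stage encoder); the paper does not state them explicitly:

```python
import numpy as np

def pad_to_multiple(img, m=8):
    """Reflection-pad the H and W of a (C, H, W) image up to the next
    multiple of m so that repeated 2x downsampling divides evenly.
    Returns the padded image and the original size for cropping back."""
    H, W = img.shape[-2:]
    ph, pw = (-H) % m, (-W) % m
    padded = np.pad(img, [(0, 0), (0, ph), (0, pw)], mode="reflect")
    return padded, (H, W)

def crop_back(out, size):
    """Crop a network output back to the original spatial size."""
    H, W = size
    return out[..., :H, :W]
```

At test time the image would be padded, passed through the network, and the enhanced output cropped back so that it matches the input resolution exactly.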
Although ULAP, IBLA, and GDCP achieve commendable restoration results, their generalization capabilities are hindered by their dependence on priors. WaterNet and UColor display enhanced robustness but remain susceptible to extraneous inputs. Although FUnIE-GAN and UWCNN achieve real-time efficiency, their expressive capability is somewhat constrained by their limited number of parameters. Although UDA and U-shape can both achieve the goal of underwater image enhancement, SMDR-IS combines the inherent scale-related features from multiple scales, providing better robustness for diverse underwater scenarios. Table 1 tabulates the performance of the different methods on the three datasets in terms of the different performance metrics. In addition to the individual performance metrics, we also combine them to form a comprehensive metric, denoted as "ALL", which is the net sum of the ↑ values minus the ↓ values. This measurement provides a holistic assessment by weighing multiple criteria. Remarkably, SMDR-IS achieves the best performance in terms of the ALL score. Collectively, these results show the ability of SMDR-IS to adeptly restore intricate scenes, underpinned by the synergy of multi-scale detail extraction and intrinsic supervision.

Figure 5: Subjective results of ablation experiments on the number of stages used. From left to right: (a) original images; (b)-(e) correspond to lines 1-4 in Table 4.

Table 2: Aggregative evaluation of efficiency and image quality.
Methods:  ULAP | IBLA | GDCP | WaterNet | FUnIE-GAN | UWCNN | UColor | UDA | U-shape | SMDR-IS
Time ↓    | 0.358 9.134 0.163 0.091 0.003 0.050 0.577 0.100 0.109 0.061
Quality ↑ | 49.441 51.656 46.109 47.455 47.733 48.502 53.226 58.374 52.997 60.466
Agg ↑     | 49.083 42.522 45.946 47.364 47.730 48.452 52.649 58.273 52.887 60.405
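As a quick sanity check, the composite scores can be recomputed directly from the table entries: "ALL" is the sum of the ↑ metrics minus the ↓ metric (MSE), and "Agg" is Quality minus Time. Last-digit differences come from rounding in the tables:

```python
# SMDR-IS entries from Table 1 (UIEB Val); MSE enters with a negative
# sign because it is a lower-is-better metric.
smdr_uieb = {
    "PSNR": 23.710, "MSE": -0.075, "SSIM": 0.922, "VSI": 0.983,
    "FSIM": 0.967, "FSIMc": 0.957, "UIQM": 3.015, "UCIQE": 0.607,
    "CCF": 26.012, "CEIQ": 3.369,
}
all_score = sum(smdr_uieb.values())   # ~60.466, the "ALL" entry in Table 1

# Agg = Quality - Time (Table 2), e.g. for SMDR-IS and the runner-up UDA
agg_smdr = 60.466 - 0.061             # 60.405, as reported in Table 2
agg_uda = 58.374 - 0.100              # ~58.273 after rounding
```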
Table 3: Ablation study for ASISF.
En / En-to-De / De | PSNR ↑ | SSIM ↑ | ALL ↑
✓ ✓ | 23.021 | 0.915 | 23.936
✓ ✓ | 23.697 | 0.919 | 24.615
✓ ✓ | 23.122 | 0.914 | 24.036
✓ ✓ ✓ | 23.710 | 0.922 | 24.631

Table 4: Ablation study for the four different stages. S represents the stage.
S1 / S2 / S3 / S4 | PSNR ↑ | SSIM ↑ | ALL ↑
✓ | 22.790 | 0.916 | 23.706
✓ ✓ | 22.872 | 0.915 | 23.787
✓ ✓ ✓ | 23.248 | 0.915 | 24.163
✓ ✓ ✓ ✓ | 23.710 | 0.922 | 24.631

Table 5: Ablation study for BICA.
CA / SA / ReGIA / HCAFE | PSNR ↑ | SSIM ↑ | ALL ↑
✓ ✓ ✓ | 21.219 | 0.901 | 22.120
✓ ✓ ✓ | 23.352 | 0.917 | 24.269
✓ ✓ ✓ | 22.673 | 0.907 | 23.580
✓ ✓ ✓ | 23.276 | 0.911 | 24.186
✓ ✓ ✓ ✓ | 23.710 | 0.922 | 24.631

Table 6: Ablation study for the loss function.
L1 / Lpre / Lmse | PSNR ↑ | SSIM ↑ | ALL ↑
✓ ✓ | 23.257 | 0.915 | 24.172
✓ ✓ | 22.905 | 0.912 | 23.817
✓ ✓ | 23.151 | 0.919 | 24.069
✓ ✓ ✓ | 23.710 | 0.922 | 24.631

Efficiency Evaluation
Table 2 presents a comprehensive analysis of efficiency, performance (Quality is the "ALL" score on UIEB Val in Table 1), and the aggregative (Agg) score. Although SMDR-IS does not obtain the highest efficiency, it still achieves a commendable frame rate of 16.4744 fps, satisfying real-time demands. Moreover, we introduce the "Agg" score to provide a holistic evaluation that considers both efficiency and performance; specifically, Agg = Quality − Time. As observed from Table 2, SMDR-IS achieves the highest Agg score, which demonstrates the suitability of SMDR-IS for advanced underwater vision tasks.

Ablation Study
We performed a series of ablation experiments using the testing set of the UIEB dataset. The effects of the number of degradation stages are presented in Table 4. Furthermore, we conducted ablation studies on the individual components within BICA, as detailed in Table 5, on the ASISF component within SMDR-IS, as illustrated in Table 3, and on the different loss functions, as showcased in Table 6. In all tables, entries in bold represent the highest scores obtained.
From Table 3, it is evident that the intrinsic guidance provided by ASISF plays a pivotal role in filtering out irrelevant features during the image enhancement process. As illustrated in Table 4 and Figure 5, the number of stages has a significant impact on the performance metrics. As the number of stages increases, the image restoration performance and the multi-scale feature extraction capability of SMDR-IS progressively improve, peaking when the number of stages is four. It is worth noting that our choice of four stages is informed by prevalent practices; in particular, three-fold downsampling is commonly utilized, as highlighted in (Li et al. 2021). The advantages of the BICA architecture are manifestly demonstrated in Table 5, where the ablation experiments highlight the image restoration ability of each module within BICA. Notably, the contribution of the proposed ReGIA to SMDR-IS is particularly significant, reaffirming its superior capabilities. Furthermore, to ascertain the efficacy of the loss functions used in our study, we conducted ablation experiments by sequentially omitting L1, Lpre, and Lmse in Table 6. These experiments collectively substantiate the effectiveness of our chosen loss functions.

Conclusion
In this study, we propose a novel method for underwater image restoration, SMDR-IS, which proficiently captures multi-scale scene information through multi-resolution detail extraction with intrinsic supervision, utilizing the inherent scale-related features of image scenes. To ensure optimal assimilation of information across different scales, we integrate low-resolution inputs into the original-resolution branch via adaptive selective intrinsic supervision, thereby amplifying the conveyance of scene information. To mitigate unnecessary interference from irrelevant scenes, we introduce ASISF, which is meticulously designed to regulate the feature propagation process.
Furthermore, our multi-degradation loss function strategically guides the network during training. Optimization and constraints at each stage enhance the network's ability to leverage information at different scales. In the future, we will explore the integration of SMDR-IS into broader computer vision applications, such as underwater robotics and autonomous underwater vehicles.

Acknowledgments
This work was supported in part by the Natural Science Foundation of China (No. 62301105), the National Key Research and Development Program of China (No. 2018AAA0100400), the China Postdoctoral Science Foundation (No. 2021M701780), and the Cultivation Program for the Excellent Doctoral Dissertation of Dalian Maritime University. We are also sponsored by the CAAI-Huawei MindSpore Open Fund and the High Performance Computing Center of Dalian Maritime University.

References
Fang, Y.; Ma, K.; Wang, Z.; Lin, W.; Fang, Z.; and Zhai, G. 2014. No-reference quality assessment of contrast-distorted images based on natural scene statistics. IEEE Signal Processing Letters, 22(7): 838–842.
Fu, Z.; Lin, H.; Yang, Y.; Chai, S.; Sun, L.; Huang, Y.; and Ding, X. 2022. Unsupervised Underwater Image Restoration: From a Homology Perspective. In AAAI, volume 36, 643–651.
Guo, C.; Wu, R.; Jin, X.; Han, L.; Zhang, W.; Chai, Z.; and Li, C. 2023. Underwater Ranker: Learn Which is Better and How to Be Better. In AAAI, volume 37, 702–709.
Hou, G.; Li, N.; Zhuang, P.; Li, K.; Sun, H.; and Li, C. 2023. Non-uniform Illumination Underwater Image Restoration via Illumination Channel Sparsity Prior. IEEE Transactions on Circuits and Systems for Video Technology.
Islam, M. J.; Xia, Y.; and Sattar, J. 2020. Fast Underwater Image Enhancement for Improved Visual Perception. IEEE Robotics and Automation Letters, 5(2): 3227–3234.
Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Huang, B.; Luo, Y.; Ma, J.; and Jiang, J. 2020.
Multi-scale Progressive Fusion Network for Single Image Deraining. In CVPR, 8346–8355.
Jiang, Q.; Gu, Y.; Li, C.; Cong, R.; and Shao, F. 2022a. Underwater Image Enhancement Quality Evaluation: Benchmark Dataset and Objective Metric. IEEE Transactions on Circuits and Systems for Video Technology, 32(9): 5959–5974.
Jiang, Z.; Li, Z.; Yang, S.; Fan, X.; and Liu, R. 2022b. Target Oriented Perceptual Adversarial Fusion Network for Underwater Image Enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 32(10): 6584–6598.
Kang, Y.; Jiang, Q.; Li, C.; Ren, W.; Liu, H.; and Wang, P. 2022. A Perception-aware Decomposition and Fusion Framework for Underwater Image Enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 33(3): 988–1002.
Khan, M. R.; Kulkarni, A.; Phutke, S. S.; and Murala, S. 2023. Underwater Image Enhancement with Phase Transfer and Attention. In 2023 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE.
Korhonen, J.; and You, J. 2012. Peak Signal-to-noise Ratio Revisited: Is Simple Beautiful? In QoMEx, 37–38. IEEE.
Li, C.; Anwar, S.; Hou, J.; Cong, R.; Guo, C.; and Ren, W. 2021. Underwater Image Enhancement via Medium Transmission-guided Multi-color Space Embedding. IEEE Transactions on Image Processing, 30: 4985–5000.
Li, C.; Anwar, S.; and Porikli, F. 2020. Underwater Scene Prior Inspired Deep Underwater Image and Video Enhancement. Pattern Recognition, 98: 107038.
Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; and Tao, D. 2019. An Underwater Image Enhancement Benchmark Dataset and Beyond. IEEE Transactions on Image Processing, 29: 4376–4389.
Liu, J.; Shang, J.; Liu, R.; and Fan, X. 2022a. Attention-guided Global-local Adversarial Learning for Detail-preserving Multi-exposure Image Fusion. IEEE Transactions on Circuits and Systems for Video Technology, 32(8): 5026–5040.
Liu, J.; Wu, G.; Luan, J.; Jiang, Z.; Liu, R.; and Fan, X. 2023.
HoLoCo: Holistic and Local Contrastive Learning Network for Multi-exposure Image Fusion. Information Fusion, 95: 237–249.
Liu, R.; Jiang, Z.; Yang, S.; and Fan, X. 2022b. Twin Adversarial Contrastive Learning for Underwater Image Enhancement and Beyond. IEEE Transactions on Image Processing, 31: 4922–4936.
Ma, L.; Jin, D.; An, N.; Liu, J.; Fan, X.; Luo, Z.; and Liu, R. 2023. Bilevel fast scene adaptation for low-light image enhancement. International Journal of Computer Vision, 1–19.
Mu, P.; Fang, J.; Qian, H.; and Bai, C. 2023. Transmission and Color-guided Network for Underwater Image Enhancement. In 2023 IEEE International Conference on Multimedia and Expo (ICME), 1337–1342. IEEE.
Peng, L.; Zhu, C.; and Bian, L. 2023. U-shape transformer for underwater image enhancement. IEEE Transactions on Image Processing.
Peng, Y.-T.; Cao, K.; and Cosman, P. C. 2018. Generalization of the Dark Channel Prior for Single Image Restoration. IEEE Transactions on Image Processing, 27(6): 2856–2868.
Peng, Y.-T.; and Cosman, P. C. 2017. Underwater Image Restoration Based on Image Blurriness and Light Absorption. IEEE Transactions on Image Processing, 26(4): 1579–1594.
Shen, Z.; Xu, H.; Luo, T.; Song, Y.; and He, Z. 2023. UDAformer: Underwater image enhancement based on dual attention transformer. Computers & Graphics, 111: 77–88.
Song, W.; Wang, Y.; Huang, D.; and Tjondronegoro, D. 2018. A Rapid Scene Depth Estimation Model based on Underwater Light Attenuation Prior for Underwater Image Restoration. In Pacific Rim Conference on Multimedia, 678–688. Springer.
Wang, Y.; Guo, J.; Gao, H.; and Yue, H. 2021. UIEC^2-Net: CNN-based Underwater Image Enhancement using Two Color Space. Signal Processing: Image Communication, 96: 116250.
Wang, Y.; Li, N.; Li, Z.; Gu, Z.; Zheng, H.; Zheng, B.; and Sun, M. 2018. An Imaging-inspired No-reference Underwater Color Image Quality Assessment Metric.
Computers & Electrical Engineering, 70: 904–913.
Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Transactions on Image Processing, 13(4): 600–612.
Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; Yang, M.-H.; and Shao, L. 2021. Multi-stage Progressive Image Restoration. In CVPR, 14821–14831.
Zhang, D.; Zhou, J.; Zhang, W.; Lin, Z.; Yao, J.; Polat, K.; Alenezi, F.; and Alhudhaif, A. 2023. ReX-Net: A Reflectance-guided Underwater Image Enhancement Network for Extreme Scenarios. Expert Systems with Applications, 120842.
Zhang, L.; Shen, Y.; and Li, H. 2014. VSI: A Visual Saliency-induced Index for Perceptual Image Quality Assessment. IEEE Transactions on Image Processing, 23(10): 4270–4281.
Zhang, L.; Zhang, L.; Mou, X.; and Zhang, D. 2011. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Transactions on Image Processing, 20(8): 2378–2386.
Zhou, J.; Gai, Q.; Zhang, D.; Lam, K.-M.; Zhang, W.; and Fu, X. 2024. IACC: Cross-Illumination Awareness and Color Correction for Underwater Images Under Mixed Natural and Artificial Lighting. IEEE Transactions on Geoscience and Remote Sensing, 62: 1–15.
Zhou, J.; Li, B.; Zhang, D.; Yuan, J.; Zhang, W.; Cai, Z.; and Shi, J. 2023a. UGIF-Net: An Efficient Fully Guided Information Flow Network for Underwater Image Enhancement. IEEE Transactions on Geoscience and Remote Sensing.
Zhou, J.; Liu, Q.; Jiang, Q.; Ren, W.; Lam, K.-M.; and Zhang, W. 2023b. Underwater camera: Improving visual perception via adaptive dark pixel prior and color correction. International Journal of Computer Vision, 1–19.
Zhou, J.; Pang, L.; Zhang, D.; and Zhang, W. 2023c. Underwater Image Enhancement Method via Multi-interval Subhistogram Perspective Equalization. IEEE Journal of Oceanic Engineering.
Zhou, J.; Sun, J.; Zhang, W.; and Lin, Z. 2023d. Multi-view Underwater Image Enhancement Method via Embedded Fusion Mechanism.
Engineering Applications of Artificial Intelligence, 121: 105946.
Zhou, J.; Wang, Y.; Li, C.; and Zhang, W. 2023e. Multicolor Light Attenuation Modeling for Underwater Image Restoration. IEEE Journal of Oceanic Engineering, 48(4): 1322–1337.
Zhou, J.; Zhang, D.; Ren, W.; and Zhang, W. 2022. Auto Color Correction of Underwater Images Utilizing Depth Information. IEEE Geoscience and Remote Sensing Letters, 19: 1–5.
Zhou, J.; Zhang, D.; and Zhang, W. 2023. Cross-view Enhancement Network for Underwater Images. Engineering Applications of Artificial Intelligence, 121: 105952.
Zhuang, P.; Li, C.; and Wu, J. 2021. Bayesian Retinex Underwater Image Enhancement. Engineering Applications of Artificial Intelligence, 101: 104171.
Zhuang, P.; Wu, J.; Porikli, F.; and Li, C. 2022. Underwater Image Enhancement with Hyper-laplacian Reflectance Priors. IEEE Transactions on Image Processing, 31: 5442–5455.
W2P: Switching from Weak Supervision to Partial Supervision for Semantic Segmentation
Fangyuan Zhang1,2, Tianxiang Pan1,2, Junhai Yong1,2, Bin Wang1,2*
1School of Software, Tsinghua University, China
2Beijing National Research Center for Information Science and Technology (BNRist), China
zhangfy19@mails.tsinghua.edu.cn, ptx9363@gmail.com, yongjh@tsinghua.edu.cn, wangbins@tsinghua.edu.cn

Abstract
Current weakly-supervised semantic segmentation (WSSS) techniques concentrate on enhancing class activation maps (CAMs) with image-level annotations. Yet, the emphasis on producing these pseudo-labels often overshadows the pivotal role of training the segmentation model itself. This paper underscores the significant influence of noisy pseudo-labels on segmentation network performance, particularly in boundary regions. To address the above issues, we introduce a novel paradigm: Weak to Partial Supervision (W2P). At its core, W2P categorizes the pseudo-labels from WSSS into two unique supervisions: trustworthy clean labels and uncertain noisy labels. Next, our proposed partially-supervised framework adeptly employs these clean labels to rectify the noisy ones, thereby promoting the continuous enhancement of the segmentation model. To further optimize boundary segmentation, we incorporate a noise detection mechanism that specifically preserves boundary regions while eliminating noise. During the noise refinement phase, we adopt a boundary-conscious noise correction technique to extract comprehensive boundaries from noisy areas. Furthermore, we devise a boundary generation approach that assists in predicting intricate boundary zones. Evaluations on the PASCAL VOC 2012 and MS COCO 2014 datasets confirm our method's impressive segmentation capabilities across various pseudo-labels.

Introduction
Weakly-supervised semantic segmentation (WSSS) (Lee et al. 2019; Wang et al. 2020; Lee, Kim, and Yoon 2021; Chen et al. 2022; Lee et al. 2021b; Li et al. 2022; Xu et al.
2022) achieves segmentation by using image-level labels instead of the precise pixel-wise annotations of fully-supervised semantic segmentation (FSSS) (Xie et al. 2021; Chen et al. 2018; Long, Shelhamer, and Darrell 2015). It follows the two-stage paradigm: a classification model generates class activation maps (CAMs) (Zhou et al. 2016) as pseudo-labels, which are then used to train a segmentation network.
Contemporary methods in WSSS primarily focus on enhancing CAMs during the initial stage. Despite progressive improvements in the metrics for pseudo-labels, these enhancements do not translate into improved segmentation performance.

*Corresponding author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Table 1: We analyze the quality of pseudo-labels and the segmentation performance of various WSSS methods on the PASCAL VOC 2012 train (T) and val (V) sets using the mean Intersection-over-Union (mIoU) and Accuracy (mAcc).
Metric | SEAM | IRN | ∆ | EDAM | AMN | ∆
T-mIoU | 63.6 | 66.3 | 2.7↑ | 68.0 | 72.2 | 4.2↑
T-mAcc | 80.2 | 74.2 | 6.0↓ | 80.4 | 77.3 | 3.1↓
V-mIoU | 64.5 | 63.5 | 1.0↓ | 70.9 | 69.5 | 1.4↓

Table 1 demonstrates that although AMN (Lee, Kim, and Shim 2022) achieves a higher mIoU compared to EDAM (Wu et al. 2021), it does not lead to improved segmentation performance on the val set. A similar occurrence can be observed between SEAM (Wang et al. 2020) and IRN (Ahn, Cho, and Kwak 2019). This inconsistency can be attributed to the presence of inherent noise in the initial stage, resulting in lower accuracy, as shown in Table 1. Figure 1(b) depicts how training the FSSS network with low-quality pixels in the pseudo ground-truths causes it to converge towards sub-optimal solutions. In this study, we contend that it is crucial to place greater emphasis on robust learning with noisy labels in the second stage, rather than solely focusing on optimizing the CAMs in the first stage.
Unlike image classification, the process of pixel-wise learning in noisy environments is more intricate. This complexity arises from the difficulty of accurately predicting the main supervisory signals in the boundary regions of WSSS (Rong et al. 2023; Wang et al. 2022). To tackle these challenges, as illustrated in Figure 1(c), we propose a novel framework named Weak to Partial Supervision (W2P), which consists of two modules: the boundary-preserving noise detection (BPND) module and the partially-supervised learning (PSL) module. The BPND module trains a segmentation model for a few iterations with pseudo-labels generated by established WSSS methods. Motivated by the early-learning theory (Liu et al. 2020, 2022), which indicates a significant discrepancy between the noisy pseudo-labels and model predictions in the early stages of training, the BPND module employs a pixel-wise "small-loss" metric to distinguish between clean and noisy pseudo-labels. While the "small-loss" criterion effectively selects trustworthy clean pseudo-labels, it fails to preserve boundary regions that present challenges during early learning. To address this, we incorporate low-level semantics and introduce a boundary-preserving noise detection strategy based on the superpixel structure.

Figure 1: An overview of various training paradigms for WSSS: (a) Basic weakly-supervised semantic segmentation. (b) Weakly-supervised to fully-supervised semantic segmentation. (c) Our proposed W2P framework.
This strategy utilizes structural constraints to shift regions near boundaries with small losses toward the interior of the object, thereby preserving an accurate boundary representation. After boundary-preserving noise detection, we train the partially-supervised learning framework using the clean partial supervision, with model predictions serving as reliable supervision for the noisy regions. To reduce errors and ensure robust training, we propose a class-wise adaptive threshold paradigm to filter out unreliable predictions. To identify challenging boundaries, we suggest a boundary-preserving noise correction algorithm. Additionally, we propose a boundary generation strategy to enhance boundary predictions by duplicating and transferring high-confidence object boundaries between images, creating artificial boundary pixels.
In summary, the main contributions are as follows:
• We propose a new WSSS paradigm that converts weak to partial supervision by redefining the second stage of WSSS as a segmentation problem with noisy labels.
• We present a boundary-preserving noise detection module for pseudo-label selection that preserves the complete boundary structure.
• Upon selecting reliable supervision, the partially-supervised learning module offers complementary signals for the unreliable parts. We further introduce a boundary-preserving noise correction and a boundary generation strategy to enhance boundary segmentation.

Related Work
Weakly-Supervised Semantic Segmentation
Existing image-level WSSS methods commonly use CAMs (Zhou et al. 2016) as initial seeds to generate pseudo segmentation labels. Due to the inherent discrepancies between semantic labels and pixel-wise annotations, it is challenging to achieve complete coverage of the object region. To address this issue, current solutions aim at enhancing the quality of CAMs by utilizing the transformer's attention module (AFA (Ru et al. 2022), MCTformer (Xu et al. 2022), ViT-PCM (Rossetti et al.
2022)), separating the foreground regions with contrastive learning (PPC (Du et al. 2022), ToCo (Ru et al. 2023)), iterative erasing (OC-CSE (Kweon et al. 2021), ECS (Sun et al. 2021), AEFT (Yoon et al. 2022)), changing the optimization target (RIB (Lee et al. 2021a), PMM (Li et al. 2021)), generating more accurate seeds (Su et al. 2021), and incorporating additional signals such as saliency maps (ICD (Fan et al. 2020), EPS (Lee et al. 2021b), DRS (Kim, Han, and Kim 2021), AuxSeg (Xu et al. 2021), and SANCE (Li, Fan, and Zhang 2022)). However, these methods primarily focus on generating pseudo-labels, with little attention given to training with these labels. URN (Li et al. 2022) proposes a method to identify noisy labels by estimating uncertainty across different scales. ADELE (Liu et al. 2022) adaptively corrects pseudo-labels for different categories. BECO (Rong et al. 2023) introduces a co-training paradigm for correcting noise. These methods neglect the importance of boundary regions in the learning process. In contrast to these methods, we propose a new Weak-to-Partial framework that shifts the focus in WSSS from relying on CAMs to robustly handling noisy labels. W2P generates partial supervision and progressively refines the noisy parts, with boundary-preserving segmentation.

Robust Learning with Noisy Labels
Learning with noisy labels for classification tasks has recently received significant attention. Existing solutions can be categorized into two groups: approaches that aim to reduce the negative impact of noisy labels, and techniques that focus on fixing inaccurate annotations. The former group includes methods that reduce the negative impact of noisy labels through improved robust optimization (Zhang and Sabuncu 2018; Ma et al. 2020; Wang et al. 2019), designed regularization techniques (Liu et al. 2020; Tanaka et al. 2018), robust architectures (Chen and Gupta 2015; Goldberger and Ben-Reuven 2017; Han et al. 2018a), and sample selection (Han et al.
2018b; Jiang et al. 2018). These algorithms emphasize the role of clean data during the training process, neglecting the information in the noisily labeled data. To address this issue, recent studies (Li, Socher, and Hoi 2020; Yi et al. 2023; Xia et al. 2022) propose the progressive correction of noisy supervision through model predictions, achieving state-of-the-art performance. Despite the significant progress in noisy learning for classification tasks, there has been limited research on the more prevalent problem of robust learning in segmentation tasks. Only a few studies (Shu, Wu, and Li 2019; Guo and Yuan 2022) have developed noise-robust architectures for medical image segmentation. ADELE (Liu et al. 2022) exploits the phenomenon of early learning in semantic segmentation to adaptively correct noisy labels for various categories. BECO (Rong et al. 2023) proposes a co-training paradigm for noise correction. These methods mainly tackle noisy segmentation as a pixel-wise classification task, disregarding the challenging boundary regions in WSSS tasks. In contrast, we propose the W2P paradigm, which performs boundary-preserving noise detection and correction.

Method
Overview
Weakly-supervised semantic segmentation (WSSS) aims to train a segmentation network using a weakly annotated dataset denoted as X = {(I, y)}, where I is the image and y = [y_1, y_2, ..., y_C]^T denotes the corresponding image-level label, with C denoting the number of object categories. We present a new framework called Weak-to-Partial (W2P) that focuses on the second stage of WSSS. Using the provided dataset X, we train a WSSS model and generate initial pseudo-labels for the training images. We thereby obtain a new dataset X_s = {(I, Y)}, where Y ∈ R^{H×W} denotes the inaccurate segmentation map with spatial size H × W.
To train a segmentation model on this noisy dataset, the W2P framework incorporates a boundary-preserving noise detection (BPND) module that focuses on identifying precise partial supervision, providing dependable signals for the partially-supervised learning (PSL) module. The PSL module leverages high-quality partial supervision from $Y_{clean}$ to enhance the quality of $Y_{noisy}$ using model predictions.

Noisy Label Generation

The generation of noisy labels follows the first stage of existing WSSS methods: the initial segmentation is extracted from CAMs produced by a classification model trained with $X$. As the first and second stages of WSSS are independent, our W2P can be effortlessly applied to existing WSSS solutions. To showcase the generalization capability of W2P, we choose three WSSS baseline methods as generators: IRN (Ahn, Cho, and Kwak 2019), ReCAM (Chen et al. 2022), and AMN (Lee, Kim, and Shim 2022).

Boundary Preserving Noise Detection

Noisy labels are incorporated in the W2P training framework to improve the segmentation network. As in previous studies (Huang et al. 2022; Sui, Zhang, and Wu 2022; Li, Socher, and Hoi 2020), the quality of the clean pseudo-labels $Y_{clean}$ significantly impacts model performance in the context of noise-robust learning. In our W2P, the introduction of erroneous annotations adversely affects the performance of the subsequent PSL module, making it unable to provide reliable labels for the noisy regions and thus hurting overall performance. Therefore, designing a strategy to distinguish noisy from clean pseudo-labels is crucial. The prevalent technique for noise separation is the small-loss criterion, where labels misaligned with predictions during the early-learning stage are considered noisy, while the rest are deemed clean. Based on this theory, we propose a noise detection strategy that uses the loss between predictions and pseudo-labels as an indicator of inconsistency.
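As a concrete illustration of the small-loss criterion (our own numpy sketch, not the authors' code; the threshold value is an arbitrary placeholder), the per-pixel cross-entropy loss can be computed from a softmax prediction map and thresholded into clean and noisy masks:

```python
import numpy as np

def pixel_ce_loss(probs, labels):
    """Per-pixel cross-entropy between a softmax map (C, H, W) and an
    integer pseudo-label map (H, W)."""
    h, w = labels.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    p = probs[labels, rows, cols]          # probability of the labeled class
    return -np.log(np.clip(p, 1e-12, None))

def small_loss_split(loss, threshold):
    """Small-loss criterion: pixels whose loss exceeds the threshold are
    flagged as noisy; the rest are kept as clean supervision."""
    noisy = loss > threshold
    return ~noisy, noisy

probs = np.full((2, 2, 2), 0.5)            # 2 classes, 2x2 image
probs[0, 0, 0], probs[1, 0, 0] = 0.9, 0.1  # one confident pixel
labels = np.zeros((2, 2), dtype=int)
loss = pixel_ce_loss(probs, labels)
clean, noisy = small_loss_split(loss, threshold=0.5)
```

In this toy example only the confident pixel survives the split; in the paper the fixed threshold is replaced by the adaptive, per-category scheme described below.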
The loss is calculated as:
$$P = f_a(I, \theta_a), \qquad L = CE(P, Y), \qquad (1)$$
where $f_a$ represents the segmentation model trained for $t$ epochs with parameters $\theta_a$. The model prediction is denoted as $P$, and the cross-entropy loss function as $CE$. Noisy labels are identified as pseudo-labels whose losses exceed a designated threshold.

While the small-loss criterion efficiently identifies clean pseudo-labels, it has difficulty differentiating between boundary and noisy pixels. Precisely learning pixel-wise annotations in the boundary area is challenging due to semantic confusion. Consequently, the small-loss metric naturally results in a loss of boundary supervision. To tackle this issue, we propose Boundary Preserving Noise Detection (BPND), which retains boundaries while eliminating noisy pseudo-labels. BPND leverages the structural prior and spatial correlation among pixels to preserve object boundaries. This is achieved by employing superpixels (Achanta et al. 2010; den Bergh et al. 2012), which are clusters of low-level features commonly used in visual and shape analysis tasks. Figure 3 illustrates that pixels in both the boundary and inner regions often share the same categorical semantics within each superpixel. Therefore, the boundary region can be preserved by relying on the semantics of the interior region. Next, we present our BPND strategy, along with the use of the superpixel representation. Concretely, given an image $I$, we calculate the loss $L$ between its prediction $P$ and pseudo-labels $Y$ using Equation (1). The superpixel representation of $I$ is $SP = \{SP_i\}_{i=1}^{H \times W}$, where $SP_i \in \{1, 2, \dots, K\}$ and $K$ is the number of superpixels. Pixel $j$ belongs to superpixel $k$ if $SP_j = k$. Pixels within the same superpixel are expected to have consistent semantic labels. Therefore, we propose reducing the high loss in boundary regions by exploiting the low loss in the interior regions.
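This interior-to-boundary smoothing amounts to replacing each pixel's loss with the mean loss of its superpixel. A small numpy sketch of that averaging (our illustration, not the paper's implementation):

```python
import numpy as np

def superpixel_smooth(loss, sp, num_sp):
    """L_k = (1/N_k) * sum_{j : SP_j = k} l_j, broadcast back to pixels:
    each pixel's loss becomes the mean loss of its superpixel."""
    sums = np.bincount(sp.ravel(), weights=loss.ravel(), minlength=num_sp)
    counts = np.bincount(sp.ravel(), minlength=num_sp)
    means = sums / np.maximum(counts, 1)
    return means[sp]

loss = np.array([[1.0, 1.0],
                 [0.0, 4.0]])
sp = np.array([[0, 0],
               [1, 1]])        # two superpixels: top row, bottom row
smoothed = superpixel_smooth(loss, sp, num_sp=2)
```

A high-loss boundary pixel surrounded by low-loss interior pixels of the same superpixel is thus pulled toward the superpixel mean, which is exactly the effect Eqs. (2)-(3) realize with repeat, one-hot, and matrix operations on GPU.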
In practice, we employ an averaging operation to compute the smoothed loss $L_k = \frac{1}{N_k}\sum_{j : SP_j = k} l_j$, where $N_k = |\{j : SP_j = k\}|$ is the size of the superpixel and $l_j$ is the loss of pixel $j$.

Figure 2: Overview of our Weak-to-Partial (W2P) method: Firstly, we generate initial noisy labels using images and image-level labels with the WSSS module. Next, we utilize the proposed Boundary Preserving Noise Detection (BPND) module to separate the noisy labels and generate clean supervision with two masks $G_{clean}$ and $G_{noisy}$. Then, W2P incorporates a Partially-supervised Learning (PSL) module that generates complementary supervision ($Y_{new}$) to refine segmentation within the region defined by $G_{noisy}$. Finally, we combine supervision from $Y_{new}$ and $Y_{clean}$ to create complete supervision ($Y_m$), which is crucial for training the segmentation model and enhancing performance.

As shown in Figure 2, we calculate the smoothed loss for each superpixel as follows:
$$\bar{L} = R(L, K), \qquad \overline{SP} = OH(SP), \qquad (2)$$
$$L_s = \overline{SP} \otimes N(\bar{L} \odot \overline{SP}), \qquad (3)$$
where $R(\cdot, b)$ represents the repeat operation along the channel dimension $b$ times and $OH$ refers to the one-hot operation over superpixel indexes. $\otimes$ and $\odot$ denote the matrix
and pixel-wise multiplication operations, respectively. $N$ is the normalization operation along the spatial dimensions.

Figure 3: Visual representation of the noise detection: (a) presents the image and its corresponding superpixels; (b) displays the original noisy label; (c) and (d) exhibit the reliable supervision selected by the "small-loss" method and by the BPND module, respectively.

After obtaining the smoothed loss $L_s$, several issues arise. The non-normalized cross-entropy loss poses a challenge in establishing a suitable classification threshold for clean and noisy data. Moreover, the variation in pseudo-labels across categories would require the laborious task of determining an appropriate threshold for each category. To address these challenges, we propose an adaptive threshold strategy that autonomously differentiates between clean and noisy pseudo-labels across categories. Specifically, we fit a two-component Gaussian Mixture Model (GMM) (Permuter, Francos, and Jermyn 2006) to the losses $L_b$ of category $b$ using the Expectation-Maximization algorithm. Owing to the sharpness of the fitted distributions, the GMM efficiently distinguishes the two modes (clean and noisy) of the data. Consequently, the clean probability $w_i$ for each pixel in category $b$ is calculated as the posterior probability $p(g_b \mid l_i)$, where $g_b$ corresponds to the Gaussian component with the smaller mean (smaller loss) for category $b$. The problem of threshold selection, originally posed on $L_b$, now relies on the clean probability $w_i$. Here, a single threshold $\tau_1$ is used to create the masks $G_{noisy}$ and $G_{clean}$ for each category, providing partial supervision. Despite using the same threshold, the category-specific GMMs effectively capture the noise distributions, enabling adaptive thresholds for different categories. The masks for clean and noisy regions, $G_{clean}$ and $G_{noisy}$, are produced as:
$$G^i_{clean} = \begin{cases} \text{True}, & p(g \mid l_i) > \tau_1 \\ \text{False}, & p(g \mid l_i) \le \tau_1 \end{cases}, \qquad G_{noisy} = \neg\, G_{clean}. \qquad (4)$$
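The per-category adaptive threshold can be illustrated with a tiny 1-D two-component EM fit (a stand-in for the GMM fit described above, on synthetic losses; not the authors' implementation):

```python
import numpy as np

def fit_gmm2(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture; returns means,
    variances, mixing weights, and per-sample responsibilities."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.full(2, x.var() + 1e-6)
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        d2 = (x[:, None] - mu[None, :]) ** 2
        logp = -0.5 * d2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        n = r.sum(axis=0) + 1e-12
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n + 1e-6
        pi = n / n.sum()
    return mu, var, pi, r

rng = np.random.default_rng(0)
# synthetic per-pixel losses: a low-loss (clean) and a high-loss (noisy) mode
losses = np.concatenate([rng.normal(0.1, 0.02, 200),
                         rng.normal(2.0, 0.1, 200)])
mu, var, pi, r = fit_gmm2(losses)
clean_comp = int(np.argmin(mu))    # component with the smaller mean
w = r[:, clean_comp]               # clean probability p(g | l_i)
g_clean = w > 0.9                  # tau_1 = 0.9 as in the paper
```

Because the posterior adapts to each category's own loss distribution, the same cutoff of 0.9 yields different effective loss thresholds across categories.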
Subsequently, the partial supervisions $Y_{clean}$ and $Y_{noisy}$ are generated with $G_{clean}$ and $G_{noisy}$, respectively. This process mitigates the presence of low-quality pseudo-labels and enhances the performance of the PSL module.

Partially-supervised Learning

In this module, we propose a partially-supervised learning (PSL) algorithm that utilizes the $Y_{clean}$ and $Y_{noisy}$ splits provided by the BPND module. Initially, we train a segmentation framework using the reliable $Y_{clean}$. Then, we utilize this framework to generate more reliable labels, denoted as $Y_{new}$, which outperform the noisy pseudo-labels $Y_{noisy}$. Finally, we merge the $Y_{clean}$ and $Y_{new}$ labels to supervise and enhance the semantic segmentation model.

Figure 4: Visual representation of the new supervision: (a) presents the image with its corresponding superpixels; (b) displays the original prediction; (c) and (d) exhibit the reliable supervision selected by the "high-confidence" metric and by the BPNC module, respectively.

Figure 2 illustrates the partially-supervised learning module, consisting of a student model $f_s$ and a teacher model $f_t$ with parameters $\theta_s$ and $\theta_t$, respectively. The student model $f_s$ learns from the combined supervision $Y_m$ derived from $Y_{new}$ and $Y_{clean}$, while the teacher model $f_t$ generates reliable supervision $Y_{new}$ to replace the original noisy labels $Y_{noisy}$. To reduce the computational complexity and improve the reliability of the teacher model, we optimize the student model by gradient backpropagation while updating the teacher model through an exponential moving average (EMA) (Tarvainen and Valpola 2017):
$$\theta_t = \lambda \cdot \theta_t + (1 - \lambda) \cdot \theta_s, \qquad (5)$$
where $\lambda$ is the EMA coefficient.
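Eq. (5) is a standard exponential-moving-average update; a minimal sketch (plain Python lists standing in for the network parameters):

```python
def ema_update(teacher, student, lam=0.99):
    """theta_t <- lam * theta_t + (1 - lam) * theta_s (Eq. 5),
    applied element-wise over the parameter list."""
    return [lam * t + (1.0 - lam) * s for t, s in zip(teacher, student)]

teacher = [1.0, 0.0]
student = [0.0, 1.0]
teacher = ema_update(teacher, student, lam=0.9)  # approx. [0.9, 0.1]
```

With $\lambda$ close to 1 the teacher changes slowly, averaging the student over many steps, which is what makes its pseudo-labels more stable than the student's raw predictions.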
Utilizing the teacher model, we generate the merged supervision $Y_m$ as:
$$P_t = f_t(I, \theta_t), \qquad (6)$$
$$Y_{new} = \arg\max P_t, \qquad S_{new} = \max(\mathrm{softmax}(P_t)), \qquad (7)$$
$$Y_m = Y_{new} \oplus Y_{clean}, \qquad (8)$$
where $\oplus$ merges the two partial supervisions, and $S_{new}$ represents the confidence associated with $Y_{new}$.

The performance of the model heavily depends on the quality of $Y_{new}$: accumulated errors caused by incorrect modifications in the noisy regions negatively impact PSL training. Hence, we utilize the commonly used "high-confidence" metric to filter out low-confidence predictions that are deemed unreliable. To avoid manually adjusting thresholds for different categories, we present an algorithm that generates adaptive thresholds. This algorithm fits a two-component Gaussian Mixture Model (GMM) to the prediction confidences $S_{new}$ for each category. Accurate predictions are selected based on the reliable probability $p(g_b \mid s_i)$, where $g_b$ denotes the Gaussian component with the higher mean (indicating higher confidence) for category $b$, and $s_i$ represents the prediction confidence of the $i$-th pixel in $S_{new}$. Using these adaptive thresholds, we produce the reliable supervision $Y_{new}$ as:
$$Y^i_{new} = \begin{cases} Y^i_{new}, & p(g \mid s_i) > \tau_2 \\ 255, & p(g \mid s_i) \le \tau_2 \end{cases}. \qquad (9)$$
The value 255 is the "ignore" indicator during training. While the high-threshold strategy effectively reduces noise, as shown in Figure 4, it also decreases the recall for pixels in the boundary regions. This limitation, in turn, results in a lack of crucial supervisory signal in the boundary area, thereby hurting the overall performance of W2P. To address this issue, we propose a boundary-preserving noise correction (BPNC) paradigm and a boundary generation algorithm to enhance boundary prediction. The first extracts boundary pixels using structural constraints, while the second constructs boundary pixels by copying and pasting high-confidence areas from one image to another.
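The confidence filtering of Eqs. (7) and (9) and the merge of Eq. (8) can be sketched as follows (our illustration; a raw-confidence cut stands in for the per-category GMM posterior):

```python
import numpy as np

IGNORE = 255  # "ignore" index during training (Eq. 9)

def softmax(z, axis=0):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def teacher_pseudo_labels(logits, tau2):
    """Y_new = argmax P_t; pixels whose confidence fails the threshold
    are set to the ignore index."""
    probs = softmax(logits, axis=0)
    y_new = probs.argmax(axis=0)
    s_new = probs.max(axis=0)
    y_new = np.where(s_new > tau2, y_new, IGNORE)
    return y_new, s_new

def merge_supervision(y_new, y_clean, g_clean):
    """Y_m = Y_new (+) Y_clean: clean pixels keep their BPND label,
    the rest take the (filtered) teacher label."""
    return np.where(g_clean, y_clean, y_new)

logits = np.array([[[5.0, 0.0]],
                   [[0.0, 0.1]]])          # shape (C=2, H=1, W=2)
y_new, s_new = teacher_pseudo_labels(logits, tau2=0.9)
y_clean = np.array([[1, 1]])
g_clean = np.array([[True, False]])
y_m = merge_supervision(y_new, y_clean, g_clean)
```

Here the confident pixel keeps its teacher label, the ambiguous one is ignored, and the clean pixel retains its BPND supervision in the merged map.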
Boundary Preserving Noise Correction

In BPNC, superpixels identify low-confidence boundaries. Figure 4 shows that same-category pixels are usually grouped together within superpixels. Boundary regions have lower confidence, while interior regions have higher confidence; thus, high confidence in the interior can be used to identify low-confidence boundaries. Similar to BPND, the smoothed prediction $P_m$ is calculated for each superpixel as follows:
$$\bar{P}_t = R(P_t, K), \qquad \overline{SP} = OH(R(SP, C)), \qquad (10)$$
$$P_m = \overline{SP} \otimes N(\bar{P}_t \odot \overline{SP}). \qquad (11)$$
We employ two implementation techniques to accelerate computation. Firstly, for each superpixel we only smooth the logits of the most dominant class in the prediction. Secondly, the overall operation is performed on the predictions without up-sampling. These operations can be computed efficiently in parallel on GPUs, incurring negligible additional time. With the smoothed prediction, the labels $Y_{new}$ and the confidences $S_{new}$ are updated by:
$$Y_{new} = \arg\max P_m, \qquad S_{new} = \max(\mathrm{softmax}(P_m)). \qquad (12)$$

Boundary Generation

The proposed BPNC algorithm improves the reliability of boundary segmentation. To capitalize on the enhanced boundaries, we create artificial boundary pixels by copying and pasting high-confidence areas between images. This allows diverse boundary scenes to be generated by pasting accurate object segmentations onto various image backgrounds. For input images $I_1$ and $I_2$, along with their corresponding smoothed labels $Y^1_{new}$ and $Y^2_{new}$, we formulate the mixed image $I_{mix}$ and the mixed target $Y_{mix}$ as:
$$I_{mix} = ((1 - M) \odot I_1) \oplus (M \odot I_2), \qquad (13)$$
$$Y_{mix} = ((1 - M) \odot Y^1_{new}) \oplus (M \odot Y^2_{new}). \qquad (14)$$
The copy-paste mask $M$ is formulated as:
$$M_i = \begin{cases} 1, & p(g \mid s^i_2) > \tau_2 \\ 0, & p(g \mid s^i_2) \le \tau_2 \end{cases}, \qquad (15)$$
where $s^i_2$ represents the confidence of the $i$-th pixel in $I_2$.
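The copy-paste mixing of Eqs. (13)-(15) can be sketched on single-channel toy images (our illustration; a raw confidence map stands in for the GMM posterior $p(g \mid s)$):

```python
import numpy as np

def copy_paste(img1, img2, y1, y2, conf2, tau2=0.9):
    """Paste the high-confidence region of image 2 onto image 1
    (Eqs. 13-15), creating artificial boundaries around the pasted area."""
    m = (conf2 > tau2).astype(img1.dtype)        # copy-paste mask M
    img_mix = (1 - m) * img1 + m * img2          # Eq. 13
    y_mix = np.where(m.astype(bool), y2, y1)     # Eq. 14
    return img_mix, y_mix

img1 = np.zeros((2, 2))
img2 = np.ones((2, 2))
y1 = np.zeros((2, 2), dtype=int)
y2 = np.full((2, 2), 3)
conf2 = np.array([[0.95, 0.5],
                  [0.99, 0.1]])
img_mix, y_mix = copy_paste(img1, img2, y1, y2, conf2)
```

Only the confidently segmented pixels of $I_2$ are transplanted, so the synthesized boundary around the pasted region is backed by reliable labels.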
Overall Loss Function

The overall Weak-to-Partial (W2P) loss is computed on two pairs: the mixed image $I_{mix}$ with the mixed target $Y_{mix}$, and the original image $I$ with the combined supervision $Y_m$. The loss is defined as follows:
$$P^{mix}_s = f_s(I_{mix}, \theta_s), \qquad P_s = f_s(I, \theta_s), \qquad (16)$$
$$L_{sem} = CE(P^{mix}_s, Y_{mix}) + CE(P_s, Y_m). \qquad (17)$$
Since the teacher model exhibits superior robustness to noise, only the teacher is retained for the inference phase.

Experiments

Experimental Settings

Our experiments use the PASCAL VOC 2012 and MS COCO 2014 benchmarks for WSSS. PASCAL VOC 2012 has 10,582 training images, 1,449 validation images, and 1,456 testing images across 21 categories. MS COCO 2014 has 81 categories with 82,783 training images and 40,504 validation images. We use the mean Intersection-over-Union (mIoU) as the evaluation metric and the SLIC (Achanta et al. 2010) algorithm for superpixel generation. To verify the effectiveness of W2P, we present extensive ablation studies on the PASCAL VOC 2012 val set. In the first stage of WSSS, we use IRN to generate pseudo-labels unless otherwise specified. In the second stage, the W2P framework adopts DeepLabV3+ with an ImageNet-pretrained ResNet-101 backbone as the segmentation network, following prior studies. We exclude general tricks such as an output stride of 8 and COCO-pretrained models. During inference, we follow established practice and use multi-scale testing along with dense CRF. The hyperparameters of W2P need minimal tuning: the thresholds $\tau_1$ and $\tau_2$ are set to 0.9 and $\lambda$ to 0.99. The BPND stage is trained for 8 epochs on VOC and 4 epochs on COCO, while the PSL stage takes 72 epochs on VOC and 36 epochs on COCO. A batch size of 16 is used for all experiments.

Comparison with State-of-the-arts

PASCAL VOC 2012. Table 2 shows W2P's performance compared to state-of-the-art WSSS methods on PASCAL VOC 2012.
W2P achieves exceptional results, with mIoU scores of 74.0 and 73.9 on the val and test sets using an ImageNet-pretrained backbone, setting a new state of the art in image-level WSSS. W2P outperforms IRN by 10.7 and 9.3 points, and also surpasses the other IRN-based methods ReCAM (by 5.5 and 5.5) and AMN (by 4.5 and 4.3). Compared to methods that use saliency maps from saliency detection models, such as SANCE and DRS, our method demonstrates significant superiority. Moreover, our method outperforms MCTformer and other transformer-based methods.

MS COCO 2014. We also report the performance of our method on the challenging MS COCO 2014 dataset to showcase its superiority. Table 3 presents the comparison results on the MS COCO 2014 validation set. Our W2P achieves a new state-of-the-art mIoU of 46.4, demonstrating its effectiveness on a large-scale dataset.

Method               Backbone   Val   Test
ICD (CVPR20)         ResNet101  67.8  68.0
EPS (CVPR21)         ResNet101  71.0  71.8
EDAM (CVPR21)        ResNet101  70.9  70.6
AuxSeg (ICCV21)      ResNet38   69.0  68.6
DRS (AAAI21)         ResNet101  71.2  71.4
SANCE (CVPR22)       ResNet101  72.0  72.9
IRN (CVPR19)         ResNet101  63.5  64.8
SEAM (CVPR20)        ResNet38   64.5  65.7
URN (AAAI22)         ResNet101  69.5  69.7
ReCAM (CVPR22)       ResNet101  68.5  68.4
ADELE (CVPR22)       ResNet101  69.3  68.8
PPC (CVPR22)         ResNet101  67.7  67.4
AMN (CVPR22)         ResNet101  69.5  69.6
AEFT (ECCV22)        ResNet101  70.9  71.7
BECO (CVPR23)        ResNet101  72.1  71.8
W2P (Ours)           ResNet101  74.0  73.9
AFA (CVPR22)         MiT-B1     69.3  68.8
MCTformer (CVPR22)   ResNet38   70.9  71.7
ViT-PCM (ECCV22)     ResNet101  72.1  71.8
ToCo (CVPR23)        ViT-B      69.8  70.5
W2P (Ours)           MiT-B2     76.0  75.7
Table 2: Performance of WSSS methods in mIoU on PASCAL VOC 2012 val and test.

Method               Backbone   Sup.  Val
IRN (CVPR19)         ResNet101  I     41.4
CDA (ICCV21)         ResNet38   I     33.2
RIB (NeurIPS21)      ResNet101  I     43.8
MCTformer (CVPR22)   ResNet38   I     42.0
URN (CVPR22)         ResNet101  I     40.7
BECO (CVPR23)        ResNet101  I     45.1
W2P (Ours)           ResNet101  I     46.4
Table 3: Performance comparison of WSSS methods in terms of mIoU on the COCO val set.

Improvement on boundary segmentation.
To validate the predictions of W2P on boundary areas, we present qualitative comparisons from the VOC val set in Figure 5. Compared with IRN and BECO, our W2P improves predictions on challenging boundary areas and enhances object segmentation. Additionally, we provide quantitative results of the boundary improvement in Table 4.

IRN    ReCAM  AMN    BECO   W2P (ours)
24.7   28.0   29.2   33.4   36.0
Table 4: Performance of different methods in terms of boundary mIoU (Cheng et al. 2021) on the VOC 2012 val set.

Figure 5: Visualization of segmentation results on the PASCAL VOC 2012 val set (columns: Image, IRN, BECO, W2P (Ours), GT).

Ablation Study

Analysis of the proposed components. We evaluate the effectiveness of the proposed components on different pseudo-labels generated by IRN, ReCAM, and AMN in Table 5. Using BPND alone means training FSSS directly with the selected clean labels. Employing PSL alone means training the segmentation network with all pseudo-labels, allowing updates in all regions. Equipped with BPND, the W2P framework significantly improves performance from 65.1 to 74.0 with IRN. With M+CRF, performance is further improved from 73.5 to 74.4 with ReCAM. These results validate the effectiveness of our W2P framework for different CAMs. For an in-depth analysis of the PSL module, please refer to Table 6.

BPND  PSL  M+CRF | IRN   ReCAM  AMN
-     -    -     | 65.1  67.1   67.9
Y     -    -     | 67.5  69.2   70.0
-     Y    -     | 70.7  70.9   71.8
Y     Y    -     | 73.1  73.5   74.5
Y     Y    Y     | 74.0  74.4   75.6
Table 5: Performance of different pseudo-labels in terms of mIoU on the VOC 2012 val set. M+CRF denotes label refinement with multi-scale testing and dense CRF.

BPNC  AT  BG | IRN   ReCAM  AMN
-     -   -  | 70.5  70.7   71.6
Y     -   -  | 71.9  71.8   73.5
Y     Y   -  | 73.6  73.6   75.0
Y     Y   Y  | 74.0  74.4   75.6
Table 6: Analysis of the PSL module. AT: Adaptive Threshold. BG: Boundary Generation.

Analysis of the adaptive thresholds. We compare the adaptive thresholds for BPND and PSL with the common strategy (Huang et al. 2022; Han et al. 2018b) of dividing the data into clean and noisy subsets at a fixed ratio. Table 7 shows the superior performance of our AT strategy.

Module  S20   S50   S70   AT
BPND    61.0  70.9  69.5  74.0
PSL     72.8  73.6  73.0  74.0
Table 7: Analysis of clean label selection in the BPND and PSL modules. AT: Adaptive Threshold. Sn: select the n% of the data with the lowest loss values for BPND, or with the highest confidences for PSL.

Impact of hyper-parameters. Tables 8 and 9 show the importance of setting moderate values for both $\tau_1$ and $\tau_2$, which ensures accurate noise removal and the utilization of reliable predictions for correction. As shown in Table 10, the EMA framework performs well with a coefficient above 0.9. We analyze the training time per epoch in Table 11, demonstrating the substantial efficiency advantage of our algorithm over prior work.

tau_1   0.5   0.7   0.9   0.95  0.99
Segm.   72.2  73.8  74.0  73.2  67.6
Table 8: Effect of $\tau_1$ on segmentation mIoU.

tau_2   0.5   0.7   0.9   0.95  0.99
Segm.   73.4  73.1  74.0  73.6  73.0
Table 9: Effect of $\tau_2$ on segmentation mIoU.

lambda  0.5   0.7   0.9   0.99  0.999
Segm.   70.3  71.0  73.7  74.0  73.9
Table 10: Effect of $\lambda$ on segmentation mIoU.

ADELE  BECO  W2P (Ours)
1.7 h  32 m  11 m
Table 11: Training time for one epoch (h: hour, m: minute).

Conclusion

In this study, we present W2P, a new weakly-supervised segmentation method that focuses on the second stage of WSSS for robust learning under noisy labels, specifically targeting boundary segmentation. Using a class-wise GMM paradigm, our noise detection module selects reliable pseudo-labels while accurately preserving boundary annotations. Our partially-supervised learning module utilizes the separated partial supervision to learn from clean labels and to generate accurate signals for the noisy parts. Additionally, we propose a boundary correction strategy and a boundary generation method to improve boundary segmentation with only image-level supervision. Extensive experiments on multiple benchmarks show that our method surpasses other state-of-the-art WSSS methods.
Acknowledgements

This work was supported by the NSFC under Grant 62072271. Jun-Hai Yong was supported by the NSFC under Grant 62021002.

References

Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; and Süsstrunk, S. 2010. SLIC Superpixels. EPFL.
Ahn, J.; Cho, S.; and Kwak, S. 2019. Weakly Supervised Learning of Instance Segmentation With Inter-Pixel Relations. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Chen, L.; Zhu, Y.; Papandreou, G.; Schroff, F.; and Adam, H. 2018. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In European Conference on Computer Vision (ECCV).
Chen, X.; and Gupta, A. 2015. Webly Supervised Learning of Convolutional Networks. In IEEE/CVF International Conference on Computer Vision (ICCV).
Chen, Z.; Wang, T.; Wu, X.; Hua, X.; Zhang, H.; and Sun, Q. 2022. Class Re-Activation Maps for Weakly-Supervised Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Cheng, B.; Girshick, R.; Dollár, P.; Berg, A. C.; and Kirillov, A. 2021. Boundary IoU: Improving Object-Centric Image Segmentation Evaluation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
den Bergh, M. V.; Boix, X.; Roig, G.; de Capitani, B.; and Gool, L. V. 2012. SEEDS: Superpixels Extracted via Energy-Driven Sampling. In European Conference on Computer Vision (ECCV).
Du, Y.; Fu, Z.; Liu, Q.; and Wang, Y. 2022. Weakly Supervised Semantic Segmentation by Pixel-to-Prototype Contrast. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Fan, J.; Zhang, Z.; Song, C.; and Tan, T. 2020. Learning Integral Objects With Intra-Class Discriminator for Weakly-Supervised Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Goldberger, J.; and Ben-Reuven, E. 2017. Training deep neural-networks using a noise adaptation layer.
In International Conference on Learning Representations (ICLR).
Guo, X.; and Yuan, Y. 2022. Joint Class-Affinity Loss Correction for Robust Medical Image Segmentation with Noisy Labels. In Medical Image Computing and Computer Assisted Intervention (MICCAI).
Han, B.; Yao, J.; Niu, G.; Zhou, M.; Tsang, I. W.; Zhang, Y.; and Sugiyama, M. 2018a. Masking: A New Perspective of Noisy Supervision. In Advances in Neural Information Processing Systems (NeurIPS).
Han, B.; Yao, Q.; Yu, X.; Niu, G.; Xu, M.; Hu, W.; Tsang, I. W.; and Sugiyama, M. 2018b. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in Neural Information Processing Systems (NeurIPS).
Huang, Z.; Bao, Y.; Dong, B.; Zhou, E.; and Zuo, W. 2022. W2N: Switching from Weak Supervision to Noisy Supervision for Object Detection. In European Conference on Computer Vision (ECCV).
Jiang, L.; Zhou, Z.; Leung, T.; Li, L.; and Fei-Fei, L. 2018. MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels. In International Conference on Machine Learning (ICML).
Kim, B.; Han, S.; and Kim, J. 2021. Discriminative Region Suppression for Weakly-Supervised Semantic Segmentation. In AAAI.
Kweon, H.; Yoon, S.; Kim, H.; Park, D.; and Yoon, K. 2021. Unlocking the Potential of Ordinary Classifier: Class-specific Adversarial Erasing Framework for Weakly Supervised Semantic Segmentation. In IEEE/CVF International Conference on Computer Vision (ICCV).
Lee, J.; Choi, J.; Mok, J.; and Yoon, S. 2021a. Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation. In Advances in Neural Information Processing Systems (NeurIPS).
Lee, J.; Kim, E.; Lee, S.; Lee, J.; and Yoon, S. 2019. FickleNet: Weakly and Semi-Supervised Semantic Image Segmentation Using Stochastic Inference. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Lee, J.; Kim, E.; and Yoon, S. 2021. Anti-Adversarially Manipulated Attributions for Weakly and Semi-Supervised Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Lee, M.; Kim, D.; and Shim, H. 2022. Threshold Matters in WSSS: Manipulating the Activation for the Robust and Accurate Segmentation Model Against Thresholds. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Lee, S.; Lee, M.; Lee, J.; and Shim, H. 2021b. Railroad Is Not a Train: Saliency As Pseudo-Pixel Supervision for Weakly Supervised Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Li, J.; Fan, J.; and Zhang, Z. 2022. Towards Noiseless Object Contours for Weakly Supervised Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Li, J.; Socher, R.; and Hoi, S. C. H. 2020. DivideMix: Learning with Noisy Labels as Semi-supervised Learning. In International Conference on Learning Representations (ICLR).
Li, Y.; Duan, Y.; Kuang, Z.; Chen, Y.; Zhang, W.; and Li, X. 2022. Uncertainty Estimation via Response Scaling for Pseudo-Mask Noise Mitigation in Weakly-Supervised Semantic Segmentation. In AAAI.
Li, Y.; Kuang, Z.; Liu, L.; Chen, Y.; and Zhang, W. 2021. Pseudo-mask Matters in Weakly-supervised Semantic Segmentation. In IEEE/CVF International Conference on Computer Vision (ICCV).
Liu, S.; Liu, K.; Zhu, W.; Shen, Y.; and Fernandez-Granda, C. 2022. Adaptive Early-Learning Correction for Segmentation from Noisy Annotations. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Liu, S.; Niles-Weed, J.; Razavian, N.; and Fernandez-Granda, C. 2020. Early-Learning Regularization Prevents Memorization of Noisy Labels. In Advances in Neural Information Processing Systems (NeurIPS).
Long, J.; Shelhamer, E.; and Darrell, T. 2015.
Fully convolutional networks for semantic segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Ma, X.; Huang, H.; Wang, Y.; Romano, S.; Erfani, S. M.; and Bailey, J. 2020. Normalized Loss Functions for Deep Learning with Noisy Labels. In International Conference on Machine Learning (ICML).
Permuter, H. H.; Francos, J. M.; and Jermyn, I. 2006. A study of Gaussian mixture models of color and texture features for image classification and segmentation. Pattern Recognition.
Rong, S.; Tu, B.; Wang, Z.; and Li, J. 2023. Boundary-enhanced Co-training for Weakly Supervised Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Rossetti, S.; Zappia, D.; Sanzari, M.; Schaerf, M.; and Pirri, F. 2022. Max Pooling with Vision Transformers Reconciles Class and Shape in Weakly Supervised Semantic Segmentation. In European Conference on Computer Vision (ECCV).
Ru, L.; Zhan, Y.; Yu, B.; and Du, B. 2022. Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Ru, L.; Zheng, H.; Zhan, Y.; and Du, B. 2023. Token Contrast for Weakly-Supervised Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Shu, Y.; Wu, X.; and Li, W. 2019. LVC-Net: Medical Image Segmentation with Noisy Label Based on Local Visual Cues. In Medical Image Computing and Computer Assisted Intervention (MICCAI).
Su, Y.; Sun, R.; Lin, G.; and Wu, Q. 2021. Context Decoupling Augmentation for Weakly Supervised Semantic Segmentation. In IEEE/CVF International Conference on Computer Vision (ICCV).
Sui, L.; Zhang, C.; and Wu, J. 2022. Salvage of Supervision in Weakly Supervised Object Detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Sun, K.; Shi, H.; Zhang, Z.; and Huang, Y. 2021. ECS-Net: Improving Weakly Supervised Semantic Segmentation by Using Connections Between Class Activation Maps. In IEEE/CVF International Conference on Computer Vision (ICCV).
Tanaka, D.; Ikami, D.; Yamasaki, T.; and Aizawa, K. 2018. Joint Optimization Framework for Learning With Noisy Labels. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Tarvainen, A.; and Valpola, H. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems (NeurIPS).
Wang, C.; Zhang, Y.; Cui, M.; Ren, P.; Yang, Y.; Xie, X.; Hua, X.; Bao, H.; and Xu, W. 2022. Active Boundary Loss for Semantic Segmentation. In AAAI.
Wang, Y.; Ma, X.; Chen, Z.; Luo, Y.; Yi, J.; and Bailey, J. 2019. Symmetric Cross Entropy for Robust Learning With Noisy Labels. In IEEE/CVF International Conference on Computer Vision (ICCV).
Wang, Y.; Zhang, J.; Kan, M.; Shan, S.; and Chen, X. 2020. Self-Supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Wu, T.; Huang, J.; Gao, G.; Wei, X.; Wei, X.; Luo, X.; and Liu, C. H. 2021. Embedded Discriminative Attention Mechanism for Weakly Supervised Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Xia, X.; Liu, T.; Han, B.; Gong, M.; Yu, J.; Niu, G.; and Sugiyama, M. 2022. Sample Selection with Uncertainty of Losses for Learning with Noisy Labels. In International Conference on Learning Representations (ICLR).
Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Álvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. In Advances in Neural Information Processing Systems (NeurIPS).
Xu, L.; Ouyang, W.; Bennamoun, M.; Boussaïd, F.; Sohel, F.; and Xu, D. 2021.
Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation. In IEEE/CVF International Conference on Computer Vision (ICCV).
Xu, L.; Ouyang, W.; Bennamoun, M.; Boussaïd, F.; and Xu, D. 2022. Multi-class Token Transformer for Weakly Supervised Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Yi, R.; Guan, D.; Huang, Y.; and Lu, S. 2023. Class-Independent Regularization for Learning with Noisy Labels. In AAAI.
Yoon, S.; Kweon, H.; Cho, J.; Kim, S.; and Yoon, K. 2022. Adversarial Erasing Framework via Triplet with Gated Pyramid Pooling Layer for Weakly Supervised Semantic Segmentation. In European Conference on Computer Vision (ECCV).
Zhang, Z.; and Sabuncu, M. R. 2018. Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels. In Advances in Neural Information Processing Systems (NeurIPS).
Zhou, B.; Khosla, A.; Lapedriza, À.; Oliva, A.; and Torralba, A. 2016. Learning Deep Features for Discriminative Localization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
HyperEditor: Achieving Both Authenticity and Cross-Domain Capability in Image Editing via Hypernetworks

Hai Zhang1,2, Chunwei Wu1,2, Guitao Cao1,2*, Hailing Wang1,2, Wenming Cao3
1Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, China
2MoE Engineering Research Center of SW/HW Co-design Technology and Application, East China Normal University, China
3College of Information Engineering, Shenzhen University, China
{hzhang22, 52215902005}@stu.ecnu.edu.cn, gtcao@sei.ecnu.edu.cn, 52215902004@stu.ecnu.edu.cn, wmcao@szu.edu.cn

Abstract

Editing real images authentically while also achieving cross-domain editing remains a challenge. Recent studies have focused on converting real images into latent codes and accomplishing image editing by manipulating these codes. However, merely manipulating the latent codes would constrain the edited images to the generator's image domain, hindering the attainment of diverse editing goals. In response, we propose an innovative image editing method called HyperEditor, which utilizes weight factors generated by hypernetworks to reassign the weights of the pre-trained StyleGAN2's generator. Guided by CLIP's cross-modal image-text semantic alignment, this innovative approach enables us to simultaneously accomplish authentic attribute editing and cross-domain style transfer, a capability not realized in previous methods. Additionally, we ascertain that modifying only the weights of specific layers in the generator can yield an equivalent editing result. Therefore, we introduce an adaptive layer selector, enabling our hypernetworks to autonomously identify the layers requiring output weight factors, which can further improve our hypernetworks' efficiency. Extensive experiments on abundant challenging datasets demonstrate the effectiveness of our method.

Introduction

The primary objective of image editing is to modify specific properties of real images by leveraging conditions.
*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

In recent years, image editing has emerged as one of the most dynamic and vibrant research areas in academia and industry. Several studies (Shen et al. 2019; Härkönen et al. 2020; Liu et al. 2023; Revanur et al. 2023; Liu, Song, and Chen 2023; Patashnik et al. 2021) have investigated the extensive latent semantic representations in StyleGAN2 (Karras et al. 2020) and have successfully achieved diverse and authentic image editing through the manipulation of latent codes. These methods share a common characteristic: finding the optimal target latent codes for substituting the initial latent codes, thereby advancing the source image to the target image. However, latent codes with high reconstructability often exhibit weak editability (Pehlivan, Dalva, and Dundar 2023). Additionally, if the target image falls outside the image domain of the generator, it becomes difficult to achieve cross-domain image editing purely through the manipulation of the latent codes. Thus, we pondered the question: "Is it possible to achieve both authentic image attribute editing and cross-domain image style transfer simultaneously?" Recently, (Alaluf et al. 2022; Dinh et al. 2022) utilized hypernetworks to reassign the generator's weights, achieving a more precise image reconstruction by gradually compensating for the missing details of the source image in the reconstructed image. Inspired by this, we find that modulating the generator's weights can achieve detailed attribute changes in images. Therefore, we apply the concept of weight reassignment to the image editing task. In our work, we propose a novel image editing method called HyperEditor, which directly conducts image editing by utilizing weight factors generated by hypernetworks to reassign the weights of StyleGAN2's generator. Unlike traditional methods of model weight fine-tuning (Gal et al.
2022), which often involve retraining the pre-trained model, our approach involves scaling and reassigning the weights of the generator using weight factors. As a result, our method offers better controllability, allowing for authentic attribute editing (e.g., facial features, hair, etc.). Moreover, due to our method's capability to modify the generator's weights, it can effectively perform cross-domain editing operations, which might be challenging to achieve solely by manipulating the latent codes. To the best of our knowledge, our method is the first to achieve authentic attribute editing and cross-domain style editing simultaneously. We pondered a question: must we reassign all layers in StyleGAN2's generator when editing a single attribute? Based on experimental findings, we observed that only a few layers' weight factors undergo significant changes before and after editing. Thus, we propose the adaptive layer selector, enabling hypernetworks to choose the layers that require outputting weight factors autonomously. Consequently, we can maximize the effect of weight factors while achieving comparable results. During model training, aligning the generated images with the target conditions poses a challenge due to the lack of paired datasets before and after editing in the real world. Recently, some methods (Wei et al. 2022; Kocasari et al. 2022) use CLIP (Radford et al. 2021) to convert image features into pseudo-text features and align them with genuine-text features.
In our work, we leverage the self-supervised learning capacity of CLIP and introduce the directional CLIP loss to supervise model training. This process aligns the difference set of pseudo-text features before and after editing with the difference set of authentic-text pair features, all in a direction-based manner. It makes our approach more focused on cross-modal representations of local semantics, enhances the convergence capability of the model, and prevents the generation of adversarial effects.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)

Figure 1: The overall structure of our HyperEditor. Given a text pair (t0, t1) and an initial image x, we utilize the CLIP text encoder to extract features for (t0, t1) and compute their difference, resulting in ∆t. The image feature extractor processes x to obtain its feature representation, which is then refined with the conditional information ∆t through the fusion modulation module (FMM), yielding an intermediate feature map, x̂. Using the adaptive layer selector, we identify a sequence of layers, L, that require outputting weight factors. Subsequently, our hypernetworks generate weight factors, ∆, based on x̂ and L, which are used to reassign the generator's weights. Moreover, we input the latent codes winit of the initial image. Finally, the generated image after editing, y = G(θ + ∆·θ, winit), is obtained.
Additionally, we introduce a fusion modulation module that refines intermediate feature maps using text prompts as scaling and shifting factors. This enables different text prompts to manipulate the hypernetworks and generate various weight factors. Consequently, our approach enables a single model to accomplish diverse image editing tasks. Overall, our contributions can be summarized as follows:

• We surpass the constraints of prior image editing techniques and introduce a novel image editing framework called HyperEditor. This framework utilizes hypernetworks to reassign the weights of StyleGAN2's generator and leverages CLIP's cross-modal semantic alignment ability to supervise our model's training. Consequently, HyperEditor can not only authentically modify attributes in images but also achieve cross-domain style editing.

• We propose an adaptive layer selector that allows hypernetworks to autonomously determine the layers that require outputting weight factors when editing a single attribute, maximizing the efficiency of the hypernetworks.

Related Work

Image Editing

Many studies (Saha et al. 2021; Zhu et al. 2022; Roich et al. 2022) have investigated how to leverage the latent space of pre-trained generators for image editing. Particularly, with the advent of StyleGAN2, image editing through manipulation of the latent codes has become a prevalent research topic. In (Shen et al. 2019), the authors utilized linear variation to achieve a disentangled representation of the latent codes in StyleGAN. They completed accurate image editing by decoupling entangled semantics and subspace projection. In (Härkönen et al. 2020), the authors proposed using principal component analysis to identify and modify the latent directions in latent codes, thereby enabling image editing. TediGAN (Xia et al.
2021) introduced a visual-language similarity module that maps linguistic representations into a latent space that aligns with visual representations, allowing text-guided image editing. StyleCLIP (Patashnik et al. 2021) combines the robust cross-modal semantic alignment capability of CLIP with the generative power of StyleGAN2. It presents three text-driven image editing methods, namely latent optimization, latent mapping, and global directions, to achieve image editing in an unsupervised or self-supervised manner. Subsequently, a series of CLIP+StyleGAN2 methods (Wei et al. 2022; Lyu et al. 2023) have been introduced. These methods utilize mappers to learn style residuals and transform the original image's latent codes toward the target latent codes. Furthermore, several studies (Zeng, Lin, and Patel 2022; Revanur et al. 2023; Hou et al. 2022) have incorporated mask graphs into the network to provide more accurate supervision for specific attribute changes. We can observe that all these methods involve the manipulation of latent codes. However, if the target image exceeds the image domain of the generator, it becomes challenging to rely solely on modifying latent codes to achieve cross-domain editing operations. Thus, we take a different approach, focusing on the generator's weights and performing image editing by reassigning these weights.
Our method is entirely distinct from theirs, as it does not involve manipulating latent codes during image editing. Our approach sets the groundwork for developing novel image editing techniques.

Figure 2: The structure of a hypernetwork. It consists of downsampled convolutions and fully connected layers. The hypernetwork takes the intermediate feature map x̂ as input and predicts the weight factor ∆i for convolutional layer i of StyleGAN2's generator.

Hypernetworks

A hypernetwork is an auxiliary neural network responsible for generating weights for another network, often called the primary network. It was initially introduced by (Ha, Dai, and Le 2017). Training the hypernetworks on extensive datasets can adjust the weights of the main network through appropriate weight shifts, resulting in a more expressive model. Since its proposal, hypernetworks have found applications in various domains, including semantic segmentation (Nirkin, Wolf, and Hassner 2021), neural architecture search (Zhang, Ren, and Urtasun 2019), 3D modeling (Littwin and Wolf 2019; Sitzmann et al. 2020), continual learning (von Oswald et al. 2020), and more. Recently, hypernetworks have also been applied to the StyleGAN inversion task. Both (Alaluf et al. 2022) and (Dinh et al. 2022) have developed hypernetwork structures to enhance the quality of image reconstruction. However, they employ additional methods (such as StyleCLIP, etc.)
to complete image editing. In contrast, we utilize hypernetworks directly to accomplish image editing tasks. Moreover, our method can achieve both authentic attribute editing and cross-domain style editing simultaneously. To the best of our knowledge, this is the first occurrence of such an approach in the field of image editing.

Method

The overall structure of our model is illustrated in Figure 1, providing an overview of the image editing process. The following sections will comprehensively analyze each component in our approach.

The Design of Expressive Hypernetworks

Our approach is to reassign the generator's weights for image editing, which requires our hypernetworks to be expressive, allowing us to control the generator effectively. This control empowers us to edit images both authentically and across domains.

Figure 3: Overview of the adaptive layer selector. We utilize random latent codes as optimization objects and employ the directional CLIP loss for a few iterations to dynamically select layers with significant differences between target and source codes. The grid part indicates parameter freezing, while the solid-colored part indicates optimization training.

The details of our hypernetworks are depicted in Figure 2. It takes the intermediate feature map x̂ ∈ R^{512×16×16} as input. Firstly, we extract the features of x̂ and obtain x̂′_i ∈ R^{512×1×1} through four down-sampling convolution operations. We then reshape x̂′_i as x̂″_i ∈ R^{512}, and it undergoes a series of fully connected operations.
In the fully connected operations, we first employ two distinct fully connected layers to expand the dimensions of x̂″_i, yielding two different vectors: σ1(x̂″_i) ∈ R^{k×k×C_i^in} and σ2(x̂″_i) ∈ R^{k×k×C_i^out}. Here, σ1 and σ2 represent FC1 and FC2, respectively, while C_i^in and C_i^out represent the number of channels per convolution kernel and the total number of convolution kernels in the ith layer of StyleGAN2's generator, respectively. Then, we reshape them and calculate the inner product to obtain the vector ∆̂_i ∈ R^{k×k×C_i^out×C_i^in}, which is as follows:

∆̂_i = R(σ1(x̂″_i)) ⊗ R(σ2(x̂″_i))  (1)

where R denotes the reshape operation, and ⊗ represents multidimensional matrix multiplication. To expand the representation space of ∆̂_i, we conduct two consecutive fully connected (FC) operations on ∆̂_i, followed by reshaping, resulting in the weight factors ∆_i ∈ R^{C_i^out×C_i^in×k×k}. We denote the weights of the ith layer in StyleGAN2 as θ_i = {θ_i^{j,k} | 0 ≤ j ≤ C_i^out, 0 ≤ k ≤ C_i^in}, where j indexes the jth convolutional kernel, and k denotes the kth channel of the convolutional kernel. To achieve image editing, we reassign the weights of the generator based on equation 2. At this point, we obtain the final generated image after editing, y = G(θ̂, winit), where winit is the latent vector of the original image x.

θ̂_i^{j,k} = θ_i^{j,k} + ∆_i^{j,k} · θ_i^{j,k}  (2)

Figure 4: Results of image editing on the FFHQ dataset by using our method. The target attributes are located above the images.

Directional CLIP Guidance

In the real world, paired image datasets where specific attributes change are scarce. This poses a challenge to supervising the alignment of generated images with target images efficiently.
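As a concrete illustration, the rank-1 construction of a weight factor (eq. 1) and the multiplicative weight reassignment of eq. 2 can be sketched in plain Python on a toy 2×3 weight matrix. The shapes and values below are hypothetical; a real implementation would operate on StyleGAN2's full convolution tensors.

```python
# Sketch (plain Python, toy shapes) of eqs. 1-2:
# eq. (1): Delta_i is the outer product of two predicted vectors;
# eq. (2): theta_hat = theta + Delta * theta (elementwise scaling).

def outer_product(u, v):
    """Rank-1 weight factor: delta[j][k] = u[j] * v[k]."""
    return [[uj * vk for vk in v] for uj in u]

def reassign_weights(theta, delta):
    """eq. (2): theta_hat[j][k] = theta[j][k] + delta[j][k] * theta[j][k]."""
    return [
        [t + d * t for t, d in zip(t_row, d_row)]
        for t_row, d_row in zip(theta, delta)
    ]

# Toy 2x3 "convolution" weight (C_out = 2, C_in = 3, k collapsed to 1).
theta = [[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0]]
u = [0.1, -0.5]      # stand-in for the FC2 output (per output channel)
v = [1.0, 0.0, 2.0]  # stand-in for the FC1 output (per input channel)

delta = outer_product(u, v)
theta_hat = reassign_weights(theta, delta)
```

Because ∆_i scales each weight multiplicatively, a zero factor leaves the corresponding weight untouched, while a factor of −1 zeroes it out entirely.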
To tackle this issue, we leverage the CLIP model's potent cross-modal semantic alignment capability to facilitate the transformation of original images into target images in a self-supervised learning manner. In other words, the training of the entire model can be accomplished using only the original image and the target text prompt. In existing work, (Wei et al. 2022) utilizes CLIP to directly align the generated image with the text conditions, as depicted in equation 3, where E_i represents the image encoder of CLIP, E_t represents the text encoder of CLIP, cos(·, ·) represents the cosine similarity, and y is the generated image.

L_CLIP^global = 1 − cos(E_i(y), E_t(Text))  (3)

Since specific attribute changes typically involve localized modifications, direct semantic alignment may lead to global feature alterations, making it challenging for the network to converge quickly. To address this issue, we draw inspiration from the approaches in (Kwon and Ye 2022; Lyu et al. 2023) and introduce the directional CLIP loss to align the text condition with the image. The process of semantic alignment is illustrated in equation 4, where Ty and Tx represent the target and source text, respectively (e.g., ["face with smile", "face"]). At this stage, we only need to ascertain the CLIP feature direction between the original and edited images. As the model continues to train, the generated image changes solely along this direction, ensuring that other local regions remain unaffected.

L_CLIP^direction = 1 − cos(E_i(y) − E_i(x), E_t(Ty) − E_t(Tx))  (4)

Furthermore, in the feature extraction stage of the input image, we introduce a fusion modulation module to integrate the text conditions into the input feature map of the hypernetworks. By modifying the numerical characteristics of the intermediate-layer feature map x̄, we achieve indirect control over the hypernetworks, allowing them to generate various weight factors and enabling a single model to produce diverse editing effects.
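The directional CLIP loss of eq. 4 above reduces to a cosine distance between two difference vectors. A minimal sketch in plain Python, with small stand-in lists in place of real CLIP embeddings (which would come from CLIP's image and text encoders):

```python
# Hedged sketch of the directional CLIP loss (eq. 4). The arguments are
# stand-ins for E_i(y), E_i(x), E_t(Ty), E_t(Tx).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def directional_clip_loss(ei_y, ei_x, et_ty, et_tx):
    """1 - cos(E_i(y) - E_i(x), E_t(Ty) - E_t(Tx))."""
    img_dir = [a - b for a, b in zip(ei_y, ei_x)]  # edit direction in image space
    txt_dir = [a - b for a, b in zip(et_ty, et_tx)]  # edit direction in text space
    return 1.0 - cosine(img_dir, txt_dir)

# Image edit direction [1, 0] is parallel to text direction [2, 0]: loss is 0.
loss = directional_clip_loss([2.0, 1.0], [1.0, 1.0], [3.0, 0.0], [1.0, 0.0])
```

When the image-space edit direction is parallel to the text-space direction the loss vanishes, while an orthogonal edit yields a loss of 1; only the direction of change is penalized, not the absolute embeddings.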
In the fusion modulation module, to maintain consistency between the text condition and the generated image, we also incorporate the CLIP feature direction ∆t between the texts as the conditional embedding. The embedding process is as follows:

x̂ = x̄ · α(∆t) + β(∆t), where ∆t = E_t(Ty) − E_t(Tx)  (5)

Here, α(·) and β(·) refer to multi-layer perceptrons (MLPs), and x̄ denotes the feature map obtained from the input image x through a series of operations such as convolution, batch normalization, and ResNet34 (He et al. 2016).

Adaptive Layer Selector

Recently, (Xia et al. 2021) demonstrated that different layers of the generator in StyleGAN2 control different attributes. Inspired by their findings, we question whether the weight factors of all layers play a role in editing a single attribute. To explore this, we monitored how the weight factors generated by the hypernetworks changed during training. Figure 9 indicates that only a few layers undergo significant changes in the weight factors. Thus, we propose the adaptive layer selector, which allows hypernetworks to determine the layers requiring output weight factors autonomously. This technique reduces the model's parameter count and maximizes the hypernetworks' efficiency when a single model only needs to implement a single attribute edit operation. The schematic diagram of the adaptive layer selector is illustrated in Figure 3. Before training the model, we identify layers that exhibit significant variations in latent codes through latent optimization (Patashnik et al. 2021). These layers are then used as the selected layers to output weight factors. We found that the optimization process can be completed within a few iterations, taking less than 5 seconds. We first sample random noise Z ∼ N(0, 1) to generate the initial latent codes ω_i^0.
Then we optimize the initial latent codes ω_i^0 for m iterations using the directional CLIP loss to obtain ω_i^m. Now, we can obtain the difference between the latent codes before and after optimization as ∆ω_i = ω_i^m − ω_i^0. To adaptively select the suitable layers, we establish an adaptive threshold as follows:

φ = (1/n) Σ_{i=1}^{n} ∆ω_i + λ_std · sqrt( (1/n) Σ_{j=1}^{n} ( ∆ω_j − (1/n) Σ_{i=1}^{n} ∆ω_i )^2 )  (6)

where λ_std represents the trade-off coefficient, and n = 17. If ∆ω_i ≥ φ, then the ith layer is considered a layer requiring output weight factors, and its index is added to the sequence L. Otherwise, no further action is taken. Finally, we obtain the sequence L = {i1, i2, . . . , in}.

Figure 5: Results of cross-domain style editing by using our method. The target styles are located above the images.

Loss Functions

Our objective is to modify specific target regions of the images while preserving the non-target regions unchanged. To achieve this, we follow the previous approach (Alaluf et al. 2022) and use a similarity loss in our work. The calculation process is as follows:

L_Sim = 1 − cos(R_Sim(G(θ + ∆·θ, winit)), R_Sim(G(θ, winit)))  (7)

R_Sim represents the pre-trained ArcFace network (Deng et al. 2019) when the initial images belong to the facial domain, and the pre-trained MoCo model (He et al. 2020) when the initial images belong to a non-facial domain. Additionally, we minimize the variations in global regions through an L2 loss. The calculation process is as follows:

L2 = ∥G(θ + ∆·θ, winit) − G(θ, winit)∥2  (8)

Combined with our goal of text-driven image editing, we define our comprehensive loss function as follows:

L = λ_clip · L_CLIP^direction + λ_norm · L2 + λ_Sim · L_Sim  (9)

where λ_clip and λ_norm are both set to 1.
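The adaptive threshold of eq. 6 is simply the mean of the per-layer latent-code changes plus λ_std standard deviations. A small self-contained sketch in plain Python (the ∆ω values below are made up for illustration):

```python
# Sketch of the adaptive layer selector threshold (eq. 6): layers whose
# latent-code change exceeds mean + lambda_std * std are selected.
import math

def adaptive_threshold(delta_omega, lambda_std):
    """phi = mean(delta_omega) + lambda_std * population std(delta_omega)."""
    n = len(delta_omega)
    mean = sum(delta_omega) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in delta_omega) / n)
    return mean + lambda_std * std

def select_layers(delta_omega, lambda_std=0.6):
    """Return 1-based indices of layers with delta_omega >= phi."""
    phi = adaptive_threshold(delta_omega, lambda_std)
    return [i for i, d in enumerate(delta_omega, start=1) if d >= phi]

# Toy example: layers 4 and 5 change far more than the rest, so only
# they end up in the selected sequence L.
changes = [0.01, 0.02, 0.01, 0.30, 0.25, 0.02, 0.01, 0.02]
layers = select_layers(changes)
```

A larger λ_std raises the threshold and selects fewer layers, trading hypernetwork size against editing flexibility, which matches the paper's observation that a single-attribute model needs weight factors for only a few layers.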
λ_Sim takes the value 0.1 or 0.5, depending on whether R_Sim corresponds to the ArcFace or MoCo network. Notably, the ArcFace and MoCo networks cannot be used simultaneously during the model training process.

Experiments

Implementation Details

To validate the effectiveness of our approach, we conducted extensive experiments on diverse and challenging datasets. For the face domain, we utilized the FFHQ (Karras, Laine, and Aila 2018) dataset with 70,000 images as the training set and the Celeba-HQ (Karras et al. 2018) dataset with 28,000 images as the test set. Additionally, in the supplementary material, we provide image editing results on the Cat, Horse, Car, and Church datasets of LSUN (Yu et al. 2015), as well as the Cat, Dog, and Wild datasets of AFHQ (Choi et al. 2020). It is worth noting that all real images were inverted using the e4e encoder (Tov et al. 2021) to obtain the latent codes, and all generated images were produced using the pre-trained StyleGAN2 generator. Our training was conducted on a 4090 GPU with a batch size of 4 and the Ranger optimizer with a learning rate of 0.001.

Qualitative Evaluation

Results of authentic image editing. To validate the effectiveness of our method in utilizing various text conditions for image editing, we present the results of using text to control 18 different attributes in the image, as shown in Figure 4. The results demonstrate that our method effectively controls specific attributes while preserving irrelevant ones. More editing results from various datasets are shown in the supplementary material.

Results of cross-domain image editing. Figure 5 shows the results of our method editing real images to target images of different domains. The target domain never appears in the training process, which indicates that our method has good domain generalization ability. More cross-domain editing results are shown in the supplementary material.

Comparisons with the SOTA.
Figure 6: The results of comparing our method with DeltaEdit (Lyu et al. 2023), StyleCLIP-LM (Patashnik et al. 2021), StyleCLIP-GD (Patashnik et al. 2021), and StyleGAN-NADA (Gal et al. 2022) for image editing.

Figure 7: The weight factors produced by HyperEditor trained on the FFHQ dataset are applied to other domain generators (e.g., StyleGAN-NADA (Gal et al. 2022)).

In Figure 6, we compare our method with three current state-of-the-art approaches, namely DeltaEdit (Lyu et al. 2023), StyleGAN-NADA (Gal et al. 2022), and StyleCLIP (Patashnik et al. 2021). Among these methods, DeltaEdit and StyleCLIP utilize the manipulation of latent codes for image editing. In contrast, StyleGAN-NADA achieves style transfer by fine-tuning the generator's weights through retraining. For StyleCLIP, due to the extensive time consumption of optimization-based methods, we only consider the two variants based on the latent mapper and global directions, named StyleCLIP-LM and StyleCLIP-GD, respectively. Compared with DeltaEdit, whose editing results remain unchanged from the input image for attributes like "Purple hair" and "Young face", our method excels at achieving accurate modifications of specific image attributes. Compared with StyleCLIP-LM, our method not only edits more accurately but also better protects non-relevant regions. Compared with StyleCLIP-GD, our method achieves better image editing results without parameter adjustment.
Furthermore, compared to the approaches that manipulate the latent codes, our method can accomplish cross-domain image editing, while they cannot. Nevertheless, while StyleGAN-NADA excels in style transfer, our method outperforms it regarding the controllability of detailed attribute editing and the preservation of facial identity. More qualitative comparison results are shown in the supplementary material.

Weight factors' transferability. In this section, we apply the weight factors trained on the FFHQ dataset to generators in various domains. The edited images are depicted in Figure 7. The results demonstrate that the generated weight factors can be effectively transferred to generators in other domains, enabling authentic facial attribute editing without compromising the target style. More results are shown in the supplementary material.

Quantitative Evaluation

In Table 1, we provide the objective measures, including FID, PSNR, SSIM, LPIPS, IDS (identity similarity), and CS (CLIP score). All the results are the average values obtained by calculating the images before and after changing the ten attributes. Compared with state-of-the-art methods, our approach achieves the highest CLIP score (27.35), indicating that our results are more consistent with the target condition, confirming that HyperEditor can conduct more authentic image editing operations. Furthermore, in addition to achieving authentic editing, our method excels at preserving the image regions that are irrelevant to the editing target (as evidenced by Table 1, where we achieve the best results in the first four columns). Moreover, we calculated the FID values for the variations of the "smile" and "double chin" attributes, which are displayed in Table 1.
The minimal FID values signify the closest distribution between the images produced by our method and the original images, and they also reflect the protection of non-edited regions by our approach.

Methods | PSNR↑ | LPIPS↓ | SSIM↑ | IDS↑ | FID-0↓ | FID-1↓ | CS↑ | Nop↓
StyleCLIP-GD α = 5 | 24.96 | 0.28 | 0.79 | 0.72 | 11.73 | 44.05 | 24.82 | -
StyleCLIP-GD α = 10 | 20.51 | 0.33 | 0.76 | 0.64 | 17.08 | 201.95 | 25.28 | -
StyleCLIP-LM | 20.57 | 0.23 | 0.81 | 0.84 | 8.49 | 14.58 | 26.35 | 33.52M
StyleGAN-NADA | 18.75 | 0.27 | 0.74 | 0.58 | 56.20 | 59.54 | 26.69 | -
DeltaEdit | 23.31 | 0.23 | 0.82 | 0.81 | 10.04 | 8.6 | 22.89 | 82.76M
Ours | 25.33 | 0.22 | 0.85 | 0.85 | 6.19 | 7.19 | 27.35 | 71.48M
Ours (Global-CLIP) | 23.21 | 0.39 | 0.77 | 0.71 | 120.18 | 61.07 | 22.96 | -
Ours (w/ adaptive layer selector) | 24.87 | 0.18 | 0.87 | 0.83 | 8.84 | 7.5 | 26.37 | 15.66M

Table 1: Quantitative evaluation of edited face images. FID-0 and FID-1 are obtained by calculating 2000 images before and after editing the "smile" and "double chin" attributes, respectively; the other values are obtained by computing the mean over images before and after editing for the ten different facial attributes. Nop denotes the number of model parameters.

Figure 8: The results are presented in three cases: (a) without the adaptive layer selector and with Global-CLIP guidance, (b) with the adaptive layer selector and Directional-CLIP guidance, and (c) without the adaptive layer selector and with Directional-CLIP guidance.

Ablation Studies

Effectiveness of directional CLIP loss. The Global-CLIP-guided text-driven image editing results are presented in Table 1 and Figure 8. The results indicate that the Global-CLIP-guided method disrupts the global characteristics of the original image, leading to either significant differences between the generated image and the original image or blurriness.
We attribute this to CLIP causing perturbations to the global feature semantics while guiding the local feature semantics to change. Additionally, (Gal et al. 2022) mentions the occurrence of adversarial interference during the image generation process guided by Global-CLIP. In contrast, our directional CLIP loss effectively protects the global feature semantics and provides more stable supervised training.

Figure 9: The variation in the average weight factors for each layer before and after editing specific attributes. The horizontal axis represents the layer index.

Rationality of adaptive layer selector. In Figure 9, we have documented the fluctuations in the average weight factors for each layer from the initiation of hypernetwork training until convergence. The results reveal that only some layers' weight factors undergo changes, confirming that it is not necessary to output weight factors for all layers during the image editing process. Table 1 and Figure 8 present results of image editing when we utilize the adaptive layer selector; here, λ_std = 0.6. The results demonstrate that using the adaptive layer selector can reduce the parameter count of the hypernetworks by 80% while still achieving equivalent results. Note that the adaptive layer selector is only suitable when a single model edits a single attribute. If a single model is used to edit multiple attributes, the weight factors of all layers can be effectively used, so there is no need to reduce the number of layers that output weight factors. The selection of λ_std is illustrated in the supplementary material.

Conclusions

In this paper, we propose HyperEditor, an innovative framework that leverages hypernetworks to reassign the weights of StyleGAN2's generator and utilizes CLIP for supervised training.
As a result, compared with previous methods that achieve image editing by manipulating latent codes, our HyperEditor enables both authentic attribute editing and cross-domain style transfer. Compared to fine-tuning the generator by retraining, reassigning the generator's weights using hypernetworks offers greater controllability, enabling finer precision in attribute editing and safeguarding the coherence of non-edited regions. Moreover, our fusion modulation module allows diverse editing operations within a single model, and the adaptive layer selector can reduce the model's complexity when editing a single attribute. Our innovative approach opens up the possibility of editing one thing into anything in the future.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 61871186 and 61771322.

References

Alaluf, Y.; Tov, O.; Mokady, R.; Gal, R.; and Bermano, A. 2022. Hyperstyle: Stylegan inversion with hypernetworks for real image editing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 18511–18521.
Choi, Y.; Uh, Y.; Yoo, J.; and Ha, J.-W. 2020. Stargan v2: Diverse image synthesis for multiple domains. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8188–8197.
Deng, J.; Guo, J.; Xue, N.; and Zafeiriou, S. 2019. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4690–4699.
Dinh, T. M.; Tran, A. T.; Nguyen, R.; and Hua, B.-S. 2022. Hyperinverter: Improving stylegan inversion via hypernetwork. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11389–11398.
Gal, R.; Patashnik, O.; Maron, H.; Bermano, A. H.; Chechik, G.; and Cohen-Or, D. 2022. StyleGAN-NADA: CLIP-guided domain adaptation of image generators.
ACM Transactions on Graphics (TOG), 41(4): 1–13.
Ha, D.; Dai, A. M.; and Le, Q. V. 2017. HyperNetworks. In International Conference on Learning Representations.
Härkönen, E.; Hertzmann, A.; Lehtinen, J.; and Paris, S. 2020. Ganspace: Discovering interpretable gan controls. Advances in Neural Information Processing Systems, 33: 9841–9850.
He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729–9738.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
Hou, X.; Shen, L.; Patashnik, O.; Cohen-Or, D.; and Huang, H. 2022. Feat: Face editing with attention. arXiv preprint arXiv:2202.02713.
Karras, T.; Aila, T.; Laine, S.; and Lehtinen, J. 2018. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In International Conference on Learning Representations.
Karras, T.; Laine, S.; and Aila, T. 2018. A Style-Based Generator Architecture for Generative Adversarial Networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4396–4405.
Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; and Aila, T. 2020. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8110–8119.
Kocasari, U.; Dirik, A.; Tiftikci, M.; and Yanardag, P. 2022. StyleMC: multi-channel based fast text-guided image generation and manipulation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 895–904.
Kwon, G.; and Ye, J. C. 2022. Clipstyler: Image style transfer with a single text condition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18062–18071.
Littwin, G.; and Wolf, L. 2019.
Deep meta functionals for shape representation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1824–1833.
Liu, H.; Song, Y.; and Chen, Q. 2023. Delving StyleGAN Inversion for Image Editing: A Foundation Latent Space Viewpoint. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10072–10082.
Liu, Z.; Li, M.; Zhang, Y.; Wang, C.; Zhang, Q.; Wang, J.; and Nie, Y. 2023. Fine-Grained Face Swapping via Regional GAN Inversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8578–8587.
Lyu, Y.; Lin, T.; Li, F.; He, D.; Dong, J.; and Tan, T. 2023. Deltaedit: Exploring text-free training for text-driven image manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6894–6903.
Nirkin, Y.; Wolf, L.; and Hassner, T. 2021. Hyperseg: Patch-wise hypernetwork for real-time semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4061–4070.
Patashnik, O.; Wu, Z.; Shechtman, E.; Cohen-Or, D.; and Lischinski, D. 2021. Styleclip: Text-driven manipulation of stylegan imagery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2085–2094.
Pehlivan, H.; Dalva, Y.; and Dundar, A. 2023. Styleres: Transforming the residuals for real image editing with stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1828–1837.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR.
Revanur, A.; Basu, D.; Agrawal, S.; Agarwal, D.; and Pai, D. 2023. CoralStyleCLIP: Co-Optimized Region and Layer Selection for Image Editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12695–12704.
Roich, D.; Mokady, R.; Bermano, A. H.; and Cohen-Or, D. 2022. Pivotal tuning for latent-based editing of real images. ACM Transactions on Graphics (TOG), 42(1): 1–13.
Saha, R.; Duke, B.; Shkurti, F.; Taylor, G. W.; and Aarabi, P. 2021. Loho: Latent optimization of hairstyles via orthogonalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1984–1993.
Shen, Y.; Gu, J.; Tang, X.; and Zhou, B. 2019. Interpreting the Latent Space of GANs for Semantic Face Editing. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9240–9249.
Sitzmann, V.; Martel, J.; Bergman, A.; Lindell, D.; and Wetzstein, G. 2020. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33: 7462–7473.
Tov, O.; Alaluf, Y.; Nitzan, Y.; Patashnik, O.; and Cohen-Or, D. 2021. Designing an encoder for stylegan image manipulation. ACM Transactions on Graphics (TOG), 40(4): 1–14.
von Oswald, J.; Henning, C.; Grewe, B. F.; and Sacramento, J. 2020. Continual learning with hypernetworks. In International Conference on Learning Representations.
Wei, T.; Chen, D.; Zhou, W.; Liao, J.; Tan, Z.; Yuan, L.; Zhang, W.; and Yu, N. 2022. Hairclip: Design your hair by text and reference image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18072–18081.
Xia, W.; Yang, Y.; Xue, J.-H.; and Wu, B. 2021. Tedigan: Text-guided diverse face image generation and manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2256–2265.
Yu, F.; Seff, A.; Zhang, Y.; Song, S.; Funkhouser, T.; and Xiao, J. 2015. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365.
Zeng, Y.; Lin, Z.; and Patel, V. M. 2022. Sketchedit: Mask-free local image manipulation with partial sketches.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5951–5961.
Zhang, C.; Ren, M.; and Urtasun, R. 2019. Graph HyperNetworks for Neural Architecture Search. In International Conference on Learning Representations.
Zhu, Y.; Liu, H.; Song, Y.; Yuan, Z.; Han, X.; Yuan, C.; Chen, Q.; and Wang, J. 2022. One model to edit them all: Free-form text-driven image manipulation with semantic modulations. Advances in Neural Information Processing Systems, 35: 25146–25159.
RadOcc: Learning Cross-Modality Occupancy Knowledge through Rendering Assisted Distillation
Haiming Zhang1,2*, Xu Yan3†, Dongfeng Bai3, Jiantao Gao3, Pan Wang3, Bingbing Liu3, Shuguang Cui2,1, Zhen Li2,1†
1FNii, CUHK-Shenzhen, Shenzhen, China 2SSE, CUHK-Shenzhen, Shenzhen, China 3Huawei Noah's Ark Lab
{haimingzhang@link., xuyan1@link., lizhen@}cuhk.edu.cn

Abstract
3D occupancy prediction is an emerging task that aims to estimate the occupancy states and semantics of 3D scenes using multi-view images. However, image-based scene perception encounters significant challenges in achieving accurate prediction due to the absence of geometric priors. In this paper, we address this issue by exploring cross-modal knowledge distillation in this task, i.e., we leverage a stronger multi-modal model to guide the visual model during training. In practice, we observe that directly applying feature or logits alignment, proposed and widely used in bird's-eye-view (BEV) perception, does not yield satisfactory results. To overcome this problem, we introduce RadOcc, a Rendering assisted distillation paradigm for 3D Occupancy prediction. By employing differentiable volume rendering, we generate depth and semantic maps in perspective views and propose two novel consistency criteria between the rendered outputs of teacher and student models. Specifically, the depth consistency loss aligns the termination distributions of the rendered rays, while the semantic consistency loss mimics the intra-segment similarity guided by vision foundation models (VFMs). Experimental results on the nuScenes dataset demonstrate the effectiveness of our proposed method in improving various 3D occupancy prediction approaches, e.g., our proposed methodology enhances our baseline by 2.2% in the metric of mIoU and achieves 50% mIoU on the Occ3D benchmark.
Introduction
3D occupancy prediction (3D-OP) is a crucial task within the field of 3D scene understanding, which has garnered considerable attention, particularly in the field of autonomous driving (Wang et al. 2023b; Tong et al. 2023; Tian et al. 2023). In contrast to other 3D perception tasks, such as object detection using bounding box representations, 3D-OP involves the simultaneous estimation of both the occupancy state and semantics in the 3D space using multi-view images (Tian et al. 2023). This is achieved by leveraging geometry-aware cubes to represent a wide range of objects and background shapes.

*Work done during an internship at Huawei Noah's Ark Lab.
†Corresponding authors: Xu Yan and Zhen Li.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Rendering Assisted Distillation. (a) Existing methods conduct alignment on features or logits. (b) Our proposed RadOcc method constrains the rendered depth maps and semantics simultaneously.

In the realm of 3D occupancy prediction, remarkable advancements have been achieved thus far. These advancements have been made possible by adopting a pipeline inspired by Bird's Eye View (BEV) perception, which utilizes either forward projection (Huang et al. 2021) or backward projection (Li et al. 2022b) techniques for view transformation. This process generates 3D volume features that capture the spatial information of the scene, which are then fed into the prediction head for occupancy predictions. However, relying solely on the camera modality poses challenges for accurate prediction due to the lack of geometric perception.
To overcome this bottleneck, two mainstream solutions have emerged in the field of BEV perception: 1) integrating geometry-aware LiDAR input and fusing the complementary information of the two modalities (Liu et al. 2023), and 2) conducting knowledge distillation to transfer the complementary knowledge from other modalities to a single-modality model (Zhou et al. 2023a). As the first solution introduces additional network designs and computational overhead, recent works have increasingly focused on the second solution, aiming to develop stronger single-modal models through distilling multi-modal knowledge. In this paper, we present the first investigation into cross-modal knowledge distillation for the task of 3D occupancy prediction. Building upon existing methods in the field of BEV perception that leverage BEV or logits consistency for knowledge transfer, we extend these distillation techniques to aligning voxel features and voxel logits in the task of 3D occupancy prediction, as depicted in Figure 1(a). However, our preliminary experiments reveal that these alignment techniques face significant challenges in achieving satisfactory results in the task of 3D-OP; in particular, the former approach introduces negative transfer. This challenge may stem from the fundamental disparity between 3D object detection and occupancy prediction, where the latter is a more fine-grained perception task that requires capturing geometric details as well as background objects. To address the aforementioned challenges, we propose RadOcc, a novel approach that leverages differentiable volume rendering for cross-modal knowledge distillation. The key idea of RadOcc is conducting alignment between the rendered results generated by the teacher and student models, as shown in Figure 1(b). Specifically, we employ volume rendering (Mildenhall et al.
2021) on voxel features using the camera's intrinsic and extrinsic parameters, which enables us to obtain corresponding depth maps and semantic maps from different viewpoints. To achieve better alignment between the rendered outputs, we introduce the novel Rendered Depth Consistency (RDC) and Rendered Semantic Consistency (RSC) losses. On the one hand, the RDC loss enforces consistency of the ray distribution, which enables the student model to capture the underlying structure of the data. On the other hand, the RSC loss capitalizes on the strengths of vision foundation models (Kirillov et al. 2023) and leverages pre-extracted segments to conduct an affinity distillation. This criterion allows the model to learn and compare semantic representations of different image regions, enhancing its ability to capture fine-grained details. By combining the above constraints, our proposed method effectively harnesses cross-modal knowledge distillation, leading to improved performance and better optimization for the student model. We demonstrate the effectiveness of our approach on both dense and sparse occupancy prediction and achieve state-of-the-art results on both tasks. In summary, our main contributions are threefold:
• We propose a rendering assisted distillation paradigm for 3D occupancy prediction, named RadOcc. Our paper is the first to explore cross-modality knowledge distillation in 3D-OP and provides valuable insights into the application of existing BEV distillation techniques for this task.
• Two novel distillation constraints, i.e., rendered depth and semantic consistency (RDC & RSC), are proposed, which effectively enhance the knowledge transfer process through aligning ray distributions and affinity matrices guided by vision foundation models.
• Equipped with the proposed methodology, RadOcc achieves state-of-the-art performance on the Occ3D and nuScenes benchmarks for dense and sparse occupancy prediction.
Furthermore, we verify that our proposed distillation approach can effectively boost the performance of several baseline models.

Related Work
Camera-based 3D Perception
Camera-based 3D perception has emerged as a significant research focus in the field of autonomous driving, owing to its cost-effectiveness and rich visual attributes. Recent advancements have aimed to integrate multiple tasks into a unified framework by transforming image-based features into a Bird's Eye View (BEV) space. One mainstream approach follows the forward projection paradigm proposed in LSS (Philion and Fidler 2020), where multi-view image features are projected onto the BEV plane through predicted depth maps (Huang et al. 2021; Li et al. 2023, 2022a). Another mainstream approach (i.e., backward projection) draws inspiration from DETR3D (Wang et al. 2022b), which involves using learnable queries and a cross-attention mechanism to extract information from image features (Li et al. 2022b; Lu et al. 2022; Jiang et al. 2023). Although these methods effectively compress information onto the BEV plane, they may lose some of the essential structural details inherent in 3D space. Introducing LiDAR priors through cross-modal knowledge distillation makes them ideal for understanding the structure of 3D scenes while keeping efficiency.

3D Occupancy Prediction
The field of 3D occupancy prediction (3D-OP) has garnered significant attention in recent years, with the aim of reconstructing the 3D volumetric scene structure from multi-view images. This area can be broadly classified into two categories based on the type of supervision: sparse prediction and dense prediction. On the one hand, sparse prediction methods utilize LiDAR points as supervision and are evaluated on LiDAR semantic segmentation benchmarks. For instance, TPVFormer (Huang et al. 2023) proposes a tri-perspective view method for predicting 3D occupancy, while PanoOcc (Wang et al.
2023b) unifies the task of occupancy prediction with panoptic segmentation in a coarse-to-fine scheme. On the other hand, dense prediction methods are more akin to the Semantic Scene Completion (SSC) task (Song et al. 2017; Yan et al. 2021), with the core difference being whether to consider the area that the camera cannot capture. Recently, several studies have focused on the task of dense occupancy prediction and introduced new benchmarks using the nuScenes dataset (Caesar et al. 2020) in the same period, such as OpenOccupancy (Wang et al. 2023a), OpenOcc (Tong et al. 2023), SurroundOcc (Wei et al. 2023) and Occ3D (Tian et al. 2023). These works mainly adopt the architecture from BEV perception and use 3D convolution to construct an extra head for occupancy prediction. We note that a concurrent work (Gan et al. 2023) also utilizes the volume rendering technique; however, it naively applies rendered results as auxiliary supervision. In contrast, we are the first to investigate cross-modal knowledge distillation in this field, and our proposed method can be integrated into arbitrary previous works.

Figure 2: Overall framework of RadOcc. It adopts a teacher-student architecture, where the teacher network is a multi-modal model while the student network only takes camera inputs. The predictions of the two networks are utilized to generate rendered depth and semantics through differentiable volume rendering. The newly proposed rendered depth and semantic consistency losses are adopted between the rendered results.
Cross-Modal Knowledge Distillation
Knowledge distillation has been a popular technique in the field of computer vision since its introduction in (Hinton, Vinyals, and Dean 2015). This technique initially involves compressing a large network (teacher) into a more compact and efficient one (student), while simultaneously improving the performance of the student. Over the years, the effectiveness of knowledge distillation has led to its widespread exploration in various computer vision tasks, including object detection (Dai et al. 2021; Guo et al. 2021; Zhang and Ma 2020), semantic segmentation (Hou et al. 2020; Liu et al. 2019) and other tasks (Yan et al. 2022b; Zhao et al. 2023; Yuan et al. 2022; Zhou et al. 2023b). Recently, knowledge distillation has been introduced into 3D perception tasks for knowledge transfer between models using different modalities. For instance, (Chong et al. 2022) transfers depth knowledge of LiDAR points to a camera-based student detector by training another camera-based teacher with LiDAR projected to the perspective view. 2DPASS (Yan et al. 2022a) utilizes multi-scale fusion-to-single knowledge distillation to enhance the LiDAR model with image priors. In the field of BEV perception, CMKD (Hong, Dai, and Ding 2022), BEVDistill (Chen et al. 2022) and UniDistill (Zhou et al. 2023a) perform cross-modality distillation in BEV space. Specifically, these methods transfer prior knowledge through distillation at the feature, relation, and output levels. Although these efforts have greatly enhanced the performance of student models, they cannot achieve satisfactory performance gains when directly applied to the task of 3D occupancy prediction.

Methodology
Problem Setup
3D occupancy prediction leverages multi-view images as input to predict a semantic volume surrounding the ego-vehicle.
Specifically, it takes into account the current multi-view images, denoted as $I^t = \{I^t_1, ..., I^t_n\}$, as well as the previous frames $I^{t-1}, ..., I^{t-k}$, where $k$ represents the number of history frames and $n$ denotes the camera view index. By incorporating this temporal information, the model finally predicts the semantic voxel volume $Y^t \in \{w_1, ..., w_{C+1}\}^{H \times W \times Z}$ for the current frame. Here, $C + 1$ covers $C$ semantic classes plus an occupancy state in the scene, while $w_{(\cdot)}$ represents the voxel grid.

Distillation Architecture
Framework overview. The overall architecture is illustrated in Figure 2, consisting of teacher and student networks. The teacher network takes both LiDAR and multi-view images as input, while the student network solely utilizes multi-view images. Both branches are supervised by ground-truth occupancy, and the distillation constraints are applied between 3D occupancy predictions and rendered results.

Camera-based student. Our student network takes multi-frame multi-view images as input and first extracts features using an image backbone. To leverage the benefits of Bird's Eye View (BEV) perception, we apply pixel-wise depth estimation on image features and then project them from the perspective view into a 3D volume via the view-transform operation proposed in (Huang et al. 2021), forming a low-level volume feature. Moreover, to introduce temporal information into our model, we adopt the technique proposed in (Li et al. 2022a), which dynamically warps and fuses the historical volume feature and produces a fused feature. To obtain more fine-grained predicted shapes, the volume feature is fed into an occupancy decoder to generate the prediction.

Multi-modal teacher. Inspired by LiDAR-based detectors presented in (Shi et al. 2020), the unstructured point clouds are scattered into pillars (Lang et al. 2019). Subsequently, the volume features are extracted by SECOND and SECOND-FPN (Yan, Mao, and Li 2018).
Building upon the success of LiDAR-camera-based BEV detectors, as presented in (Liu et al. 2023), we further concatenate features from the two modalities and process the result with a fully convolutional network to produce the fused features. Finally, a similar occupancy decoder is applied to the fused feature, resulting in the prediction of occupancy.

Figure 3: The analysis of rendered depths. Although the rendered depths of teacher (T) and student (S) are similar, especially for the foreground objects, their ray termination distributions show a great disparity.

Rendering Assisted Distillation
Volume rendering. In this paper, we adopt the volume rendering technique as proposed in NeRF (Mildenhall et al. 2021) to obtain depth and semantic maps for knowledge distillation. By incorporating camera intrinsic and extrinsic parameters, we are able to compute the corresponding 3D ray for each pixel in the 2D image. After that, we employ the volume rendering technique to perform a weighted sum over the sampled points along the ray, thereby calculating the predicted depths and semantics in perspective views. Given $N_p$ sampled points $\{p_i = (x_i, y_i, z_i)\}_{i=1}^{N_p}$ along the ray through pixel $(u, v)$, the rendered depth $\hat{d}$ and semantic logits $\hat{s}$ at this pixel can be calculated via

$$T_i = \exp\Big(-\sum_{j=1}^{i-1} \sigma(p_j)\,\delta_j\Big), \quad (1)$$
$$\hat{d}(u, v) = \sum_{i=1}^{N_p} T_i \big(1 - \exp(-\sigma(p_i)\,\delta_i)\big)\, d(p_i), \quad (2)$$
$$\hat{s}(u, v) = \sum_{i=1}^{N_p} T_i \big(1 - \exp(-\sigma(p_i)\,\delta_i)\big)\, s(p_i), \quad (3)$$

where $d(\cdot)$, $\sigma(\cdot)$ and $s(\cdot)$ are the distance, volume density and semantics of the sampled point, respectively. Since the occupancy network predicts the occupancy probability and semantics, we can easily obtain $\sigma(p_i)$ and $s(p_i)$ by scattering the voxel predictions onto the corresponding sampled point $p_i$. Moreover, $\delta_i = d(p_{i+1}) - d(p_i)$ is the distance between two adjacent sampled points.
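As a concrete illustration, Eqns. (1)–(3) can be sketched for a single ray as follows; the per-point densities, semantics and distances are assumed to be already gathered from the voxel predictions, and all names are illustrative rather than taken from the released code:

```python
import numpy as np

def render_ray(sigma, sem, dist):
    """Volume-render depth and semantics for one ray (Eqns. 1-3).

    sigma: (Np,) volume densities of the sampled points
    sem:   (Np, C) semantic logits of the sampled points
    dist:  (Np,) distances d(p_i) of the sampled points from the camera
    """
    delta = np.diff(dist, append=dist[-1])           # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)             # per-point opacity
    # T_i = exp(-sum_{j<i} sigma_j * delta_j): transmittance before point i
    trans = np.exp(-np.concatenate(([0.0], np.cumsum(sigma * delta)[:-1])))
    weights = trans * alpha                          # ray termination distribution
    depth = np.sum(weights * dist)                   # Eqn. (2)
    semantics = weights @ sem                        # Eqn. (3)
    return depth, semantics, weights

# Example: a ray whose second sample is fully opaque terminates there,
# so the rendered depth collapses to that sample's distance (2.0).
depth, sem_out, w = render_ray(np.array([0.0, 1000.0, 0.0]),
                               np.eye(3),
                               np.array([1.0, 2.0, 3.0]))
```

The returned `weights` are exactly the per-ray termination distribution that the RDC loss later aligns between teacher and student.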
Finally, we obtain depth and semantic maps in the i-th perspective view through collecting results from all pixels, i.e., $S_i = \{\hat{s}(u, v) \mid u \in [1, H], v \in [1, W]\}$ and $D_i = \{\hat{d}(u, v) \mid u \in [1, H], v \in [1, W]\}$, where $(H, W)$ is the size of the view image. To facilitate the definition, we respectively denote the rendered depth and semantic results from the teacher and student as $D^{T/S} = \{D^{T/S}_1, ..., D^{T/S}_n\}$ and $S^{T/S} = \{S^{T/S}_1, ..., S^{T/S}_n\}$, where $n$ is the number of views.

Figure 4: The generation of the affinity matrix. We first adopt a vision foundation model (VFM), i.e., SAM, to extract segments from the original image. After that, we conduct segment grouping on the rendered semantic features within each segment, obtaining the affinity matrix.

Rendered depth consistency. After acquiring the rendered depth, a simplistic approach involves directly imposing constraints between the outputs of the teacher and student models. However, this is a hard constraint, and the differences in rendered depths between the teacher and student models are typically within a narrow range. To address this issue, we propose an innovative approach that aligns the ray termination distribution during the volume rendering process. As shown in Figure 3, we plot the ray distribution over the distance traveled by the ray. Although the rendered depths of the two models are quite similar, their ray distributions show a great discrepancy. When a ray traverses a single object (the red point), we find that the ray termination distribution of the teacher model is typically unimodal, while that of the student exhibits multiple peaks. Aligning this distribution makes the student model tend to predict a latent distribution similar to the teacher's. Finally, the rendered depth consistency (RDC) loss $\mathcal{L}_{rdc}$ is formulated as

$$R^{(\cdot)}_{(u,v)} = \big\{T_i\big(1 - \exp(-\sigma(p_i)\,\delta_i)\big)\big\}_{i=1}^{N_p}, \quad (4)$$
$$\mathcal{L}_{rdc} = \frac{1}{HW}\sum_{u=1}^{H}\sum_{v=1}^{W} D_{KL}\big(R^{teacher}_{(u,v)} \,\big\|\, R^{student}_{(u,v)}\big).$$
(5)
Here, $T_i$ is calculated as in Eqn. (1). The notation $R^{teacher}$ and $R^{student}$ respectively denotes the ray distributions of the teacher and student networks, which are aligned through the KL divergence $D_{KL}(\cdot\|\cdot)$.

Table 1: 3D occupancy prediction performance on Occ3D. † denotes the performance reproduced by official codes. ⋆ means the results provided by the authors. '-T' represents results through test-time augmentation (TTA). Please note that our visual model achieves a benchmark ranking of Top-4 on 16/08/2023, outperforming all previously published methods.
Columns: Method, Image Backbone, mIoU, then per-class IoU: others, barrier, bicycle, bus, car, const. veh., motorcycle, pedestrian, traffic cone, trailer, truck, drive. suf., other flat, sidewalk, terrain, manmade, vegetation.
Performances on Validation Set
MonoScene R101-DCN 6.06 1.75 7.23 4.26 4.93 9.38 5.67 3.98 3.01 5.90 4.45 7.17 14.91 6.32 7.92 7.43 1.01 7.65
CTF-Occ R101-DCN 28.53 8.09 39.33 20.56 38.29 42.24 16.93 24.52 22.72 21.05 22.98 31.11 53.33 33.84 37.98 33.23 20.79 18.00
BEVFormer R101-DCN 39.24 10.13 47.91 24.90 47.57 54.52 20.23 28.85 28.02 25.73 33.03 38.56 81.98 40.65 50.93 53.02 43.86 37.15
PanoOcc R101-DCN 42.13 11.67 50.48 29.64 49.44 55.52 23.29 33.26 30.55 30.99 34.43 42.57 83.31 44.23 54.40 56.04 45.94 40.40
BEVDet† Swin-B 42.02 12.15 49.63 25.10 52.02 54.46 27.87 27.99 28.94 27.23 36.43 42.22 82.31 43.29 54.62 57.90 48.61 43.55
Baseline (ours) Swin-B 44.14 13.39 52.20 31.43 52.01 56.70 30.66 32.95 31.56 31.31 39.87 44.64 82.98 44.97 55.43 58.90 48.43 42.99
RadOcc (ours) Swin-B 46.06 9.78 54.93 20.44 55.24 59.62 30.48 28.94 44.66 28.04 45.69 48.05 81.41 39.80 52.78 56.16 64.45 62.64
Teacher (ours) Swin-B 49.38 10.93 58.23 25.01 57.89 62.85 34.04 33.45 50.07 32.05 48.87 52.11 82.9 42.73 55.27 58.34 68.64 66.01
Performances on 3D Occupancy Prediction Challenge
BEVFormer R101-DCN 23.70 10.24 36.77 11.70 29.87 38.92 10.29 22.05 16.21 14.69 27.44 33.13 48.19 33.10 29.80 17.64 19.01 13.75
SurroundOcc† R101-DCN 42.26 11.7 50.55 32.09 41.59 57.38 27.93 38.08 30.56 29.32 48.29 38.72 80.21 48.56 53.20 47.56 46.55 36.14
BEVDet† Swin-B 42.83 18.66 49.82 31.79 41.90 56.52 26.74 37.31 30.01 31.33 48.18 38.59 80.95 50.59 53.87 49.67 46.62 35.62
PanoOcc-T⋆ Intern-XL 47.16 23.37 50.28 36.02 47.32 59.61 31.58 39.59 34.58 33.83 52.25 43.29 83.82 55.81 59.41 53.81 53.48 43.61
Baseline-T (ours) Swin-B 47.74 22.88 50.74 41.02 49.39 55.40 33.41 45.71 38.57 35.79 48.94 44.40 83.19 52.26 59.09 55.83 51.35 43.54
RadOcc-T (ours) Swin-B 49.98 21.13 55.17 39.31 48.99 59.92 33.99 46.31 43.26 39.29 52.88 44.85 83.72 53.93 59.17 55.62 60.53 51.55
Teacher-T (ours) Swin-B 55.09 25.94 59.04 44.93 57.95 63.70 38.89 52.03 53.21 42.16 59.90 50.45 84.79 55.70 60.83 58.02 67.66 61.40

Rendered semantic consistency. Besides simply using the KL divergence to align the semantic logits, we also leverage the strengths of vision foundation models (VFMs) (Kirillov et al. 2023) to perform a segment-guided affinity distillation (SAD). Specifically, we first employ the VFM to over-segment patches using the original view images as input, as illustrated in Figure 4. With the rendered semantic features from both the teacher and student networks, i.e., $S^T, S^S \in \mathbb{R}^{H \times W \times C}$, we can divide the rendered semantics into several groups based on the indices of the aforementioned patches. After that, an average pooling function is applied within each group, extracting teacher and student semantic embeddings, i.e., $E^T \in \mathbb{R}^{M \times C}$ and $E^S \in \mathbb{R}^{M \times C}$. Here, $M$ is the number of patches generated by the VFM. Inspired by but different from the previous work (Hou et al. 2022), we calculate an affinity matrix $C^{(\cdot)}$ according to the above segments for the further distillation:

$$C_{i,j,r} = \frac{E(i, r)\, E(j, r)}{\|E(i)\|_2\, \|E(j)\|_2}. \quad (6)$$

The affinity score captures the similarity of each pair of segment embeddings, and it can be taken as high-level structural knowledge to be learned by the student.
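A minimal numpy sketch of the segment pooling and the affinity matrix of Eqn. (6); the function names, toy segment layout, and the squared-difference comparison are illustrative, not the paper's implementation:

```python
import numpy as np

def segment_embeddings(sem, seg_ids, num_segments):
    """Average-pool rendered semantics (H*W, C) within each VFM segment -> (M, C)."""
    emb = np.zeros((num_segments, sem.shape[1]))
    for m in range(num_segments):
        emb[m] = sem[seg_ids == m].mean(axis=0)
    return emb

def affinity(emb):
    """Eqn. (6): channel-wise product of segment embeddings, scaled by their norms."""
    norms = np.linalg.norm(emb, axis=1)              # ||E(i)||_2, shape (M,)
    scale = norms[:, None] * norms[None, :]          # (M, M)
    return emb[:, None, :] * emb[None, :, :] / scale[:, :, None]   # (M, M, C)

def sad_loss(emb_t, emb_s):
    """Squared difference between teacher/student affinity matrices, 1/(C*M^2) normalized."""
    c_t, c_s = affinity(emb_t), affinity(emb_s)
    m, c = emb_t.shape
    return np.sum((c_t - c_s) ** 2) / (c * m ** 2)

# Toy data: 100 rendered pixels, 4 semantic channels, 5 equal-sized segments.
rng = np.random.default_rng(1)
sem = rng.random((100, 4))
seg_ids = np.repeat(np.arange(5), 20)
emb = segment_embeddings(sem, seg_ids, 5)
A = affinity(emb)
```

Note the self-affinity of each segment sums to one across channels (sum over r of E(i,r)²/||E(i)||² = 1), so the matrix encodes relative inter-segment similarity rather than magnitude.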
After that, the final RSC loss is a linear combination of the affinity distillation loss and the KL divergence between the rendered semantics:

$$\mathcal{L}_{sad} = \sum_{r=1}^{C}\sum_{i=1}^{M}\sum_{j=1}^{M} \big\|C^T_{i,j,r} - C^S_{i,j,r}\big\|_2^2, \quad (7)$$
$$\mathcal{L}_{rsc} = \mathcal{L}_{sad}/CM^2 + \omega\, D_{KL}\big(S^T \,\|\, S^S\big), \quad (8)$$

where $C^T$ and $C^S$ are the affinity matrices of the teacher and student networks, and $\omega$ is a hyperparameter in our experiments.

Experiments
Dataset and Metrics
Dataset. We evaluate our proposed method on nuScenes (Caesar et al. 2020) for sparse prediction and Occ3D (Tian et al. 2023) for dense prediction. The data descriptions are provided in the supplementary material.
Evaluation metrics. Our study presents an independent evaluation of the model's performance on both dense and sparse prediction tasks. Specifically, for dense prediction, we conduct experiments on the Occ3D dataset, which quantifies the mean Intersection over Union (mIoU) for 17 semantic categories within the camera's visible region. On the other hand, for sparse prediction, we train the model with single-sweep LiDAR and assess the model's performance on the nuScenes-lidarseg benchmark, which measures the mIoU for 16 semantic categories, with the 'others' category being treated as 'ignored'.

Experimental Settings
Implementation. For dense prediction, we follow the setting of BEVDet (Huang et al. 2021) and use a Swin Transformer (Liu et al. 2021) as the image backbone. We adopt the semantic scene completion module proposed in (Yan et al. 2021) as our occupancy decoder, which contains several 3D convolutional blocks to learn a local geometry representation. Afterward, the features from different blocks are concatenated to aggregate information. Finally, a linear projection is utilized to map the feature into C + 1 dimensions. Given the challenging nature of the Occ3D test benchmark, we utilize 8 historical frames for temporal encoding and use 3 frames on the validation set. For sparse prediction, we use the previous art TPVFormer (Huang et al. 2023) as our baseline. The rendering size of the network is configured to 384 × 704. To speed up the rendering and reduce memory usage, we randomly sample 80,000 rays during each step.

Table 2: LiDAR semantic segmentation results on the nuScenes test benchmark. † denotes performance reproduced by official codes. Our method achieves state-of-the-art performance among camera-based methods. BL denotes the baseline method.
Columns: Method, Input Modality, Image Backbone, mIoU, then per-class IoU: barrier, bicycle, bus, car, const. veh., motorcycle, pedestrian, traffic cone, trailer, truck, drive. suf., other flat, sidewalk, terrain, manmade, vegetation.
PolarNet LiDAR – 69.4 72.2 16.8 77.0 86.5 51.1 69.7 64.8 54.1 69.7 63.5 96.6 67.1 77.7 72.1 87.1 84.5
Cylinder3D LiDAR – 77.2 82.8 29.8 84.3 89.4 63.0 79.3 77.2 73.4 84.6 69.1 97.7 70.2 80.3 75.5 90.4 87.6
2DPASS LiDAR – 80.8 81.7 55.3 92.0 91.8 73.3 86.5 78.5 72.5 84.7 75.5 97.6 69.1 79.9 75.5 90.2 88.0
TPVFormer Camera R50-DCN 59.2 65.6 15.7 75.1 80.0 48.8 43.1 44.3 26.8 72.8 55.9 92.3 53.7 61.0 59.2 79.7 75.6
BEVDet† Camera Swin-B 65.2 31.3 63.9 74.6 79.1 51.5 59.8 63.4 56.2 74.7 59.8 92.8 61.4 69.5 65.7 84.1 82.9
TPVFormer (BL) Camera R101-DCN 69.4 74.0 27.5 86.3 85.5 60.7 68.0 62.1 49.1 81.9 68.4 94.1 59.5 66.5 63.5 83.8 79.9
RadOcc (ours) Camera R101-DCN 71.8 49.1 34.2 84.5 85.8 59.2 70.3 71.4 62.5 79.7 69.0 95.4 66.2 75.1 72.0 87.4 86.0
Teacher (ours) Cam+Li R101-DCN 75.2 62.7 33.2 88.7 88.8 64.6 78.1 74.1 65.0 83.1 72.2 96.5 68.3 77.6 74.4 88.7 87.1

Figure 5: Qualitative results on Occ3D and nuScenes validation sets. RadOcc takes multi-view images as input and produces voxel predictions. More visualization comparisons can be found in the supplementary materials.

Results and Analysis
Dense Prediction.
To evaluate the performance of dense 3D occupancy prediction, we compare our proposed method with current state-of-the-art approaches on the Occ3D dataset (Tian et al. 2023), including the validation set and the online benchmark. The upper part of Table 1 presents the validation set results, where all methods are trained for 24 epochs. Specifically, we compare our approach with MonoScene (Cao and de Charette 2022), BEVFormer (Li et al. 2022b), CTF-Occ (Tian et al. 2023) and PanoOcc (Wang et al. 2023b), all of which employ ResNet101-DCN (Dai et al. 2017) initialized from the FCOS3D (Wang et al. 2021) checkpoint as the image backbone. Additionally, we report the results of BEVDet (Huang et al. 2021), which uses the same image backbone as ours. Our baseline model, trained from scratch, already outperforms prior state-of-the-art methods; by further leveraging our proposed distillation strategy, we achieve significantly better occupancy results in terms of mIoU. The lower part of Table 1 presents the results on the 3D occupancy prediction challenge, where our proposed method achieves state-of-the-art performance and outperforms all previously published approaches by a large margin. Note that although PanoOcc (Wang et al. 2023b) adopts a stronger image backbone, i.e., InternImage-XL (Wang et al. 2022a), its results are still lower than ours, especially for foreground objects of a challenging nature. The visualization results for both dense and sparse prediction are shown in Figure 5. More visualization results can be found in the supplementary material.

Table 3: Comparison for knowledge distillation. The results are obtained on Occ3D. To speed up the evaluation, we take BEVDet (Huang et al. 2021) with a ResNet50 image backbone as our baseline. †: Since there is no object-level prediction, we replace the sparse distillation of BEVDistill (Chen et al. 2022) with logits distillation.

Method | Consistency | mIoU | Gains
BEVDet (baseline) | - | 36.10 | -
Hinton et al. | Prob. | 37.00 | +0.90
Hinton et al. | Feature | 35.89 | -0.21
BEVDistill† | Prob. + Feature | 35.95 | -0.15
RadOcc (ours) | Render | 37.98 | +1.88
RadOcc (ours) | Prob. + Render | 38.53 | +2.43

Sparse Prediction. To evaluate the effectiveness of the model under sparse LiDAR supervision, we evaluate the performance of our proposed RadOcc model on the nuScenes LiDAR semantic segmentation benchmark. Our results, as shown in Table 2, demonstrate a significant improvement over the baseline TPVFormer (Huang et al. 2023) and outperform previous camera-based occupancy networks such as BEVDet (Huang et al. 2021). Surprisingly, our method even achieves comparable performance with some LiDAR-based semantic segmentation methods (Zhang et al. 2020; Zhou et al. 2020). It should be noted that since we use voxelized single-sweep LiDAR as supervision, where geometric details may be lost during voxelization, the multi-modal teacher network may not achieve comparable performance with state-of-the-art LiDAR-based methods (Yan et al. 2022a).

Comparison for knowledge distillation. To further validate the efficacy of our proposed methodology over previous teacher-student architectures, we conduct a comparative analysis of RadOcc against conventional knowledge transfer techniques, as presented in Table 3. To facilitate experimentation, we choose BEVDet (Huang et al. 2021) with a ResNet50 image backbone as our baseline, and all methods are trained with the same strategies for a fair comparison. The results in the table indicate that the direct application of feature and logits alignment (Hinton, Vinyals, and Dean 2015; Chen et al. 2022) fails to achieve a significant boost over the baseline model, particularly for the former, which results in negative transfer. Notably, leveraging rendering-assisted distillation leads to a substantial improvement of 2% in mIoU.
Furthermore, when logit distillation is additionally applied, the model can still enhance the mIoU by 0.6%.

Ablation study. We conduct an ablation study of rendering distillation in Table 4. Here, BEVDet with a ResNet50 image backbone is selected as our baseline model. Model A directly conducts alignment through the Scale-Invariant Logarithmic loss (Eigen, Puhrsch, and Fergus 2014) on rendered depth maps but fails to improve the performance. In contrast, Model B aligns the latent distribution of depth rendering and achieves an improvement of 0.7% in mIoU. On the other hand, Model C demonstrates the results solely using segment-guided affinity distillation (SAD) on rendered semantics, which increases the mIoU by 1.0%. Applying an additional KL divergence between the two rendered semantics boosts the performance to 37.42%. Finally, when we combine the RDC and RSC losses, the model achieves the best result.

Table 4: Ablation study on Occ3D. We use BEVDet with ResNet50 image backbone as our baseline. Here, RDC and RSC are the rendered depth and semantic consistency losses. RDC (-) denotes directly aligning the rendered depth map with the Scale-Invariant Logarithmic loss.

Method | RDC(-) | RDC | SAD | RSC | mIoU
BEVDet | | | | | 36.10
Model A | ✓ | | | | 35.08
Model B | | ✓ | | | 36.76
Model C | | | ✓ | | 37.13
Model D | | | | ✓ | 37.42
RadOcc (ours) | | ✓ | | ✓ | 37.98

Table 5: Design analysis of SAD. We replace the segment extraction strategy with other designs.

Method | Segment | mIoU | Gains
BEVDet w/ RSC | SAM | 37.42 | -
Model E | Super Pixel | 37.05 | -0.37

In Table 5, we analyze the design of SAD by replacing its segment source with other implementations in Model E. Specifically, when we use super-pixels (Achanta et al. 2012), the performance decreases by about 0.37%.

Conclusion

In this paper, we present RadOcc, a novel cross-modal knowledge distillation paradigm for 3D occupancy prediction. It leverages a multi-modal teacher model to provide geometric and semantic guidance to a visual student model via differentiable volume rendering.
Moreover, we propose two new consistency criteria, a depth consistency loss and a semantic consistency loss, to align the ray distributions and affinity matrices between the teacher and student models. Extensive experiments on the Occ3D and nuScenes datasets show that RadOcc can significantly improve the performance of various 3D occupancy prediction methods. Our method achieves state-of-the-art results on the Occ3D challenge benchmark and outperforms existing published methods by a large margin. We believe that our work opens up new possibilities for cross-modal learning in scene understanding.

Acknowledgments

This work was supported by NSFC with Grant No. 62293482, by the Basic Research Project No. HZQBKCZYZ-2021067 of Hetao Shenzhen HK S&T Cooperation Zone, by Shenzhen General Program No. JCYJ20220530143600001, by Shenzhen-Hong Kong Joint Funding No. SGDX20211123112401002, by Shenzhen Outstanding Talents Training Fund, by Guangdong Research Project No. 2017ZT07X152 and No. 2019CX01X104, by the Guangdong Provincial Key Laboratory of Future Networks of Intelligence (Grant No. 2022B1212010001), by the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen, by the NSFC 61931024&81922046&61902335, by the Shenzhen Key Laboratory of Big Data and Artificial Intelligence (Grant No. ZDSYS201707251409055), and the Key Area R&D Program of Guangdong Province with grant No. 2018B03033800, by Tencent&Huawei Open Fund.

References

Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; and Süsstrunk, S. 2012. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE transactions on pattern analysis and machine intelligence, 34(11): 2274–2282. Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2020. nuScenes: A multimodal dataset for autonomous driving.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11621–11631. Cao, A.-Q.; and de Charette, R. 2022. Monoscene: Monocular 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3991–4001. Chen, Z.; Li, Z.; Zhang, S.; Fang, L.; Jiang, Q.; and Zhao, F. 2022. Bevdistill: Cross-modal bev distillation for multi-view 3d object detection. arXiv preprint arXiv:2211.09386. Chong, Z.; Ma, X.; Zhang, H.; Yue, Y.; Li, H.; Wang, Z.; and Ouyang, W. 2022. Monodistill: Learning spatial features for monocular 3d object detection. arXiv preprint arXiv:2201.10830. Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, 764–773. Dai, X.; Jiang, Z.; Wu, Z.; Bao, Y.; Wang, Z.; Liu, S.; and Zhou, E. 2021. General instance distillation for object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7842–7851. Eigen, D.; Puhrsch, C.; and Fergus, R. 2014. Depth map prediction from a single image using a multi-scale deep network. Advances in neural information processing systems. Gan, W.; Mo, N.; Xu, H.; and Yokoya, N. 2023. A Simple Attempt for 3D Occupancy Estimation in Autonomous Driving. arXiv preprint arXiv:2303.10076. Guo, J.; Han, K.; Wang, Y.; Wu, H.; Chen, X.; Xu, C.; and Xu, C. 2021. Distilling object detectors via decoupled features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2154–2164. Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Hong, Y.; Dai, H.; and Ding, Y. 2022. Cross-modality knowledge distillation network for monocular 3d object detection. In European Conference on Computer Vision, 87–104. Springer. Hou, Y.; Ma, Z.; Liu, C.; Hui, T.-W.; and Loy, C. C. 2020.
Inter-region affinity distillation for road marking segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12486–12495. Hou, Y.; Zhu, X.; Ma, Y.; Loy, C. C.; and Li, Y. 2022. Point-to-voxel knowledge distillation for lidar semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8479–8488. Huang, J.; Huang, G.; Zhu, Z.; Ye, Y.; and Du, D. 2021. Bevdet: High-performance multi-camera 3d object detection in bird-eye-view. arXiv preprint arXiv:2112.11790. Huang, Y.; Zheng, W.; Zhang, Y.; Zhou, J.; and Lu, J. 2023. Tri-perspective view for vision-based 3d semantic occupancy prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9223–9232. Jiang, Y.; Zhang, L.; Miao, Z.; Zhu, X.; Gao, J.; Hu, W.; and Jiang, Y.-G. 2023. Polarformer: Multi-camera 3d object detection with polar transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 1042–1050. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.-Y.; et al. 2023. Segment anything. arXiv preprint arXiv:2304.02643. Lang, A. H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; and Beijbom, O. 2019. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 12697–12705. Li, Y.; Bao, H.; Ge, Z.; Yang, J.; Sun, J.; and Li, Z. 2022a. Bevstereo: Enhancing depth estimation in multi-view 3d object detection with dynamic temporal stereo. arXiv preprint arXiv:2209.10248. Li, Y.; Ge, Z.; Yu, G.; Yang, J.; Wang, Z.; Shi, Y.; Sun, J.; and Li, Z. 2023. Bevdepth: Acquisition of reliable depth for multi-view 3d object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 1477–1485. Li, Z.; Wang, W.; Li, H.; Xie, E.; Sima, C.; Lu, T.; Qiao, Y.; and Dai, J. 2022b.
Bevformer: Learning bird’s-eye-view representation from multi-camera images via spatiotemporal transformers. In European conference on computer vision, 1–18. Springer. Liu, Y.; Chen, K.; Liu, C.; Qin, Z.; Luo, Z.; and Wang, J. 2019. Structured knowledge distillation for semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2604–2613. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, 10012–10022. Liu, Z.; Tang, H.; Amini, A.; Yang, X.; Mao, H.; Rus, D. L.; and Han, S. 2023. Bevfusion: Multi-task multi-sensor fusion with unified bird’s-eye view representation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), 2774–2781. IEEE. Lu, J.; Zhou, Z.; Zhu, X.; Xu, H.; and Zhang, L. 2022. Learning ego 3d representation as ray tracing. In European Conference on Computer Vision, 129–144. Springer. Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2021. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1): 99–106. Philion, J.; and Fidler, S. 2020. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16, 194–210. Springer. Shi, S.; Wang, Z.; Shi, J.; Wang, X.; and Li, H. 2020. From points to parts: 3d object detection from point cloud with part-aware and part-aggregation network. IEEE transactions on pattern analysis and machine intelligence, 43(8): 2647–2664. Song, S.; Yu, F.; Zeng, A.; Chang, A. X.; Savva, M.; and Funkhouser, T. 2017. Semantic scene completion from a single depth image.
In Proceedings of the IEEE conference on computer vision and pattern recognition, 1746–1754. Tian, X.; Jiang, T.; Yun, L.; Wang, Y.; Wang, Y.; and Zhao, H. 2023. Occ3d: A large-scale 3d occupancy prediction benchmark for autonomous driving. arXiv preprint arXiv:2304.14365. Tong, W.; Sima, C.; Wang, T.; Wu, S.; Deng, H.; Chen, L.; Gu, Y.; Lu, L.; Luo, P.; Lin, D.; et al. 2023. Scene as Occupancy. arXiv preprint arXiv:2306.02851. Wang, T.; Zhu, X.; Pang, J.; and Lin, D. 2021. Fcos3d: Fully convolutional one-stage monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 913–922. Wang, W.; Dai, J.; Chen, Z.; Huang, Z.; Li, Z.; Zhu, X.; Hu, X.; Lu, T.; Lu, L.; Li, H.; et al. 2022a. InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions. arXiv preprint arXiv:2211.05778. Wang, X.; Zhu, Z.; Xu, W.; Zhang, Y.; Wei, Y.; Chi, X.; Ye, Y.; Du, D.; Lu, J.; and Wang, X. 2023a. Openoccupancy: A large scale benchmark for surrounding semantic occupancy perception. arXiv preprint arXiv:2303.03991. Wang, Y.; Chen, Y.; Liao, X.; Fan, L.; and Zhang, Z. 2023b. PanoOcc: Unified Occupancy Representation for Camera-based 3D Panoptic Segmentation. arXiv preprint arXiv:2306.10013. Wang, Y.; Guizilini, V. C.; Zhang, T.; Wang, Y.; Zhao, H.; and Solomon, J. 2022b. Detr3d: 3d object detection from multi-view images via 3d-to-2d queries. In Conference on Robot Learning, 180–191. PMLR. Wei, Y.; Zhao, L.; Zheng, W.; Zhu, Z.; Zhou, J.; and Lu, J. 2023. Surroundocc: Multi-camera 3d occupancy prediction for autonomous driving. arXiv preprint arXiv:2303.09551. Yan, X.; Gao, J.; Li, J.; Zhang, R.; Li, Z.; Huang, R.; and Cui, S. 2021. Sparse single sweep lidar point cloud segmentation via learning contextual shape priors from scene completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 3101–3109. Yan, X.; Gao, J.; Zheng, C.; Zheng, C.; Zhang, R.; Cui, S.; and Li, Z. 2022a. 
2dpass: 2d priors assisted semantic segmentation on lidar point clouds. In European Conference on Computer Vision, 677–695. Springer. Yan, X.; Zhan, H.; Zheng, C.; Gao, J.; Zhang, R.; Cui, S.; and Li, Z. 2022b. Let images give you more: Point cloud cross-modal training for shape analysis. Advances in Neural Information Processing Systems, 35: 32398–32411. Yan, Y.; Mao, Y.; and Li, B. 2018. Second: Sparsely embedded convolutional detection. Sensors, 18(10): 3337. Yuan, Z.; Yan, X.; Liao, Y.; Guo, Y.; Li, G.; Cui, S.; and Li, Z. 2022. X-trans2cap: Cross-modal knowledge transfer using transformer for 3d dense captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8563–8573. Zhang, L.; and Ma, K. 2020. Improve object detection with feature-based knowledge distillation: Towards accurate and efficient detectors. In International Conference on Learning Representations. Zhang, Y.; Zhou, Z.; David, P.; Yue, X.; Xi, Z.; Gong, B.; and Foroosh, H. 2020. Polarnet: An improved grid representation for online lidar point clouds semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9601–9610. Zhao, W.; Zhang, H.; Zheng, C.; Yan, X.; Cui, S.; and Li, Z. 2023. CPU: Codebook Lookup Transformer with Knowledge Distillation for Point Cloud Upsampling. In Proceedings of the 31st ACM International Conference on Multimedia, 3917–3925. Zhou, H.; Zhu, X.; Song, X.; Ma, Y.; Wang, Z.; Li, H.; and Lin, D. 2020. Cylinder3d: An effective 3d framework for driving-scene lidar semantic segmentation. arXiv preprint arXiv:2008.01550. Zhou, S.; Liu, W.; Hu, C.; Zhou, S.; and Ma, C. 2023a. UniDistill: A Universal Cross-Modality Knowledge Distillation Framework for 3D Object Detection in Bird’s-Eye View. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5116–5125. Zhou, W.; Yan, X.; Liao, Y.; Lin, Y.; Huang, J.; Zhao, G.; Cui, S.; and Li, Z. 2023b. 
BEV@DC: Bird’s-Eye View Assisted Training for Depth Completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9233–9242.
GSDD: Generative Space Dataset Distillation for Image Super-resolution

Haiyu Zhang1, Shaolin Su1, Yu Zhu1*, Jinqiu Sun2, Yanning Zhang1
1School of Computer Science, Northwestern Polytechnical University
2School of Astronautics, Northwestern Polytechnical University
zhanghaiyu@mail.nwpu.edu.cn

Abstract

Single image super-resolution (SISR), especially in the real world, usually relies on a large number of LR-HR image pairs to learn representations that contain rich textural and structural information. However, relying on massive data for model training not only reduces training efficiency, but also causes heavy data storage burdens. In this paper, we attempt a pioneering study on dataset distillation (DD) for SISR problems to explore how data could be slimmed and compressed for the task. Unlike previous coreset selection methods, which select a few typical examples directly from the original data, we remove the limitation that the selected data cannot be further edited, and propose to synthesize and optimize samples to preserve more task-useful representations. Concretely, by utilizing pre-trained GANs as a suitable approximation of the realistic data distribution, we propose GSDD, which distills data in a latent generative space based on GAN-inversion techniques. By optimizing the latent vectors to match the practical data distribution in an informative feature space, the distilled data can then be synthesized. Experimental results demonstrate that when trained with our distilled data, GSDD achieves comparable performance to state-of-the-art (SOTA) SISR algorithms, while realizing a nearly ×8 increase in training efficiency and a saving of almost 93.2% of the data storage space. Further experiments on challenging real-world data also demonstrate the promising generalization ability of GSDD. Code is available at https://github.com/scok30/vit-cil.

Introduction

Single image super-resolution (SISR) refers to recovering the high-resolution (HR) image from its low-resolution (LR) counterpart.
With the increasing pursuit of and preference for high-definition images, improving the performance of current SISR algorithms is urgently demanded. To this end, people attempt to build a massive number of LR-HR image pairs to train SR models that cover the rich textural and structural information existing in real scenarios. However, one of the most prominent problems that emerges accordingly is that this success is mainly derived at the cost of the massive data, which consumes huge computational and storage resources for model implementation. A feasible way to reduce the cost is to compact the data while preserving sufficient information for deep models. Thus, it is natural to think of adopting coreset selection (Phillips 2017) methods. By selecting a subset of data from the whole training data, models are expected to perform competitively with those trained on the whole population of data.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: A demonstration of the corresponding distilled images and SR results for two example categories (Animal-tiger and Building-church), with SSIM↑/LPIPS↓ reported for RealSR, Real-ESRGAN, and GSDD against the ground truth. Two advanced SISR models trained on more than 10,324 original images (yellow part) are compared. The proposed GSDD (based on the Real-ESRGAN (Wang et al. 2021) generator) was trained on only 700 distilled images (red part), but achieved results approximating the SOTA. The arrow here indicates comparable performance. Please zoom in for best view.
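The coreset-selection idea just described can be made concrete with a minimal sketch. The greedy k-center criterion below is one generic way to pick a representative subset; it is purely illustrative (the function name and the toy feature data are ours, not part of GSDD or of the cited coreset literature):

```python
import numpy as np

def kcenter_coreset(features, k, seed=0):
    """Greedy k-center selection: repeatedly add the sample farthest from
    the already-selected subset, so the coreset covers the feature space."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(features)))]
    # distance of every sample to its nearest selected center
    dist = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())  # farthest-point heuristic
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected

# toy demo: keep a coreset of 10 out of 200 feature vectors
feats = np.random.default_rng(1).normal(size=(200, 16))
idx = kcenter_coreset(feats, k=10)
```

Note that the output is only a set of indices into the original data: the selected samples themselves are left untouched, which is precisely the "cannot be further edited" limitation that dataset distillation removes.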
Despite the straightforward methodology, coreset selection has two drawbacks: 1) most coreset selection methods are developed as upstream tasks, so they do not guarantee an optimal solution for downstream tasks (e.g., image restoration); 2) the method simply selects samples from the dataset, so the most informative data representation cannot be reached (Zhao and Bilen 2021). To overcome the above limitations, dataset distillation¹ (DD) was proposed to synthesize a small set of compact data that is more informative than a coreset (Wang et al. 2018a). This process can also be interpreted as learning a small set of representative images from a large amount of training data, while achieving similar generalization ability but higher training efficiency than models trained on the original data. Unlike coreset selection, the synthetic data are directly optimized for downstream tasks, thus leading to better testing performance (Zhao, Mopuri, and Bilen 2020).

¹Dataset distillation is also referred to as dataset condensation in some literature (Zhao and Bilen 2021; Zhao, Mopuri, and Bilen 2020; Kim et al. 2022). The two are essentially the same.

Therefore, in this paper, to alleviate the massive data dependency of current SR models, we implement a pioneering study on DD for solving SISR problems by optimizing a compact image set in a latent generative space that is best suited to training SISR models. Concretely, as pre-trained GANs (Goodfellow et al. 2020) can be interpreted as an approximation of the complicated image distribution, we utilize this preferable property to realize the distillation operations. By distilling data in a latent generative space based on GAN-inversion (Xia et al. 2023) techniques, we obtain a small set of compact but informative data, and models trained on it achieve generalization ability comparable to models trained on the original large-scale dataset.
We also propose a regularization term R on the basis of the distribution matching loss to strengthen the informative representation of the latent vectors during the distillation process. From Figure 1, it can be observed that GSDD obtains performance similar to Real-ESRGAN (Wang et al. 2021) (trained on more than 10,324 images), both quantitatively and qualitatively, by training only on the distilled images, which amount to less than 7% of the original data. In summary, the contributions of this paper are as follows:
⋆ We propose to optimize and distill current SR datasets to form a compact and informative representation. To the best of our knowledge, this is the first attempt to explore the possibility of applying DD techniques to SISR research.
⋆ We establish both a GAN-inversion based scheme and a regularization term R for the data optimization and distillation operations. Our approach ensures the distilled data match the realistic data distribution while preserving representative information for fulfilling SISR tasks.
⋆ Extensive experimental results validate our effectiveness. We achieve comparable performance to SOTA SISR algorithms while obtaining a nearly ×8 increase in training efficiency and a saving of almost 93.2% of data storage space. Furthermore, GSDD generalizes well even to challenging real-world scene images.

Related Works

Dataset Distillation (DD)

Compared to common knowledge distillation (also known as model distillation (Hinton, Vinyals, and Dean 2015)), DD (Wang et al. 2018a) distills the dataset rather than the model. Specifically, it synthesizes a small portion of data so that a model trained on these data maintains the performance of a model trained on the full dataset (Cazenavette et al. 2022). Over the past few years, it has drawn increasing attention in the field of machine learning (ML) (Nguyen, Chen, and Lee 2020; Nguyen et al.
2021; Zhao and Bilen 2021; Zhao, Mopuri, and Bilen 2020), including various applications such as continual learning (CL) (Rebuffi et al. 2017; Deng and Russakovsky 2022), neural architecture search (NAS) (Such et al. 2020; Cui et al. 2022), federated learning (FL) (Liu, Yu, and Zhou 2022; Hu et al. 2022) and privacy-preserving ML (Dong, Zhao, and Lyu 2022), etc. Different from traditional data compression, DD aims to generate a small-scale synthetic dataset that preserves adequate task-useful information, so that the model trained on it can generalize well to other unseen data.

Single Image Super-Resolution (SISR)

In recent years, intensive research on deep neural networks (DNNs) has dramatically boosted the performance of many SISR models, achieving SOTA results on various benchmarks. More specifically, from early approaches based on convolutional neural networks (CNNs) (e.g., SRCNN (Dong et al. 2014, 2016), VDSR (Kim, Lee, and Lee 2016), EDSR (Lim et al. 2017), etc.) to more recent promising methods using generative adversarial networks (GANs) (Goodfellow et al. 2020) (e.g., SRGAN (Ledig et al. 2017), ESRGAN (Wang et al. 2018c), RankSRGAN (Zhang et al. 2019), SFTGAN (Wang et al. 2018b), etc.), various deep learning strategies have been applied to this field. Meanwhile, datasets dedicated to SR models have also proliferated. For example, DIV2K (800) (Agustsson and Timofte 2017) and Flickr2K (2,650) (Timofte et al. 2017) are often utilized for model training. In addition, merging multiple datasets for SR training is also a popular trend, e.g., combining DIV2K and Flickr2K into DF2K (3,450) (Lim et al. 2017; Haris, Shakhnarovich, and Ukita 2021), combining DF2K and OST (Wang et al. 2018b) into DF2K OST (13,744) (Wang et al. 2021), and so on.

Remarks

In order to learn representations that contain sufficiently detailed information, most current SISR models depend on datasets containing a large number of data samples.
As a result, the burden of training efficiency and data storage becomes a non-negligible issue. Although there are studies on DD and its applications in high-level computer vision areas, little attention has been paid to low-level image restoration tasks, especially in the image SR community. To promote the training efficiency as well as maintain the generalization capability of current SISR models, we propose GSDD, which explores DD techniques based on GAN-inversion operations for solving SISR problems. Our underlying hypothesis is that since existing GANs can fit the large-scale image space well and provide rich data information, conducting DD in the latent space of pre-trained GAN models can be regarded as an efficient and convenient solution. Since we are the first to apply the DD ideology to SISR tasks, we expect this to be a pioneering study that potentially facilitates practical applications under model and data constraints.

Proposed Method

Figure 2 illustrates our framework, which consists of a data distillation phase and a model retraining & inference phase. Specifically, phase one distills and optimizes the latent vector set Z by minimizing a distribution matching loss between the training set T and the synthetic set S. A pre-trained SISR generator combined with GAN-inversion operations is utilized to synthesize the small-scale set S, which serves as the distilled dataset. In phase two, the distilled image pairs are used to retrain existing SISR models from scratch and then test their restoration and generalization performances.
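Before formalizing the two phases, the overall protocol can be caricatured end to end with a deliberately tiny stand-in: ridge regression replaces the SISR network, and a teacher-relabeled pseudo-sample set replaces the actual latent-space distillation (every name and number here is illustrative, not GSDD itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression, standing in for 'training a model'."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# full training set T: 1000 noisy samples from a linear 'ground truth'
w_true = rng.normal(size=8)
X_T = rng.normal(size=(1000, 8))
y_T = X_T @ w_true + 0.01 * rng.normal(size=1000)

# phase one (crude stand-in): build a tiny synthetic set S whose labels
# carry the knowledge of the model trained on T
w_T = fit_ridge(X_T, y_T)
X_S = rng.normal(size=(20, 8))
y_S = X_S @ w_T

# phase two: retrain from scratch on S only, then test generalization
w_S = fit_ridge(X_S, y_S)
X_test = rng.normal(size=(200, 8))
err_T = float(np.mean((X_test @ w_T - X_test @ w_true) ** 2))
err_S = float(np.mean((X_test @ w_S - X_test @ w_true) ** 2))
```

Despite retraining on 2% of the sample count, the phase-two model generalizes almost as well as the full-data model, which is the behavior the real distillation phase is designed to produce for SISR networks.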
Figure 2: The flow diagram of the proposed GSDD model. (Per the diagram: phase one optimizes the latent vectors by distribution matching between features of the original and generated data, using the super-resolution generator, a feature extractor, and differentiable siamese augmentation (DSA); phase two performs SISR network retraining and inference on the distilled ×4 HR-LR pairs.)

Problem Formulation

Given a large training dataset T = {x_i, y_i}_{i=1}^{m}, DD aims at extracting the knowledge of T into a small synthetic dataset S = {s_j, y_j}_{j=1}^{n} (n ≪ m), so that the model θ_S trained on S can achieve comparable performance to the model θ_T trained on T (Lei and Tao 2024). This process can be formulated as Equation 1:

\mathbb{E}_{x \sim P_{\mathcal{D}}}[\ell(\phi_{\theta_T}(x), y)] \simeq \mathbb{E}_{x \sim P_{\mathcal{D}}}[\ell(\phi_{\theta_S}(x), y)] (1)

Where x is the input data, y is the label, and P_{\mathcal{D}} represents the real data distribution. \phi_{\theta_T} and \phi_{\theta_S} denote DNNs parameterized by θ_T and θ_S, respectively.

GAN-Inversion

Given a target image, GAN-inversion aims to map it back into the latent space of a pre-trained GAN model, so that the target image can be faithfully reconstructed from the feed-forward inverted code (Xia et al. 2023). In our practical approach, we employ the optimization-based (Creswell and Bharath 2019; Abdal, Qin, and Wonka 2019, 2020) inversion method, because it learns the latent vector of each input image separately. For a pre-trained generator G, the latent vector for a real image is optimized by Equation 2:

\hat{z} = \arg\min_{z} \ell(G(z) - x) (2)

Where \ell calculates the loss in feature or pixel space, z denotes the latent vector, and x represents the target image. Since GAN-inversion focuses on manipulating the visual effects of a specific image, the learned synthetic images are not guaranteed to be informative enough for training a well-behaved DNN (Xia et al. 2023).
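The optimization in Equation 2 can be sketched numerically; the tanh "generator" below is only a toy stand-in for a real pre-trained SR generator, and the update rule is plain gradient descent on the pixel loss (nothing here is the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for the pre-trained generator G: a fixed linear map + tanh
W = rng.normal(size=(4, 16)) / 4.0
def G(z):
    return np.tanh(z @ W)

# target image x, produced from a hidden latent so inversion is feasible
z_true = rng.normal(size=4)
x = G(z_true)

# Equation 2: z_hat = argmin_z ||G(z) - x||^2, solved by gradient descent
z = np.zeros(4)
for _ in range(2000):
    out = G(z)
    # chain rule through tanh: d||out - x||^2 / dz = 2 (out - x)(1 - out^2) W^T
    z -= 0.1 * (2.0 * ((out - x) * (1.0 - out ** 2)) @ W.T)

recon_err = float(np.mean((G(z) - x) ** 2))
```

The recovered latent reproduces the target almost exactly; GSDD uses such inverted codes only as a starting point, since faithful reconstruction alone does not guarantee informative training data.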
Therefore, it is necessary to impose some constraints on the inversion process, especially on the latent vector z. Inspired by latent space optimization (LSO) (Bojanowski et al. 2018), we use latent space embedding (Abdal, Qin, and Wonka 2019), a simple but effective GAN-inversion method, to initialize and optimize the latent vectors by minimizing the feature and pixel distances between the synthetic images and the training samples, as shown in Equation 3:

z' = \arg\min_{z} (D_f + D_p) (3)

D_f = \frac{1}{d_f} \| \psi_{\vartheta}(G(z)) - \psi_{\vartheta}(x) \|^2 (4)

D_p = \frac{1}{d_I} \| G(z) - x \|^2 (5)

Where \psi_{\vartheta} is a pre-trained feature extractor, and d_f and d_I are the dimensions of the feature and the image, respectively.

Pre-trained SISR Generator G

Since the pre-trained generator can serve as an approximation of the complicated data distribution (Chan et al. 2021), optimizing its latent space yields more informative representations. In our proposed GSDD framework, the pre-trained generator can be taken from arbitrary existing GAN-based SISR models, such as ESRGAN (Wang et al. 2018c), SFTGAN (Wang et al. 2018b), DGAN (Li et al. 2022), FSSR (Fritsche, Gu, and Timofte 2019), RealSR (Ji et al. 2020), Real-ESRGAN (Wang et al. 2021), etc. For better generalizability and robustness, we employ the pre-trained generator from the SOTA Real-ESRGAN model, which covers a wide range of practical degradations. The generator serves two purposes here: 1) optimizing data in a more compact and informative latent coding space by back-propagation; 2) synthesizing distilled images by feed-forward processes.

Distribution Matching Optimization

As compared to the more commonly used gradient-matching (Zhao, Mopuri, and Bilen 2020) and trajectory-matching (Cazenavette et al. 2022) strategies for DD optimization, we employ distribution matching (Zhao and Bilen 2023) to avoid the intrinsic bi-level optimization, which results in expensive consumption of time and memory resources (Sachdeva and McAuley 2023).
Specifically, we match the distributions of data in the training set $\mathcal{T}$ and the synthetic set $\mathcal{S}$. The underlying assumption is that two datasets which are similar according to a particular distribution divergence metric also lead to similarly trained models (Zhao and Bilen 2023). In practice, we employ a pre-trained feature extractor $\psi_\vartheta$ with parameters $\vartheta$ to achieve a mapping from input space to feature space. The synthetic data is optimized by Equation 6, where $c \in \mathbb{N}^*$ denotes the image category.

Image Per Class (IPC)      1       5       10      20      40      60      90      100‡    500†    FULL
Ratio                      0.07%   0.34%   0.68%   1.36%   2.71%   4.07%   6.10%   6.78%   33.90%  100%
∗ESRGAN        SSIM↑       0.3390  0.4026  0.4632  0.4834  0.5795  0.6165  0.6875  0.7257  0.7598  0.7661
∗ESRGAN        LPIPS↓      0.6810  0.6334  0.5985  0.5101  0.4612  0.4185  0.3332  0.2940  0.2105  0.2012
∗SFTGAN        SSIM↑       0.4528  0.4547  0.4995  0.4958  0.5831  0.6459  0.7477  0.7754  0.8391  0.8422
∗SFTGAN        LPIPS↓      0.7055  0.6720  0.6398  0.5549  0.4836  0.4511  0.3490  0.3266  0.2847  0.2738
∗DGAN          SSIM↑       0.4137  0.4840  0.4911  0.4832  0.5904  0.6938  0.7236  0.7541  0.8445  0.8487
∗DGAN          LPIPS↓      0.6316  0.6003  0.5862  0.5028  0.4274  0.4021  0.2844  0.2589  0.2001  0.1966
∗FSSR          SSIM↑       0.4489  0.4259  0.4970  0.4856  0.5794  0.6480  0.7110  0.7705  0.8286  0.8340
∗FSSR          LPIPS↓      0.6681  0.6511  0.5913  0.5154  0.4502  0.4077  0.2933  0.2702  0.2332  0.2217
∗RealSR        SSIM↑       0.4173  0.4323  0.4288  0.4042  0.5124  0.6205  0.7154  0.7822  0.8219  0.8254
∗RealSR        LPIPS↓      0.5874  0.5667  0.5595  0.4723  0.4089  0.3805  0.2166  0.1610  0.1210  0.1123
∗Real-ESRGAN   SSIM↑       0.4805  0.4201  0.4992  0.4964  0.5140  0.6273  0.7618  0.7986  0.8331  0.8367
∗Real-ESRGAN   LPIPS↓      0.5102  0.5029  0.4807  0.4162  0.3841  0.3512  0.1979  0.1351  0.0924  0.0903

Table 1: The SSIM (Wang et al. 2004) and LPIPS (Zhang et al. 2018) results using different pre-trained generators for distillation w.r.t. different distilled data numbers on the OST (Wang et al. 2018b) dataset. The number of categories here is 7. IPC denotes distilled images per class.
∗ indicates that the model was trained on the distilled dataset optimized from the corresponding generator $G$. FULL means the model was trained on the full-scale original OST dataset. The underline represents the best SSIM and bold represents the best LPIPS. † indicates the closest result to that on the full data, and ‡ indicates the second closest.

$\mathcal{L}(\mathcal{S}) = \sum_{c=1}^{C} \| \psi_\vartheta(\mathcal{S}_c) - \psi_\vartheta(\mathcal{T}_c) \|^2$ (6)

Training Data Distillation

Inspired by the distribution matching idea discussed above, we distill dataset knowledge from real training images into synthetic ones produced by the generator. In practice, we minimize the distillation loss $\mathcal{L}_{dis}$ to optimize the latent vectors. Instead of directly optimizing the image pixels as in (Zhao and Bilen 2023), we optimize the latent vectors of a pre-trained GAN. The advantage of our approach is that it follows a generic framework, so any pre-trained GAN can be integrated into our process. Furthermore, since we can generate an arbitrary number of latent vectors to synthesize training images, the amount is more flexible and controllable compared with existing SR datasets, which only have a fixed number of samples. It is also worth noting that it costs less to increase training samples by synthesizing data than by augmenting HR images for current limited SR datasets. To conduct a meaningful distillation optimization (i.e., simple and fast), as mentioned before, we make use of the distribution-matching based distillation method, while getting rid of the bi-level optimization and the second-order derivation. Specifically, in a feature extractor space $\psi_\vartheta$ (i.e., a randomly sampled embedding space), the synthetic samples are expected to have a distribution similar to the real training samples, which can be formulated as Equation 7:

$\mathcal{L}_{dis} = \mathbb{E}_{\vartheta \sim P_\vartheta,\, \omega \sim \Omega} \| A - B \|^2$ (7)

where $A$ denotes $\frac{1}{|\mathcal{T}|} \sum_{i=1}^{|\mathcal{T}|} \psi_\vartheta(\mathcal{A}(x_i, \omega))$ and $B$ denotes $\frac{1}{|\mathcal{Z}|} \sum_{j=1}^{|\mathcal{Z}|} \psi_\vartheta(\mathcal{A}(G(z_j), \omega))$.
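A minimal sketch of the distribution-matching loss (Equations 6 and 7): per class, the mean feature embedding of the synthetic set is pulled toward that of the real set. A fixed random projection stands in for the randomly sampled extractor $\psi_\vartheta$, and the DSA operation is omitted for brevity.

```python
import numpy as np

# Per-class distribution matching: match mean embeddings of real and
# synthetic images; the two sets may hold unbalanced numbers of samples.
rng = np.random.default_rng(2)
V = rng.normal(size=(8, 16))
psi = lambda batch: batch @ V.T        # (n, 16) images -> (n, 8) features

def dm_loss(real_by_class, synth_by_class):
    # real_by_class / synth_by_class: dict class -> (n, 16) image array
    return float(sum(
        np.sum((psi(real_by_class[c]).mean(axis=0)
                - psi(synth_by_class[c]).mean(axis=0)) ** 2)
        for c in real_by_class))

real = {0: rng.normal(size=(50, 16)), 1: rng.normal(size=(60, 16))}
synth = {0: real[0][:5].copy(), 1: real[1][:5].copy()}  # toy "distilled" sets
loss_same = dm_loss(real, real)        # identical sets -> exactly zero
loss_gap = dm_loss(real, synth)        # small synthetic sets leave a gap to close
```

Because only class-wise means are compared, no bi-level optimization or second-order derivatives are needed, which is the efficiency argument made above.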
The differentiable siamese augmentation (DSA) operation $\mathcal{A}(\omega)$, parameterized with $\omega \sim \Omega$, enables the effective use of data augmentation strategies in the image synthesis process (Zhao and Bilen 2021).

Regularization Term R

Because of the imbalance between the latent vector set $|\mathcal{Z}|$ and the training sample set $|\mathcal{T}|$, training the model with the same distillation loss $\mathcal{L}_{dis}$ can easily result in homogenization of different latent vectors (Elingaard et al. 2022). This reduces the informativeness of the data when training models, thus impairing performance. Therefore, in accordance with common practice in the DD domain (Yu, Liu, and Wang 2024), we further introduce a regularization term $R$ to prevent the risk of over-fitting, which can be formulated as Equation 8:

$R = \mathbb{E}_{\vartheta \sim P_\vartheta,\, \omega \sim \Omega} \| C - D \|^2$ (8)

where $C = \psi_\vartheta(\mathcal{A}(x_k, \omega))$ and $D = \psi_\vartheta(\mathcal{A}(G(z_k), \omega))$. Unlike Equation 7, where the numbers of $x_i$ and $z_j$ are unbalanced, here $x_k$ and $z_k$ are in pairs to retain the original information. Finally, the overall training loss of GSDD is Equation 9, where $\lambda$ serves as the regularization coefficient:

$\mathcal{L}_{overall} = (1 - \lambda) \cdot \mathcal{L}_{dis} + \lambda \cdot R$ (9)

Experiments

Dataset & Evaluation

Following the paradigm of common DD protocols, the original data are required to have explicit categories for better distillation. To satisfy this requirement, we select the SR dataset OST (Wang et al. 2018b) for our main experiments, in which all images have labeled classes. It consists of over 10,324 images in 7 categories (sky, water, building, grass, plant, animal, and mountain).
Since it includes two subsets, we use OutdoorSceneTraining for training and OutdoorSceneTest300 for testing. We further employ a real-world dataset, DPED (Ignatov et al. 2017), to verify the generalization capability of GSDD under realistic degradation. As in most SR comparisons, we adopt SSIM (Wang et al. 2004), LPIPS (Zhang et al. 2018), NIQE (Mittal, Soundararajan, and Bovik 2013), NRQM (Ma et al. 2017), and PI (Wang et al. 2018c) to evaluate the recovery accuracy of the various SR models trained on the distilled data.

Algorithm      Volume   NIQE↓   NRQM↑   PI↓
BICUBIC        —        7.9556  3.1220  7.3963
ESRGAN         13,774   5.1441  4.7773  5.6033
SFTGAN         10,324   5.1622  4.5932  5.5227
DGAN           3,450    6.9110  4.0772  6.4135
FSSR           3,450    7.3122  5.1077  6.0089
RealSR         3,450    4.9824  6.1021  4.7405
Real-ESRGAN    13,774   4.5896  5.9741  4.2089
GSDD           700      4.7251  5.8498  4.4013
Difference     —        0.1355  0.2523  0.1924

Table 2: The comparison on the DPED (Ignatov et al. 2017) dataset with SOTA SISR models trained on the datasets from their official papers. Bold represents the best and underline the second best. The difference is between the proposed GSDD model and the best result.

Training Details

In our experiments, we use ResNet-18 (He et al. 2016) for feature extraction, due to its similarity to most existing SISR networks. The scale factor for the SISR task is ×4. To obtain distilled images in pairs, we apply classical bicubic down-sampling to the HR distilled images to form their LR counterparts, which facilitates implementation while preserving the integrity of the image content and edge structure. We use the Adam optimizer (Kingma and Ba 2014) with β1 = 0.9, β2 = 0.999, and learning rate η = 0.001 for training. The number of training iterations for optimizing the latent vectors is 500K, and the regularization coefficient is set to λ = 0.1.
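The paired-data step above can be sketched as follows; block averaging stands in for the bicubic kernel, so this shows only the ×4 shape relationship between each distilled HR image and its LR counterpart, not the actual degradation model.

```python
import numpy as np

# Form the LR counterpart of an HR distilled image at scale factor x4.
# Block averaging is a simplified stand-in for bicubic down-sampling.
def downsample_x4(hr):
    h, w = hr.shape
    assert h % 4 == 0 and w % 4 == 0
    return hr.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

hr = np.arange(64.0).reshape(8, 8)     # toy 8x8 "HR distilled image"
lr = downsample_x4(hr)                  # 2x2 LR counterpart
```

Pairing each synthesized HR image with a deterministically degraded LR version is what lets a standard SISR network be trained directly on the distilled set.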
Comparison to the State-of-the-art

As we are among the first to explore DD for SISR tasks, we first select various GAN-based SISR models to verify the effectiveness and versatility of our proposed DD pipeline, including ESRGAN (Wang et al. 2018c), SFTGAN (Wang et al. 2018b), DGAN (Li et al. 2022), FSSR (Fritsche, Gu, and Timofte 2019), RealSR (Ji et al. 2020), and Real-ESRGAN (Wang et al. 2021). To be more specific, we perform DD based on the generators from these different SISR models, and report model performance when trained on the small-scale distilled data. We further compare with performance when trained on the full-scale original dataset for better demonstration.

Quantitative Aspect

The experiments include two parts: the first explores the recovery accuracy of SOTA GAN-based SISR models w.r.t. different distilled image numbers (see Table 1); the second compares against the reported generalization capability of advanced methods (see Table 2). From Table 1, we make three observations: 1) as the number of distilled images grows, the recovery accuracy of each SISR model also increases (when IPC = 500, the performance is almost the same as on the original data), due to the introduction of more informative features for model training; 2) even when trained on only 100 distilled samples per class (reducing the amount of data by almost 93.2%), models still achieve SSIM and LPIPS comparable to those trained on the entire dataset, which fully validates our effectiveness; 3) when using Real-ESRGAN's generator (i.e., the setting of our proposed GSDD model), we obtain the best LPIPS results among all models. Based on the above findings, we use IPC = 100 for subsequent experiments, balancing data storage (and training efficiency) against model effectiveness. To verify the generalization capability of GSDD under more complex degradations, we further compare it with advanced SISR models on the challenging real-world dataset DPED (Ignatov et al. 2017).
From Table 2, it can be observed that GSDD maintains good performance when tested on images outside the training distribution. Notably, GSDD outperforms some SOTA solutions (e.g., DGAN (Li et al. 2022) and FSSR (Fritsche, Gu, and Timofte 2019)) relative to their reported performance, even though it is trained on fewer samples, reflecting the better generalization capability of our model. Nevertheless, the performance achieved by GSDD also relies on advanced SISR algorithms, as can be derived from the Real-ESRGAN results (see Table 1).

Qualitative Aspect

Here we show some visualization results. Firstly, in Figure 3, we select and exhibit one sample of the distilled images from each image category in the OST (Wang et al. 2018b) dataset. It is interesting to see that the distilled images from each category appear abstract and cannot be easily identified by human eyes; however, to DNNs, they are good and meaningful image samples. On closer inspection, we can find some peculiarities in these distilled images. For example, the distilled image for the animal category has some furry textures, the image in the plant category contains some fruit or leaf shapes, and the image in the water category consists of ripple-like grains. This stimulates us to think further about how DNNs interpret good features and how we can construct such good training samples for DNNs in future related studies.

Figure 3: A visual demonstration of distilled image samples generated from the OST (Wang et al. 2018b) dataset (categories: animal, building, grass, mountain, plant, sky, and water). The first row is the distilled image and the other two rows are the original images of the corresponding categories.

Figure 4: Visual comparison of SR results generated by GSDD (trained on 700 distilled images) and SOTA SISR models (trained on 10,324 original images) on OST (ground truth, ESRGAN, SFTGAN, DGAN, FSSR, RealSR, Real-ESRGAN, and ours).
The arrow indicates comparable visual effects. Please zoom in for the best view.

Figure 5: Visual comparison on the real-world dataset DPED (Ignatov et al. 2017) (LR (×4 nearest), BICUBIC, ESRGAN, SFTGAN, DGAN, FSSR, RealSR, Real-ESRGAN, and ours). Notably, we only trained on small-scale distilled data, while the others were trained on full-scale training data. The arrow indicates comparable visual effects. Please zoom in for the best view.

We then compare restoration results from other advanced SISR models trained on the whole dataset with SR images reconstructed by the model trained on our distilled data (IPC = 100). From Figure 4, it can be seen that we obtain a fairly good perceptual result compared with other solutions, despite being trained on only 700 distilled images, which is 1/15 of the size of the original dataset. In particular, we are able to faithfully restore detailed textures in the original image, which even the RealSR (Ji et al. 2020) model does not achieve. Furthermore, we achieve the closest visual effects to our baseline model Real-ESRGAN while using small-scale synthetic data. These results fully validate the effectiveness of our approach. Next, in Figure 5, we display the SR results of different models on the real-world dataset DPED (Ignatov et al. 2017). As can be seen, even for unknown test data from real scenes, we obtain relatively good perceptual results (slightly below Real-ESRGAN but clearly better than the other competitors), demonstrating the excellent generalizability of our method.

Model        SSIM↑   LPIPS↓
GSDD         0.7986  0.1351
⋆ESRGAN      0.5968  0.4237
⋆SFTGAN      0.6221  0.4418
⋆DGAN        0.6577  0.3623
⋆FSSR        0.6996  0.3558
⋆RealSR      0.6831  0.2807
⋆SRResNet    0.6434  0.3470
⋆SwinIR      0.6670  0.2983

Table 3: The cross-architecture validation. The distilled data generated by the proposed GSDD method are evaluated on other SISR architectures. ⋆ means that the SISR network was retrained on the specified distilled images.
We also investigate how different distilled image numbers affect the visual quality of the restored images, using four settings: IPC = 50, 100, 200, and 500. According to Figure 6, two trends can be observed: 1) as IPC increases, the textural features in the distilled images spread out, making each distilled image more distinguishable from the others; 2) the distilled data contain increasing detail and finer textures, gradually approaching the visual quality of the reference image (especially for IPC = 500).

Figure 6: Visual comparison of distilled and restored results on the OST dataset when GSDD is trained with different distilled image numbers (IPC = 50, 100, 200, and 500). Please zoom in for the best view.

Cross-architecture Validation

To further verify the generality of our distilled data, we perform validation experiments. Concretely, we first learn latent vectors and generate distilled images with GSDD. Then we retrain other SISR models on these data; we additionally include representative CNN-based (SRResNet (Ledig et al. 2017)) and Transformer-based (SwinIR (Liang et al. 2021)) models to verify the applicability of our approach. Finally, we test the models on OutdoorSceneTest300 to observe the corresponding recovery accuracy. The results are listed in Table 3: when the synthetic training samples are applied to other SISR frameworks, acceptable restoration performance can still be achieved. This demonstrates that the distilled data do include features that generalize across SISR models. In other words, the proposed distillation process is able to extract and retain common representations that are informative for various deep-learning-based SISR models.
Ablation Study

The Effectiveness of Data Compression

We compare three representative SISR models (ESRGAN, RealSR, and Real-ESRGAN) on different training data: coreset selection (CS), the proposed dataset distillation method (GSDD), and the total dataset (TD). For a fair comparison, the number of typical samples from CS is identical to the number of latent vectors from GSDD. We evaluate performance using the single-iteration training time (s) at a fixed number (300K) of iterations, and the image restoration precision (LPIPS). According to Figure 7, we achieve generalization performance comparable to that obtained using the full-size training data (TD) while consuming much less time (e.g., a nearly ×8 improvement in training efficiency for the Real-ESRGAN model). Furthermore, we outperform CS compression in both time consumption and recovery accuracy, which fully demonstrates our superiority.

Figure 7: The ablation studies of data compression.

Figure 8: The ablation studies of R, λ, and IPC.

The Effectiveness of R, λ, and IPC

We conduct an ablation experiment to validate $R$ and show the corresponding results in Figure 8. Concretely, when the coefficient λ = 0, training unbalanced numbers of samples with an identical $\mathcal{L}_{dis}$ leads to the homogenization problem of latent vectors and reduces the informativeness of the distilled samples. It can be seen that the recovery accuracy (LPIPS) drops from 0.1351 to 0.4586 for IPC = 100. Furthermore, we investigate the influence of varying λ (from 0 to 1) on the recovery accuracy (LPIPS). Overall, according to Figure 8, we find that the recovery accuracy does not always improve as λ increases. Specifically, for IPC = 50 and IPC = 100, the best LPIPS is obtained at λ = 0.1, while for IPC = 1, it is obtained at λ = 0.2.
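The λ-weighted objective ablated here (Equations 8 and 9) can be sketched as below; the paired regularization compares one real sample against its matched synthetic sample, and λ = 0.1 follows the reported default. The toy feature extractor is an assumption.

```python
import numpy as np

# Paired regularization R (Equation 8) plus the overall loss (Equation 9).
rng = np.random.default_rng(4)
V = rng.normal(size=(8, 16))
psi = lambda img: V @ img              # toy feature extractor

def reg_term(x_k, g_zk):
    # feature distance between a real sample and its paired synthetic one
    return float(np.sum((psi(x_k) - psi(g_zk)) ** 2))

def overall_loss(dis_loss, r, lam=0.1):
    return (1.0 - lam) * dis_loss + lam * r   # Equation 9

x_k = rng.normal(size=16)
r_zero = reg_term(x_k, x_k)            # identical pair -> R = 0
total = overall_loss(dis_loss=2.0, r=1.0, lam=0.1)   # 0.9*2.0 + 0.1*1.0
```

Setting λ = 0 removes the paired term entirely, which corresponds to the homogenization failure mode described above; λ close to 1 instead suppresses the distribution-matching signal.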
Conclusion

This paper explores the possibility of applying DD to benefit low-level CV tasks (especially SISR). Overall, we propose the GSDD framework, which optimizes and synthesizes training samples via GAN-inversion in a latent generative space. Our optimization is based on a distribution-matching data condensation strategy, so that the SISR network can achieve performance comparable to models trained on the original dataset. We further improve the approach by proposing a distillation loss with the regularization term $R$. Finally, we demonstrate its effectiveness via extensive experiments. We achieve competitive performance compared to SOTA SISR solutions, with a nearly ×8 increase in training efficiency and a saving of almost 93.2% of data storage space.

Acknowledgments

This work was supported by the National Science Foundation of China under Grants No. U19B2037 and No. 61901384, and the Natural Science Basic Research Program of Shaanxi Province (Program No. 2021JCW-03).

References

Abdal, R.; Qin, Y.; and Wonka, P. 2019. Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space? In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Abdal, R.; Qin, Y.; and Wonka, P. 2020. Image2StyleGAN++: How to Edit the Embedded Images? In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8293–8302. Agustsson, E.; and Timofte, R. 2017. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 1122–1131. Bojanowski, P.; Joulin, A.; Lopez-Paz, D.; and Szlam, A. 2018. Optimizing the Latent Space of Generative Networks. In Dy, J.; and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, 600–609. PMLR.
Cazenavette, G.; Wang, T.; Torralba, A.; Efros, A. A.; and Zhu, J.-Y. 2022. Dataset Distillation by Matching Training Trajectories. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10708–10717. Chan, K. C.; Wang, X.; Xu, X.; Gu, J.; and Loy, C. C. 2021. GLEAN: Generative Latent Bank for Large-Factor Image Super-Resolution. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14240– 14249. Creswell, A.; and Bharath, A. A. 2019. Inverting the Generator of a Generative Adversarial Network. IEEE Transactions on Neural Networks and Learning Systems, 30(7): 1967–1974. Cui, J.; Wang, R.; Si, S.; and Hsieh, C.-J. 2022. DCBENCH: Dataset Condensation Benchmark. arXiv preprint arXiv:2207.09639. Deng, Z.; and Russakovsky, O. 2022. Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks. In Neural Information Processing Systems (NeurIPS). Dong, C.; Loy, C. C.; He, K.; and Tang, X. 2014. Learning a Deep Convolutional Network for Image Super-Resolution. In Fleet, D.; Pajdla, T.; Schiele, B.; and Tuytelaars, T., eds., Computer Vision – ECCV 2014, 184–199. Cham: Springer International Publishing. ISBN 978-3-319-10593-2. Dong, C.; Loy, C. C.; He, K.; and Tang, X. 2016. Image Super-Resolution Using Deep Convolutional Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2): 295–307. Dong, T.; Zhao, B.; and Lyu, L. 2022. Privacy for free: How does dataset condensation help privacy? In International Conference on Machine Learning, 5378–5396. PMLR. Elingaard, M. O.; Aage, N.; Bærentzen, J. A.; and Sigmund, O. 2022. De-homogenization using convolutional neural networks. Computer Methods in Applied Mechanics and Engineering, 388: 114197. Fritsche, M.; Gu, S.; and Timofte, R. 2019. Frequency separation for real-world super-resolution. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 3599–3608. IEEE. 
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative Adversarial Networks. Commun. ACM, 63(11): 139–144. Haris, M.; Shakhnarovich, G.; and Ukita, N. 2021. Deep Back-Projection Networks for Single Image Super-Resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(12): 4323–4337. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Hu, S.; Goetz, J.; Malik, K.; Zhan, H.; Liu, Z.; and Liu, Y. 2022. Fedsynth: Gradient compression via synthetic data in federated learning. arXiv preprint arXiv:2204.01273. Ignatov, A.; Kobyshev, N.; Timofte, R.; Vanhoey, K.; and Van Gool, L. 2017. DSLR-Quality Photos on Mobile Devices With Deep Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). Ji, X.; Cao, Y.; Tai, Y.; Wang, C.; Li, J.; and Huang, F. 2020. Real-world super-resolution via kernel estimation and noise injection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 466–467. Kim, J.; Lee, J. K.; and Lee, K. M. 2016. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1646–1654. Kim, J.-H.; Kim, J.; Oh, S. J.; Yun, S.; Song, H.; Jeong, J.; Ha, J.-W.; and Song, H. O. 2022. Dataset Condensation via Efficient Synthetic-Data Parameterization. In Chaudhuri, K.; Jegelka, S.; Song, L.; Szepesvari, C.; Niu, G.; and Sabato, S., eds., Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, 11102–11118. PMLR. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; and Shi, W. 2017. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 105–114. Lei, S.; and Tao, D. 2024. A Comprehensive Survey of Dataset Distillation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(1): 17–32. Li, W.; Zhou, K.; Qi, L.; Lu, L.; and Lu, J. 2022. Best-Buddy GANs for Highly Detailed Image Super-resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2): 1412–1420. Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; and Timofte, R. 2021. SwinIR: Image Restoration Using Swin Transformer. In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 1833–1844. Lim, B.; Son, S.; Kim, H.; Nah, S.; and Lee, K. M. 2017. Enhanced Deep Residual Networks for Single Image Super-Resolution. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 1132–1140. Liu, P.; Yu, X.; and Zhou, J. T. 2022. Meta knowledge condensation for federated learning. arXiv preprint arXiv:2209.14851. Ma, C.; Yang, C.-Y.; Yang, X.; and Yang, M.-H. 2017. Learning a no-reference quality metric for single-image super-resolution. Computer Vision and Image Understanding, 158: 1–16. Mittal, A.; Soundararajan, R.; and Bovik, A. C. 2013. Making a "Completely Blind" Image Quality Analyzer. IEEE Signal Processing Letters, 20(3): 209–212. Nguyen, T.; Chen, Z.; and Lee, J. 2020. Dataset meta-learning from kernel ridge-regression. In 2020 International Conference on Learning Representations (ICLR). Nguyen, T.; Novak, R.; Xiao, L.; and Lee, J. 2021. Dataset distillation with infinitely wide convolutional networks.
In 2021 Conference and Workshop on Neural Information Processing Systems (NeurIPS). Phillips, J. M. 2017. Coresets and sketches. In Handbook of discrete and computational geometry, 1269–1288. Chapman and Hall/CRC. Rebuffi, S.-A.; Kolesnikov, A.; Sperl, G.; and Lampert, C. H. 2017. icarl: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2001–2010. Sachdeva, N.; and McAuley, J. 2023. Data Distillation: A Survey. arXiv preprint arXiv:2301.04272. Such, F. P.; Rawal, A.; Lehman, J.; Stanley, K.; and Clune, J. 2020. Generative teaching networks: Accelerating neural architecture search by learning to generate synthetic training data. In International Conference on Machine Learning, 9206–9216. PMLR. Timofte, R.; Agustsson, E.; Gool, L. V.; Yang, M.-H.; Zhang, L.; Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K. M.; Wang, X.; Tian, Y.; Yu, K.; Zhang, Y.; Wu, S.; Dong, C.; Lin, L.; Qiao, Y.; Loy, C. C.; Bae, W.; Yoo, J.; Han, Y.; Ye, J. C.; Choi, J.-S.; Kim, M.; Fan, Y.; Yu, J.; Han, W.; Liu, D.; Yu, H.; Wang, Z.; Shi, H.; Wang, X.; Huang, T. S.; Chen, Y.; Zhang, K.; Zuo, W.; Tang, Z.; Luo, L.; Li, S.; Fu, M.; Cao, L.; Heng, W.; Bui, G.; Le, T.; Duan, Y.; Tao, D.; Wang, R.; Lin, X.; Pang, J.; Xu, J.; Zhao, Y.; Xu, X.; Pan, J.; Sun, D.; Zhang, Y.; Song, X.; Dai, Y.; Qin, X.; Huynh, X.P.; Guo, T.; Mousavi, H. S.; Vu, T. H.; Monga, V.; Cruz, C.; Egiazarian, K.; Katkovnik, V.; Mehta, R.; Jain, A. K.; Agarwalla, A.; Praveen, C. V. S.; Zhou, R.; Wen, H.; Zhu, C.; Xia, Z.; Wang, Z.; and Guo, Q. 2017. NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 1110–1121. Wang, T.; Zhu, J.-Y.; Torralba, A.; and Efros, A. A. 2018a. Dataset distillation. arXiv preprint arXiv:1811.10959. Wang, X.; Xie, L.; Dong, C.; and Shan, Y. 2021. 
Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 1905–1914. Wang, X.; Yu, K.; Dong, C.; and Change Loy, C. 2018b. Recovering Realistic Texture in Image Super-Resolution by Deep Spatial Feature Transform. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 606–615. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; and Change Loy, C. 2018c. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops. Wang, Z.; Bovik, A.; Sheikh, H.; and Simoncelli, E. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600–612. Xia, W.; Zhang, Y.; Yang, Y.; Xue, J.-H.; Zhou, B.; and Yang, M.-H. 2023. GAN Inversion: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3): 3121–3138. Yu, R.; Liu, S.; and Wang, X. 2024. Dataset Distillation: A Comprehensive Review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(1): 150–170. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 586–595. Zhang, W.; Liu, Y.; Dong, C.; and Qiao, Y. 2019. RankSRGAN: Generative Adversarial Networks With Ranker for Image Super-Resolution. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 3096–3105. Zhao, B.; and Bilen, H. 2021. Dataset condensation with differentiable siamese augmentation. In 2021 International Conference on Machine Learning (ICML), 12674–12685. Zhao, B.; and Bilen, H. 2023. Dataset Condensation with Distribution Matching. In 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 6503–6512. Zhao, B.; Mopuri, K. R.; and Bilen, H. 2020.
Dataset condensation with gradient matching. In 2020 International Conference on Learning Representations (ICLR).
CSL: Class-Agnostic Structure-Constrained Learning for Segmentation Including the Unseen

Hao Zhang1, Fang Li1, Lu Qi2, Ming-Hsuan Yang2,3, Narendra Ahuja1
1University of Illinois at Urbana-Champaign 2University of California Merced 3Google Research
{haoz19, fangli3}@illinois.edu, {lqi5, mhyang}@ucmerced.edu, n-ahuja@illinois.edu

Abstract

Addressing Out-Of-Distribution (OOD) Segmentation and Zero-Shot Semantic Segmentation (ZS3) is challenging, necessitating segmenting unseen classes. Existing strategies adapt the class-agnostic Mask2Former (CA-M2F) tailored to specific tasks. However, these methods cater to singular tasks and demand training from scratch, and we demonstrate certain deficiencies in CA-M2F that affect performance. We propose Class-Agnostic Structure-Constrained Learning (CSL), a plug-in framework that can integrate with existing methods, thereby embedding structural constraints and achieving performance gains on tasks including the unseen, specifically OOD, ZS3, and domain adaptation (DA) tasks. CSL can integrate with existing methods in two schemes: (1) by distilling knowledge from a base teacher network, enforcing constraints across the training and inference phases, or (2) by leveraging established models to obtain per-pixel distributions without retraining, appending constraints during the inference phase. We propose soft assignment and mask split methodologies that enhance OOD object segmentation. Empirical evaluations demonstrate CSL's prowess in boosting the performance of existing algorithms spanning OOD segmentation, ZS3, and DA segmentation, consistently transcending the state-of-the-art across all three tasks.

Introduction

Semantic segmentation is a fundamental task in computer vision, which associates with each pixel in a given image the probabilities of belonging to different classes. Recent approaches have achieved remarkable results on several closed-set benchmarks that contain images from known classes, called In-Distribution (ID) images.
However, segmentation involving the unseen, e.g., Out-Of-Distribution (OOD) and Zero-Shot Semantic (ZS3) segmentation, is always challenging because it requires segmentation and discrimination based on training with only ID images. The existing methods for such tasks can be distinguished by whether they use OOD data for training. Some methods expand the training set to include OOD images from other datasets (Hendrycks, Mazeika, and Dietterich 2018a; Chan, Rottmann, and Gottschalk 2021a; Kang, Kwak, and Kang 2022; Tian et al. 2022), or utilize large-scale models, e.g., SAM (Kirillov et al. 2023), to generate region proposals.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Such expansion-based approaches are not of great interest in this paper, since we aim to solve the general problem of OOD segmentation without access to any OOD images for training. We propose to learn models of objects that extend to classes beyond those in the ID set. Existing segmentation methods typically infer OOD if some properties of the outputs are sufficiently different from those seen on ID images. An example of such a property is the uncertainty in pixel label prediction, as in the SML methods (Figure 2). Other examples are errors in image reconstruction (Lis et al. 2019), and the degree to which results are perturbed by adversarial attacks (Liang, Li, and Srikant 2017; Besnier et al. 2021). However, these methods produce noisy predictions due to a lack of structured knowledge. Current techniques, such as those in (Nayal et al. 2023; Grcić, Šarić, and Šegvić 2023), tackle this issue using the region-based framework Mask2Former (M2F) (Cheng et al. 2021). However, to achieve optimal performance, they require OOD data and completely retraining the model from scratch. For ZS3 or open-world semantic segmentation, existing methods (Ding et al. 2022; Xu et al.
2022) typically leverage CA-M2F, trained on the ID set, as a region generator and utilize CLIP (Radford et al. 2021) to identify the semantic class for each region. Some works (Qi et al. 2022) empirically demonstrate that CA training benefits performance on OOD data. Since M2F decouples the per-pixel prediction task into two sub-tasks, (1) mask prediction and (2) per-mask class prediction, optimized by the mask loss and class loss, a straightforward way is to remove the class loss and leverage hard assignment as post-processing during inference. However, our observations reveal that such adjustments are insufficient to eliminate class information. The process of hard assignment frequently produces unanticipated outcomes. For instance, certain objects might not have corresponding masks, and in certain situations, multiple objects may be erroneously blended into a single mask.
In this paper, we present the Class-Agnostic Structure-Constrained Learning (CSL) framework for seamless integration with existing methodologies, including OOD, ZS3, and DA segmentation, to improve their performance by incorporating structure constraints. CSL offers two plug-in integration schemes: (1) knowledge distillation from a base teacher network, potentially any existing method, with structure constraints imposed during training and inference; (2) direct application of existing methods for per-pixel prediction, incorporating structure constraints solely during inference, bypassing retraining. While the first scheme facilitates end-to-end training, the second negates the need for retraining, and both yield comparable gains over foundational methods.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7078
Figure 1: Overview of our CSL framework. CSL consists of a backbone, a pixel decoder, a transformer decoder, a base teacher network, and MLPs. N learnable region queries and the image features are fed to the transformer decoder and MLPs to obtain N pairs of latent region prototypes and their validity scores. We calculate the normalized similarity between each element E_{h,w} of the per-pixel embeddings and each P_n, n ∈ {1, 2, ..., N}, by a simple dot product followed by the sigmoid function to get N region scores. The validity scores indicate the degree to which the region prototypes are valid for a given image. During training, a validity loss, a region loss, and a distillation loss are used to optimize the model. Instead of assigning each pixel of the input image to one of the prior fixed classes, CSL assigns it to one of N learnable region prototypes via our proposed soft assignment. During inference, we introduce structure-constrained fusion to calculate the final prediction.
In semantic segmentation, annotations commonly amalgamate all instances of a class into a singular mask. We split this mask into multiple isolated components for training, mitigating bias from seen classes. During inference, CSL employs a soft assignment to derive region proposals at the disconnected-component level. Compared with the prevalent hard assignment, the soft assignment boosts performance on unseen samples.
The main contributions of this paper are as follows:
• We present CSL, a modular plug-in framework with two schemes, designed for seamless integration with established methodologies, enhancing the segmentation of unseen classes by incorporating structural constraints.
• We propose mask split preprocessing, splitting class masks into isolated components, effectively attenuating the bias of seen class data. Furthermore, we employ a soft assignment in post-inference for region proposal generation and elucidate the driving factors behind the observed performance enhancements.
• Through extensive experimental validation, we ascertain that CSL markedly enhances 10 prevailing techniques across all three segmentation tasks, including OOD segmentation, ZS3, and DA segmentation, consistently outstripping state-of-the-art benchmarks.
Related Work
Out-of-Distribution Segmentation
Uncertainty-based Methods. Leveraging pixel-wise prediction uncertainty, OOD segmentation methods (Hendrycks and Gimpel 2016; Lee et al. 2017; Liang, Li, and Srikant 2017; Tian et al. 2021; Zhang and Zhang 2022) avoid retraining, thus saving computation. However, issues arise in hard-to-predict regions. Jung et al. (2021) refine boundary anomaly scores, while others (Kendall and Gal 2017; Lakshminarayanan, Pritzel, and Blundell 2017; Mukhoti and Gal 2018) apply MC dropout, often with limited success.
Image Reconstruction. Autoencoders and GANs dominate reconstruction methods (Baur et al. 2019; Creusot and Munawar 2015; Di Biase et al. 2021; Haldimann et al. 2019; Liu et al. 2020). Notably, ID-only trained models (Xia et al. 2020; Lis et al. 2020; Ohgushi, Horiguchi, and Yamanaka 2020; Creusot and Munawar 2015; Lis et al. 2019; Vojir et al. 2021) effectively reconstruct ID samples but falter with OODs, hindered further by domain sensitivity and extended training/inference times.
Adversarial Attacks. Adversarial attacks serve as OOD data simulators in image classification (Goodfellow, Shlens, and Szegedy 2014) and detection (Ma et al. 2018). Besnier et al.'s ObsNet, though utilizing Local Adversarial Attacks, faces noisy prediction challenges due to the absence of structural information.
Outlier Exposure. The outlier exposure (OE) strategy by Hendrycks et al. (Hendrycks, Mazeika, and Dietterich 2018b) augments the training set with non-overlapping outliers. Conversely, some methods (Chan, Rottmann, and Gottschalk 2021b; Bevandić et al. 2019; Vandenhende et al. 2020; Bevandić et al. 2018; Liu et al. 2022; Tian et al.
2022) embed OOD objects from datasets like COCO (Lin et al. 2014) and ADE20K (Zhou et al. 2019), potentially reducing OOD segmentation to mere binary segmentation due to overlaps.
Proposed Method
As shown in Figure 1, CSL provides two schemes to plug in existing methods. The first is an end-to-end scheme, which distills the knowledge from the base teacher network to the CSL framework. The second scheme directly utilizes the existing models as a base teacher network to obtain per-pixel distributions and fuses them with the class-agnostic region proposals during inference, without retraining. The validity loss, region loss, and distillation loss, the last of which is the mean Huber loss between the predicted per-pixel distribution and the output of the base teacher network (only for scheme 1), are used for optimization.
Class-Agnostic Training. To capture the essential features of semantic classes that are applicable beyond the training classes, we design CSL in a class-agnostic way. It learns region prototypes characterized by visual appearance and spatial features and uses them to generate region proposals. CSL first uses a backbone and a pixel decoder to generate multi-level feature embeddings E_l ∈ R^{H_l×W_l×D_l}, where l ∈ {4, 8, 16, 32} indicates the downsampling factor of the feature map relative to the original image, and D_l is the dimension of the embeddings. In addition, we have N learnable queries, which interact in cascade with the multi-level feature embeddings E_l, l ∈ {8, 16, 32}, to generate N region prototypes P ∈ R^{N×256}. These prototypes act as centers for grouping the per-pixel embeddings E ∈ R^{(H/4)×(W/4)×256}, followed by an upsampling to get the region prediction R ∈ [0, 1]^{N×H×W}. The region scores R and the validity scores V ∈ [0, 1]^N are fed into the soft assignment module (Sec 3.3) to generate region proposals.
Comparison with CA-M2F. In Mask2Former (Cheng et al.
2021), they use the semantic class predictions V ∈ [0, 1]^{N×C} instead of the validity score, where C is the number of classes, and it empirically yields exceptional semantic segmentation results Y ∈ R^{H×W×C} by matrix multiplication between V and R. We explain this matrix multiplication as the calculation of the likelihood p(x_{h,w} ∈ c), where x_{h,w} and c denote the pixel at location (h, w) of input image X and the class c ∈ C:

\max_c p(x_{h,w} \in c) = \max_c \sum_{n=1}^{N} p(x_{h,w} \in R_n \cap x_{h,w} \in c)
= \max_c \sum_{n=1}^{N} p(x_{h,w} \in R_n) \times p(x_{h,w} \in c \mid x_{h,w} \in R_n)
= \max_c \sum_{n=1}^{N} r_{n,h,w} \times v_{n,c} = \max_c (R^\top \cdot V)_{h,w,c},   (1)

where r_{n,h,w} and v_{n,c}, the entries of R and V, indicate the probability of pixel x_{h,w} belonging to region R_n and of region R_n belonging to class c. Given that the pixels in the same region follow the same class distribution, we have p(R_n ∈ c) = p(x_{h,w} ∈ c | x_{h,w} ∈ R_n). CA-M2F removes the class loss and uses R as the region proposals, which are demonstrated to be unsatisfactory due to the redundancy of regions. To reduce useless regions, some methods keep the class prediction but reduce it to a binary classification indicating the validity of the region, and they utilize hard assignment during inference to generate the region proposals.
Soft Assignment. However, the hard assignment requires a manually selected threshold, which limits generalization, and multiple experiments demonstrate its limited performance on OOD objects (Figure 3). Thus, we propose a soft assignment module to maximize the objective of class-agnostic semantic segmentation: p(x_{h,w} ∈ R_n ∩ R_n ∈ V), where R_n ∈ V denotes region R_n being valid. In CSL, we use binary cross-entropy losses to optimize the validity score and region score by maximizing the likelihoods of p(x_{h,w} ∈ R_n) and p(x_{h,w} ∈ V | x_{h,w} ∈ R_n). This allows us to interpret v_n, in the validity score matrix V, as the probability of the pixels in region R_n being valid, which we call the region's validity score.
Since multiple overlapping region prototypes exist, V helps select the valid masks. For instance, simple images with few objects result in fewer valid regions than complex ones. Similarly, we can interpret r_{n,h,w}, in the region prediction matrix R, as the probability that pixel x_{h,w} belongs to region R_n, which we call the region score. The objective for class-agnostic segmentation can be derived from these two likelihoods as follows:

\max_n p(x_{h,w} \in R_n \cap R_n \in V) = \max_n p(x_{h,w} \in R_n) \times p(x_{h,w} \in V \mid x_{h,w} \in R_n)
= \max_n (r_{n,h,w} \times v_n) = \max_n (R^\top * V)_{h,w,n},   (2)

which maximizes the probability that the pixel x_{h,w} is from region R_n and the region R_n is valid, where * denotes pixel-wise multiplication. Note that given that pixel x_{h,w} belongs to region R_n, the validity of R_n can be represented by x_{h,w} because the validity score is assigned per region.
Comparison with hard assignment. For panoptic segmentation, Mask2Former and EntitySeg employ hard assignment during inference. Specifically, a binary region mask is generated from each region prediction by checking at each pixel if the mask score exceeds a certain threshold. The region masks are then stacked in ascending order of validity score, where a mask with a higher validity score covers those with lower scores. However, this approach has limitations. First, hard assignment employs a fixed threshold to filter out low-score regions, which often results in missed pixels that are not allocated to any region (a, c, and d in Figure 3). Second, it performs poorly in complex and detailed scenes, as the final results are obtained from hard region masks instead of the per-pixel scores used in the soft assignment (f in Figure 3).
Mask Split. Another problem is that existing methods such as entity segmentation (Qi et al. 2022) work well when training with instance-wise labels while failing with semantic-wise labels.
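To make Eqs. (1) and (2) concrete, the following NumPy sketch contrasts the standard region-class product with the class-agnostic soft assignment on random toy tensors; all shapes, values, and the 0.5 hard-assignment threshold are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: N region prototypes, C known classes, an H x W image.
N, C, H, W = 4, 3, 8, 8

R = rng.random((N, H, W))    # r[n, h, w]: prob. pixel (h, w) lies in region n
V_cls = rng.random((N, C))   # v[n, c]: prob. region n is class c (M2F head)
v = rng.random(N)            # v[n]: class-agnostic validity score (CSL head)

# Eq. (1): per-pixel class scores via the region-class product R^T . V.
class_scores = np.einsum("nhw,nc->hwc", R, V_cls)   # H x W x C
semantic_pred = class_scores.argmax(-1)             # per-pixel class index

# Eq. (2): class-agnostic soft assignment -- weight each region score by its
# validity and take the per-pixel argmax over regions; no threshold needed.
soft_scores = R * v[:, None, None]                  # N x H x W
region_pred = soft_scores.argmax(0)                 # per-pixel region index

# Hard assignment, for contrast: a fixed threshold can leave pixels unassigned.
assigned_somewhere = (R > 0.5).any(0)               # H x W boolean map
```

With soft assignment, every pixel receives exactly one region index, whereas the thresholded hard masks may leave `assigned_somewhere` false at some pixels.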
We believe, and demonstrate, that this is because the semantic-wise labels still contain class information, which introduces the bias of seen classes. Therefore, we present a simple yet effective preprocessing method named mask split. Annotations for semantic segmentation often consist of multiple disconnected regions of the same class in a single mask, which forces the model to predict all pixels from the same class into the same mask, thereby enforcing the embeddings of pixels from the same semantic class to be similar, which causes the bias. To overcome this limitation, we split the disconnected components, as depicted in the Supplementary Material.
Figure 2: Visualisations for hard mask predictions at TPR=95% and per-pixel Out-of-distribution (OOD) scores. We compare the results of SML and ObsNet with our proposed CSL. In the hard mask predictions, white and gray indicate being predicted to be OOD. In the OOD scores, the red and blue intensity values correspond to the magnitudes of the OOD scores above and below the decision boundary, respectively. (d) shows the region proposals from our CSL with scheme 2.

Method    | AUPR | FPR95 | sIoU gt | mean F1
PEBAL     | 49.1 | 40.8  | 38.9    | 14.5
ME        | 85.5 | 15.0  | 49.2    | 28.7
DH        | 78.0 |  9.8  | 54.2    | 31.1
SynBoost  | 56.4 | 61.9  | 34.7    | 10.0
IR        | 52.3 | 25.9  | 39.7    | 12.5
ObsNet    | 75.4 | 26.7  | 44.2    | 45.1
+CSL1     | 79.9 |  7.1  | 46.1    | 50.2
+CSL2     | 80.1 |  7.2  | 46.5    | 50.4
Table 1: Results on SMIYC-AT. AUPR and FPR95 are pixel-level metrics; sIoU gt and mean F1 are component-level metrics.

Method    | AUPR  | FPR95 | sIoU gt | mean F1
ME        | 85.1  | 0.75  | 47.9    | 50.4
DH        | 80.8  | 6.02  | 48.5    | 55.6
SynBoost  | 71.3  | 3.15  | 44.3    | 37.6
RI        | 54.14 | 47.1  | 57.6    | 36.0
IR        | 37.7  | 4.7   | 16.6    |  8.4
DaCUP     | 81.5  | 1.13  | 37.7    | 46.0
+CSL1     | 86.8  | 0.9   | 44.3    | 50.7
+CSL2     | 87.1  | 0.7   | 44.7    | 51.0
Table 2: Results on SMIYC-OT. AUPR and FPR95 are pixel-level metrics; sIoU gt and mean F1 are component-level metrics.
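The mask split step amounts to a connected-component pass over each binary class mask. A minimal pure-Python sketch, assuming 4-connectivity (the paper does not specify the connectivity), splits one semantic mask into one mask per component:

```python
from collections import deque

def mask_split(mask):
    """Split a binary semantic mask (list of lists of 0/1) into one binary
    mask per 4-connected component, mimicking the Mask Split preprocessing."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    components = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp = [[0] * w for _ in range(h)]
                queue = deque([(i, j)])
                seen[i][j] = True
                while queue:                      # BFS flood fill
                    y, x = queue.popleft()
                    comp[y][x] = 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components.append(comp)
    return components

# A single "car" class mask containing two disconnected cars becomes
# two separate training masks.
m = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 1]]
parts = mask_split(m)   # two component masks
```

Each resulting component mask is then used as an independent training target, so pixels of the same class in different components are no longer forced toward identical embeddings.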
This approach reduces the class information and allows the model to predict instances without being biased toward any particular class.

Method    | AUPR  | FPR95 | sIoU gt | mean F1
ME        | 77.90 | 9.70  | 45.90   | 49.92
SynBoost  | 81.71 | 4.64  | 36.83   | 48.72
RI        | 82.93 | 35.75 | 49.21   | 52.25
DaCUP     | 81.37 | 7.36  | 38.34   | 51.24
+CSL1     | 83.07 | 6.88  | 40.43   | 51.57
+CSL2     | 83.41 | 6.92  | 40.89   | 51.36
NFlowJS   | 89.28 | 0.65  | 54.63   | 61.75
+CSL1     | 89.48 | 0.48  | 54.78   | 62.25
+CSL2     | 89.79 | 0.51  | 55.01   | 62.37
Table 3: Results on LAF NoKnown. AUPR and FPR95 are pixel-level metrics; sIoU gt and mean F1 are component-level metrics.

Structure-Constrained Fusion. Intuitively, making predictions at each pixel independently does not benefit from the predictions at other, nearby pixels, which are correlated. To address this, we introduce structure-constrained fusion (SCF), which utilizes structure constraints, interrelating predictions at different pixels, to optimize per-pixel predictions. This helps improve performance on multiple tasks. Our proposed approach leverages soft assignment to generate region proposals R ∈ {0, 1}^{H×W×N}. These proposals, along with the per-pixel distribution D ∈ R^{H×W×C} from scheme 1 or scheme 2, are fed into our proposed SCF. For OOD segmentation, we set C to 1 since the prediction is binary, and we only need to consider the probability of belonging to OOD. For the domain adaptation (DA) and zero-shot semantic segmentation (ZS3) tasks, C is equal to the number of classes. Each D_c ∈ R^{H×W} indicates the per-pixel distribution for class c, where c ∈ {1, ..., C}. We compute the region-wise score as the average of the pixel-wise scores within each region proposal R_n ∈ {0, 1}^{H×W}, where n ∈ {1, ..., N}. Then we combine the region-wise scores and the pixel-wise scores to obtain the hybrid score H using the equation:

H_{c,n} = \frac{\sum_{h,w} D_{c,n} * R_n}{\sum_{h,w} R_n} \times D_{c,n}, \quad n \in \{1, ..., N\},   (3)

where H_{c,n} and D_{c,n} indicate the hybrid score and the per-pixel distribution for class c within region n, and * denotes pixel-wise multiplication.
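A minimal NumPy sketch of Eq. (3) for a single class: the hybrid score is the region-wise mean of the per-pixel scores, multiplied back into those scores. Collapsing the per-region maps into one map by summing assumes disjoint toy regions and is our own simplification, not part of the paper:

```python
import numpy as np

def structure_constrained_fusion(D, R):
    """Eq. (3) sketch. D: (H, W) per-pixel distribution for one class;
    R: (N, H, W) binary region proposals from the soft assignment."""
    out = np.zeros(R.shape, dtype=float)
    for n, Rn in enumerate(R):
        area = Rn.sum()
        if area == 0:
            continue
        region_mean = (D * Rn).sum() / area   # region-wise score of proposal n
        out[n] = region_mean * D * Rn         # hybrid score H_{c,n}
    # Collapse the per-region maps back to one per-pixel map; summing is
    # valid here only because the toy proposals are disjoint.
    return out.sum(0)

# Toy OOD map (C = 1): each pixel's noisy score is rescaled by its
# region's mean score, suppressing isolated high responses.
D = np.array([[0.9, 0.1],
              [0.8, 0.0]])
R = np.stack([np.array([[1, 1], [0, 0]]),
              np.array([[0, 0], [1, 1]])])
fused = structure_constrained_fusion(D, R)
```

A lone high-scoring pixel inside a mostly low-scoring region is pulled down by the region mean, which is the structural smoothing effect the fusion is after.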
Method      | SMIYC (AT)-val: FPR95 / AP / AUROC | SMIYC (AT)-test: FPR95 / AP / AUROC | Road Anomaly: FPR95 / AP / AUROC
SML†        | 51.0 / 47.7 / 81.7 | 43.33 / 44.68 / 86.57 | 49.63 / 25.71 / 81.90
+CSL2       | 22.0↑29 / 55.4↑7.8 / 88.4↑6.7 | 39.7↑3.7 / 47.2↑2.6 / 87.5↑0.86 | 41.03↑8.60 / 31.78↑6.07 / 84.77↑2.87
IR†         | - | 32.17 / 49.36 / 87.03 | 69.79 / 33.43 / 79.92
+CSL2       | - | 21.7↑10.4 / 54.5↑5.1 / 89.3↑2.3 | 59.16↑10.63 / 35.48↑2.05 / 82.45↑2.53
ObsNet†     | 40.2 / 72.7 / 91.9 | 61.73 / 56.91 / 86.22 | 64.25 / 48.13 / 83.18
+CSL2       | 25.0↑15.2 / 75.5↑2.8 / 95.2↑3.3 | 32.1↑29.6 / 64.8↑7.8 / 92.24↑6.02 | 47.21↑17.04 / 53.24↑5.11 / 86.92↑3.74
ObsNet v2†  | 30.3 / 74.5 / 93.7 | 26.69 / 75.44 / 93.80 | 55.75 / 54.64 / 86.78
+CSL2       | 5.8↑24.5 / 83.6↑9.1 / 97.4↑3.7 | 7.16↑19.5 / 80.1↑4.6 / 96.46↑2.66 | 43.80↑11.95 / 61.38↑6.74 / 91.08↑4.3
Table 4: Quantitative results on SMIYC (Anomaly Track) and Road Anomaly. We show the results obtained by combining CSL2 with 3 well-established OOD segmentation methods (indicated by †). The best results are highlighted in bold.

Method     | COCO-stuff: mIoU(S) / mIoU(U) / hIoU | PASCAL VOC 2012: mIoU(S) / mIoU(U) / hIoU
ZegFormer  | 36.6 / 33.2 / 34.8 | 86.4 / 63.6 / 73.3
+CSL2      | 37.5↑0.9 / 36.2↑3 / 36.9↑2.1 | 87.1↑0.7 / 68.6↑5 / 76.9↑3.6
ZSSeg      | 39.3 / 36.3 / 37.8 | 83.5 / 72.5 / 77.5
+CSL2      | 40.1↑0.8 / 38.3↑2 / 39.2↑1.4 | 84.7↑1.2 / 76.9↑4.4 / 80.6↑3.1
ZegCLIP    | 40.2 / 41.4 / 40.8 | 91.9 / 77.8 / 84.3
+CSL∗2     | 40.4↑0.2 / 42.8↑1.4 / 41.6↑0.8 | 92.3↑0.4 / 79.4↑1.6 / 85.5↑1.2
Table 5: Quantitative results for ZS3 on COCO-stuff and PASCAL VOC benchmarks. The "mIoU(S)", "mIoU(U)", and "hIoU" denote the mIoU of seen classes, unseen classes, and their harmonic mean. "ST" and "RT" denote self-training and re-training.
Experimental Results
Experimental Setup
In all our experiments1, we utilize ResNet50 as the backbone and FPN as the pixel decoder. All experiments for OOD segmentation are performed without any OOD data. In the DA and ZS3 experiments, we use the same training data as the comparative methods. CSL1 and CSL2 represent schemes 1 and 2, respectively; CSL2 does not require retraining.
Additional details and results for the benchmarks and implementation can be found in the supplementary material, and we plan to make the source code publicly available.
Out-Of-Distribution Segmentation
In the context of OOD segmentation, Cityscapes (Cordts et al. 2016), including 19 seen classes, is used as the training set, while OOD images containing classes beyond the seen ones are utilized for testing purposes. Several approaches leverage OOD images with ground-truth labels from larger datasets to enrich the training set, which overlaps with the OOD classes in the test set. Thus, to ensure fairness, all methods are differentiated based on the usage of OOD data, and our proposed CSL is free of OOD data.
Comparison with SOTA Methods
Tables 1-4 show our results compared with existing methods on the SMIYC (Chan et al. 2021) Anomaly Track, Obstacle Track, LostAndFound-NoKnown (Pinggera et al. 2016), and Road Anomaly (Lis et al. 2019). There are 5 metrics for evaluation: (a) pixel-wise area under the precision-recall curve (AUPR), (b) pixel-wise false positive rate at a true positive rate of 95% (FPR95), (c) adjusted Intersection over Union averaged over all ground truth segmentation components (sIoU gt), (d) component-wise F1-score averaged over different detection thresholds (mean F1), and (e) area under the receiver operating characteristics (AUROC).
1Except for experiments marked with ∗, which use ResNet100 as the backbone.
SMIYC (Anomaly Track) consists of real-world images, where each image may contain multiple OOD samples of different sizes from various categories. In SMIYC (AT), our proposed approach CSL outperforms all methods without OOD data by a substantial margin when based on ObsNet; e.g., CSL surpasses the previous state-of-the-art method ObsNet (Besnier et al. 2021) by 4.7%, 19.5%, 2.3%, and 5.3% in AUPR, FPR95, sIoU gt, and mean F1.
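As an illustration of the pixel-level metric above, here is a minimal sketch of FPR95 under one common convention (higher score means "more OOD"; benchmark implementations may interpolate thresholds differently):

```python
import numpy as np

def fpr_at_95_tpr(scores, labels):
    """False-positive rate at the score threshold that still detects 95%
    of the true OOD pixels (labels == 1)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = np.sort(scores[labels == 1])
    # Smallest threshold keeping at least 95% of positives at or above it.
    thr = pos[int(np.floor(0.05 * len(pos)))]
    neg = scores[labels == 0]
    return float((neg >= thr).mean())

# 20 OOD pixels: 19 scored 0.9 and one missed at 0.1, so a threshold of 0.9
# yields exactly 95% TPR; 1 of the 4 ID pixels scores >= 0.9.
scores = [0.9] * 19 + [0.1] + [0.5, 0.95, 0.2, 0.3]
labels = [1] * 20 + [0] * 4
fpr95 = fpr_at_95_tpr(scores, labels)
```

A low FPR95 means few in-distribution pixels are flagged once the detector is tuned to catch 95% of the anomalous ones.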
CSL even reaches state-of-the-art performance in terms of FPR95 and mean F1 across all methods, including those leveraging OOD data. SMIYC (Obstacle Track) focuses on evaluating the ability to detect small obstacles on the road. In SMIYC (OT), CSL improves on the former approach DaCUP (Vojíř and Matas 2023) by 5.6%, 0.43%, 7%, and 5% in AUPR, FPR95, sIoU gt, and mean F1, and achieves state-of-the-art performance among all approaches in AUPR, FPR95, and mean F1. LostAndFound NoKnown also focuses on evaluating the ability to detect small obstacles on the road; CSL improves on the former approach DaCUP (Vojíř and Matas 2023) by 2.04%, 0.44%, 2.55%, and 0.12% in AUPR, FPR95, sIoU gt, and mean F1, and achieves state-of-the-art performance when combined with NFlowJS (Grcić, Bevandić, and Šegvić 2021). Road Anomaly has a similar setting to SMIYC (AT). As shown in Table 4, CSL achieves state-of-the-art performance among all methods, including those with OOD data, by improving the performance of ObsNet (Besnier et al. 2021) by 6.74%, 11.59%, and 4.3% in AP, FPR95, and AUROC.

Classes (left to right): road, sidewalk, building, wall, fence, pole, light, sign, vegetation, terrain, sky, person, rider, car, truck, bus, train, motorcycle, bicycle | mIoU
Source Only | 79 39 75 26 25 34 34 39 82 18 84 58 37 70 19 15 5 22 54 | 43
+CSL2       | 82 36 78 28 29 40 45 48 83 25 81 68 45 81 24 20 7 24 57 | 47↑4
AdvEnt      | 94 59 85 28 26 38 43 43 86 28 89 61 36 87 32 46 25 25 57 | 52
+CSL2       | 94 60 85 28 35 45 48 50 86 28 89 65 46 87 38 49 32 24 59 | 56↑4
DAFormer    | 96 73 89 40 44 49 53 60 58 49 91 71 45 91 75 77 64 55 61 | 65
+CSL∗2      | 96 74 90 51 48 52 56 65 89 48 91 76 45 93 77 80 68 56 66 | 70↑4
Table 6: Quantitative results for domain adaptation on the Synscapes2Cityscapes benchmark, where the source domain is the synthetic city scenes dataset (Synscapes) and the target domain is a real-world city scenes dataset (Cityscapes).
Combination with Existing Methods without Retraining
We combine CSL with three existing OOD segmentation methods (SML, Image Resynthesis, and ObsNet) using scheme 2, which does not require retraining, and compare their performance in Table 4. Note that the performance of ObsNet is affected by the input size; therefore, we use ObsNet and ObsNet v2 to denote the experiments with the original image size and with a fixed smaller image size, i.e., 512 × 1024, respectively. CSL outperformed all other methods in both benchmarks and even surpassed methods that use OOD data. Some methods achieved a decent AP but a poor FPR95 due to the difficulty of extracting OOD samples. ObsNet and ObsNet v2 achieved a high FPR95 in SMIYC (AT)-test, but our CSL significantly reduced it by 29.57 and 22.57, respectively. Figure 2 visually compares our proposed CSL with existing OOD segmentation methods, where we use ObsNet v2 to represent ObsNet due to its better performance. SML struggles to get acceptable results, and ObsNet produces a decent AP but fails to achieve high recall with low FPR, as shown in (e). In contrast, CSL demonstrates robustness to OOD samples, as shown in (e).
Zero-Shot Semantic Segmentation
Table 5 presents a comparison of our proposed CSL method with previous state-of-the-art zero-shot semantic segmentation methods. We adopt scheme 2 to integrate CSL with existing methods, primarily due to its reduced computational cost (see the Scheme1 vs Scheme2 section). CSL outperforms ZegFormer, ZSSeg (Xu et al. 2022), and ZegCLIP by 0.9%, 0.8%, and 0.2% on seen classes, 3%, 2%, and 1.4% on unseen classes, and 2.1%, 1.4%, and 0.8% in harmonic mean on the COCO-stuff benchmark, and outperforms those three methods by 5%, 4.4%, and 1.6% on unseen classes on the PASCAL VOC 2012 benchmark. The experiment follows the same setting as ZegFormer, using 156 classes for training and testing on all 171 classes of the COCO-stuff dataset.
Domain Adaptation in Semantic Segmentation
The CSL approach demonstrates superior performance not only on out-of-distribution (OOD) samples but also on in-distribution (ID) samples with domain gaps. Notably, our method achieves excellent results on the Synscapes2Cityscapes benchmark, as reported in Table 6. In these experiments, we use Synscapes, a synthetic city scene dataset, as the source domain, and Cityscapes, a real-world city scene dataset, as the target domain. We again choose scheme 2 to integrate CSL with existing methods. CSL boosts source-only by 4.02%, AdvEnt (Vu et al. 2019) by 3.5%, and DAFormer (Hoyer, Dai, and Van Gool 2022) by 4.1%.
Ablations
Negative Impact from Class Information
Traditional methods for semantic segmentation assign each pixel of an input image to one of the prior semantic classes. However, this approach cannot handle OOD samples. The CA-RPG method instead assigns each pixel to N class-agnostic region prototypes, which learn more fundamental features that can represent both ID and OOD samples. Mask2Former and ZegFormer also use a query-based framework, but introducing class supervision destroys the ability to perform OOD segmentation: the classification loss on ID classes causes the region prototypes to distribute within the subspace of ID classes, which makes it difficult to represent OOD classes effectively. In Figure 4, we can see the results of using the None-CA approach versus the CA-training approach on an image of a skier. The embeddings for the skier and background are not easily separable with the None-CA approach, while the CA-training approach allows for a better representation of both classes.
CA Training and Soft Assignment
Quantitative results in Table 7 demonstrate the effectiveness of CA training and our proposed soft assignment (SA). Before evaluation, we count the ground-truth labels of all pixels in each region proposal and select the label with the highest frequency as the class of the entire region.
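The majority-label assignment just described can be sketched as:

```python
from collections import Counter

def majority_label(region_pixels, gt):
    """Assign a class-agnostic region proposal the most frequent ground-truth
    label among its pixels, so standard semantic-segmentation metrics apply."""
    counts = Counter(gt[y][x] for (y, x) in region_pixels)
    return counts.most_common(1)[0][0]

# Toy ground truth and a proposal covering four pixels, three of them "road".
gt = [["road", "road", "sky"],
      ["road", "car",  "sky"]]
region = [(0, 0), (0, 1), (1, 0), (1, 1)]
label = majority_label(region, gt)
```

Once every proposal carries such a label, the class-agnostic output can be scored with mIoU-style metrics exactly like an ordinary semantic segmentation.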
This post-processing method is proposed in SMIYC (Chan et al. 2021) and allows us to use the same evaluation criteria (mIoU, fwIoU, mACC, and pACC) as semantic segmentation to assess the quality of region proposals. We present a comparison of three approaches for training a region proposal generator: None-CA, which employs binary mask and classification losses; CA+HA, which employs CA training and hard assignment during inference; and CA+SA, our proposed approach, which combines CA training with soft assignment during inference. The model is trained on Cityscapes and tested on COCO-stuff, where ID represents the seen classes from Cityscapes, and OOD represents the classes in COCO but not in Cityscapes. Results in Table 7 demonstrate that both CA training and soft assignment significantly improve performance across all metrics.
Figure 3: Visualisations of the efficacy of CA-training and soft assignment. CA+HA represents CA-M2F, where the model is trained in a class-agnostic way and inferences via the hard assignment; None-CA represents a model trained with the class loss that inferences via the soft assignment; and CA+SA represents CSL, where the model is trained in a class-agnostic way and inferences via the soft assignment. Note that the model is only trained on Cityscapes and tested on COCO-stuff.

          | None-CA | CA+HA       | CA+SA
mIoU      | 45.99   | 49.17 ↑3.18 | 51.34 ↑5.35
ID-mIoU   | 68.44   | 76.35 ↑7.91 | 77.20 ↑8.76
OOD-mIoU  | 43.83   | 46.56 ↑2.73 | 48.85 ↑5.02
mACC      | 58.46   | 60.31 ↑1.85 | 63.10 ↑4.64
Table 7: Ablation study of CA-training and soft assignment. The model is trained on Cityscapes-train and tested on COCO-stuff.

Figure 3 visually illustrates the improvement. None-CA fails on most unseen objects, while CA+HA produces decent results but struggles with challenging cases such as indoor scenes, multiple animals, and small accessories.
Soft assignment overcomes the limitations of hard assignment by assigning regions pixel-wise, providing more refined segmentation results.
Comparison with Segment Anything Model
A notable contribution of CSL is its capability to segment out-of-distribution (OOD) objects without relying on any OOD data, utilizing only minimal training data. In this section, we employ a foundation model, SAM (Kirillov et al. 2023), which is used by many recent works (Zhang, Li, and Ahuja 2023), to produce CA region proposals and subsequently integrate it with ObsNet (Besnier et al. 2021), DaCUP (Vojíř and Matas 2023), and NFlowJS (Grcić, Bevandić, and Šegvić 2021) on the SMIYC-AT, OT, and LAF NoKnown benchmarks. This approach yields an improvement of 1.7% in AUPR and 0.3% in FPR95 on average across those three benchmarks compared with CSL in Tables 1, 2, and 3, which demonstrates that the quality of the CA region proposals generated by CSL is satisfactory, even in the absence of any OOD data. We believe the constraining factor influencing the outcome is the classification accuracy of each region, rather than the segmentation quality. More results are shown in the Appendix.
Figure 4: Embedding visualisations of Figure 3-(e) by t-SNE. We plot the region prototypes as red × symbols, the per-pixel embeddings of the background as green bullets, and those of the skier as blue bullets. The sizes of the prototypes indicate the validity scores.
Scheme1 vs Scheme2
In Tables 1, 2, and 3, we present results for schemes 1 and 2. While scheme 1 trails by approximately 0.3% in AUPR relative to scheme 2, the results on FPR95 display a mix of advantages for both methods. Notably, scheme 2 demonstrates efficiency in training, requiring half the iterations to match the performance of scheme 1. For context, in our integration experiments with ZegCLIP (Zhou et al.
2023) on the COCO-stuff benchmark, scheme 1 demanded around 50K iterations to achieve satisfactory results, whereas scheme 2 reached similar benchmarks in just 25K iterations. However, another key consideration is the inference time. Scheme 1, being an end-to-end solution, is more efficient during inference: in our evaluations, scheme 2 took 33% longer on average across all conducted experiments.
Conclusion
This paper presents the Class-Agnostic Structure-Constrained Learning (CSL) method for addressing the challenge of segmenting the unseen. CSL provides two different schemes, which can be utilized as an end-to-end framework or integrated with existing methods without retraining. Our experimental results demonstrate that CSL outperforms existing state-of-the-art methods across three challenging tasks. Moreover, we have provided an analysis of the reasons behind the effectiveness of our proposed method. We believe that the ability of CSL to learn about classes not seen during training, by eliciting class-agnostic information from the ID images, is a crucial factor contributing to its superior performance. Overall, CSL provides a promising solution for segmenting the unseen, and we hope our work will lead to other related work in this area.
Acknowledgements
The support of the Office of Naval Research under grant N00014-20-1-2444 and of the USDA National Institute of Food and Agriculture under grant 2020-67021-32799/1024178 is gratefully acknowledged.
References
Baur, C.; Wiestler, B.; Albarqouni, S.; and Navab, N. 2019. Deep autoencoding models for unsupervised anomaly segmentation in brain MR images. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 4th International Workshop, BrainLes 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Revised Selected Papers, Part I 4, 161–169. Springer.
Besnier, V.; Bursuc, A.; Picard, D.; and Briot, A. 2021.
Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15701–15710.
Bevandić, P.; Krešo, I.; Oršić, M.; and Šegvić, S. 2018. Discriminative out-of-distribution detection for semantic segmentation. arXiv preprint arXiv:1808.07703.
Bevandić, P.; Krešo, I.; Oršić, M.; and Šegvić, S. 2019. Simultaneous semantic segmentation and outlier detection in presence of domain shift. In Pattern Recognition: 41st DAGM German Conference, DAGM GCPR 2019, Dortmund, Germany, September 10–13, 2019, Proceedings 41, 33–47. Springer.
Chan, R.; Lis, K.; Uhlemeyer, S.; Blum, H.; Honari, S.; Siegwart, R.; Fua, P.; Salzmann, M.; and Rottmann, M. 2021. SegmentMeIfYouCan: A benchmark for anomaly segmentation. arXiv preprint arXiv:2104.14812.
Chan, R.; Rottmann, M.; and Gottschalk, H. 2021a. Entropy maximization and meta classification for out-of-distribution detection in semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5128–5137.
Chan, R.; Rottmann, M.; and Gottschalk, H. 2021b. Entropy maximization and meta classification for out-of-distribution detection in semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5128–5137.
Cheng, B.; Choudhuri, A.; Misra, I.; Kirillov, A.; Girdhar, R.; and Schwing, A. G. 2021. Mask2Former for video instance segmentation. arXiv preprint arXiv:2112.10764.
Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3213–3223.
Creusot, C.; and Munawar, A. 2015. Real-time small obstacle detection on highways using compressive RBM road reconstruction. In 2015 IEEE Intelligent Vehicles Symposium (IV), 162–167.
Di Biase, G.; Blum, H.; Siegwart, R.; and Cadena, C. 2021. Pixel-wise anomaly detection in complex driving scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16918–16927. Ding, J.; Xue, N.; Xia, G.-S.; and Dai, D. 2022. Decoupling zero-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11583–11592. Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Grcić, M.; Bevandić, P.; and Šegvić, S. 2021. Dense anomaly detection by robust learning on synthetic negative data. arXiv preprint arXiv:2112.12833. Grcić, M.; Šarić, J.; and Šegvić, S. 2023. On Advantages of Mask-level Recognition for Outlier-aware Segmentation. arXiv:2301.03407. Haldimann, D.; Blum, H.; Siegwart, R.; and Cadena, C. 2019. This is not what I imagined: Error detection for semantic segmentation through visual dissimilarity. arXiv preprint arXiv:1909.00676. Hendrycks, D.; and Gimpel, K. 2016. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136. Hendrycks, D.; Mazeika, M.; and Dietterich, T. 2018. Deep anomaly detection with outlier exposure. arXiv preprint arXiv:1812.04606. Hoyer, L.; Dai, D.; and Van Gool, L. 2022. Daformer: Improving network architectures and training strategies for domain-adaptive semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9924–9935. Jung, S.; Lee, J.; Gwak, D.; Choi, S.; and Choo, J. 2021. Standardized max logits: A simple yet effective approach for identifying unexpected road obstacles in urban-scene segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15425–15434.
Kang, B.; Kwak, J.; and Kang, S.-J. 2022. Anomaly Segmentation Using Class-aware Erosion and Smoothing. In 2022 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), 1–4. Kendall, A.; and Gal, Y. 2017. What uncertainties do we need in Bayesian deep learning for computer vision? Advances in Neural Information Processing Systems, 30. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.-Y.; et al. 2023. Segment anything. arXiv preprint arXiv:2304.02643. Lakshminarayanan, B.; Pritzel, A.; and Blundell, C. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in Neural Information Processing Systems, 30. Lee, K.; Lee, H.; Lee, K.; and Shin, J. 2017. Training confidence-calibrated classifiers for detecting out-of-distribution samples. arXiv preprint arXiv:1711.09325. Liang, S.; Li, Y.; and Srikant, R. 2017. Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv preprint arXiv:1706.02690. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, 740–755. Lis, K.; Honari, S.; Fua, P.; and Salzmann, M. 2020. Detecting road obstacles by erasing them. arXiv preprint arXiv:2012.13633. Lis, K.; Nakka, K.; Fua, P.; and Salzmann, M. 2019. Detecting the unexpected via image resynthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2152–2161. Liu, Y.; Ding, C.; Tian, Y.; Pang, G.; Belagiannis, V.; Reid, I.; and Carneiro, G. 2022. Residual Pattern Learning for Pixel-wise Out-of-Distribution Detection in Semantic Segmentation. arXiv preprint arXiv:2211.14512. Liu, Y.; Tian, Y.; Maicas, G.; Pu, L. Z. C. T.; Singh, R.; Verjans, J. W.; and Carneiro, G. 2020.
Photoshopping colonoscopy video frames. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), 1–5. Ma, X.; Li, B.; Wang, Y.; Erfani, S. M.; Wijewickrema, S.; Schoenebeck, G.; Song, D.; Houle, M. E.; and Bailey, J. 2018. Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613. Mukhoti, J.; and Gal, Y. 2018. Evaluating Bayesian deep learning methods for semantic segmentation. arXiv preprint arXiv:1811.12709. Nayal, N.; Yavuz, M.; Henriques, J. F.; and Güney, F. 2023. RbA: Segmenting Unknown Regions Rejected by All. arXiv:2211.14293. Ohgushi, T.; Horiguchi, K.; and Yamanaka, M. 2020. Road obstacle detection method based on an autoencoder with semantic segmentation. In Proceedings of the Asian Conference on Computer Vision. Pinggera, P.; Ramos, S.; Gehrig, S.; Franke, U.; Rother, C.; and Mester, R. 2016. Lost and found: detecting small road hazards for self-driving vehicles. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1099–1106. Qi, L.; Kuen, J.; Wang, Y.; Gu, J.; Zhao, H.; Torr, P.; Lin, Z.; and Jia, J. 2022. Open World Entity Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. Tian, Y.; Liu, Y.; Pang, G.; Liu, F.; Chen, Y.; and Carneiro, G. 2022. Pixel-wise energy-biased abstention learning for anomaly segmentation on complex urban driving scenes. In European Conference on Computer Vision, 246–263. Tian, Y.; Pang, G.; Chen, Y.; Singh, R.; Verjans, J. W.; and Carneiro, G. 2021. Weakly-supervised video anomaly detection with robust temporal feature magnitude learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4975–4986.
Vandenhende, S.; Georgoulis, S.; Proesmans, M.; Dai, D.; and Van Gool, L. 2020. Revisiting multi-task learning in the deep learning era. arXiv preprint arXiv:2004.13379, 2(3). Vojíř, T.; and Matas, J. 2023. Image-Consistent Detection of Road Anomalies As Unpredictable Patches. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 5491–5500. Vojíř, T.; Šipka, T.; Aljundi, R.; Chumerin, N.; Reino, D. O.; and Matas, J. 2021. Road anomaly detection by partial image reconstruction with segmentation coupling. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15651–15660. Vu, T.-H.; Jain, H.; Bucher, M.; Cord, M.; and Pérez, P. 2019. Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2517–2526. Xia, Y.; Zhang, Y.; Liu, F.; Shen, W.; and Yuille, A. L. 2020. Synthesize then compare: Detecting failures and anomalies for semantic segmentation. In European Conference on Computer Vision, 145–161. Springer. Xu, M.; Zhang, Z.; Wei, F.; Lin, Y.; Cao, Y.; Hu, H.; and Bai, X. 2022. A simple baseline for open-vocabulary semantic segmentation with pre-trained vision-language model. In European Conference on Computer Vision, 736–753. Springer. Zhang, H.; Li, F.; and Ahuja, N. 2023. Open-NeRF: Towards Open Vocabulary NeRF Decomposition. arXiv preprint arXiv:2310.16383. Zhang, H.; and Zhang, R. 2022. Active domain adaptation with multi-level contrastive units for semantic segmentation. In Proceedings of the Asian Conference on Computer Vision, 1640–1657. Zhou, B.; Zhao, H.; Puig, X.; Xiao, T.; Fidler, S.; Barriuso, A.; and Torralba, A. 2019. Semantic understanding of scenes through the ADE20K dataset. International Journal of Computer Vision, 127(3): 302–321. Zhou, Z.; Lei, Y.; Zhang, B.; Liu, L.; and Liu, Y. 2023. Zegclip: Towards adapting CLIP for zero-shot semantic segmentation.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11175–11185.
A Robust Mutual-Reinforcing Framework for 3D Multi-Modal Medical Image Fusion Based on Visual-Semantic Consistency Hao Zhang1*, Xuhui Zuo1*, Huabing Zhou2, Tao Lu2, Jiayi Ma1† 1Electronic Information School, Wuhan University, Wuhan 430072, China 2School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China zhpersonalbox@gmail.com, zuoxh2001@163.com, zhouhuabing@gmail.com, lutxyl@gmail.com, jyma2010@gmail.com Abstract This work proposes a robust 3D medical image fusion framework that establishes a mutual-reinforcing mechanism between visual fusion and lesion segmentation, improving both simultaneously. Specifically, we explore the consistency between vision and semantics by sharing feature fusion modules. Through the coupled optimization of the visual fusion loss and the lesion segmentation loss, visual-related and semantic-related features are pulled into the same domain, effectively promoting accuracy in a mutual-reinforcing manner. Further, we establish robustness guarantees by constructing a two-level refinement constraint in the process of feature extraction and reconstruction. Benefiting from full consideration of common degradations in medical images, our framework not only provides clear visual fusion results for a doctor's observation, but also enhances the robustness of lesion segmentation against these degradations. Extensive evaluations on visual fusion and lesion segmentation scenarios demonstrate the advantages of our method in terms of accuracy and robustness. Moreover, our proposed framework is generic: it is compatible with existing lesion segmentation algorithms and improves their performance. The code is publicly available at https://github.com/HaoZhang1018/RMR-Fusion. Introduction Multi-modal medical image fusion aims to combine the different-attribute information of the body, giving visually more informative fused results (Xu and Ma 2021; Tang et al.
2022) or locating the lesions (Zhou et al. 2022; Fang and Wang 2022) more accurately. According to its intended use, the broad concept of multi-modal medical image fusion can be categorized into two specific types. i) Visual fusion (Zhang et al. 2020a; Li et al. 2023). Its target audience is the medical doctor: it integrates multi-modal medical images into a single image or cube data, aiming to provide high-quality visual results with sufficient tissue structure information and significant functional pathological distribution. Then, doctors can observe the generated visual fusion results to make a diagnosis with the support of long-term accumulated experience. ii) Semantic fusion. It serves intelligent medical diagnostic machines, conducting lesion analysis by combining multi-modal medical images from the semantic perspective. Lesion segmentation is a typical semantic fusion technology, which is dedicated to pixel-level localization and classification of lesions (Zhang et al. 2023). Without loss of generality, we continue the following discussion using lesion segmentation as a representative of semantic fusion. In recent years, many deep multi-modal medical image fusion methods have been proposed to solve the problems of visual fusion (Ma et al. 2020) and lesion segmentation (Li et al. 2020). However, most of these methods treat them as two separate issues, as shown in Fig. 1 (a). *These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Existing methods treating visual fusion and lesion segmentation as (a) separate issues, and our proposed (b) mutual-reinforcing framework.
For these methods, the consistent way to achieve performance improvements is to design better distance measures (Xue et al. 2021) for optimization guidance and deeper network architectures (Dolz et al. 2019) for nonlinear fitting within their own domain of knowledge. Nevertheless, without introducing new priors for model solving, performance bottlenecks in both visual fusion and lesion segmentation will inevitably arise under such a separate paradigm. In addition, the rigor of the medical field requires that related methods must be quite tolerant of image degradations. Otherwise, a slight perturbation to the source images may lead to serious medical accidents. Unfortunately, few existing multi-modal medical image fusion methods (Xu et al. 2022b; Liu et al. 2022b) consider the factor of image degradations, which means low robustness and reliability in real scenes. The above two challenges of existing methods can be summarized as accuracy bottleneck and robustness limitation. To break the accuracy bottleneck and establish robustness guarantees for visual fusion and lesion segmentation, we propose a robust mutual-reinforcing framework for 3D multi-modal medical image fusion, improving their performance by exploring visual-semantic consistency. First, we consider the improvement of accuracy, which relies on better optimization for these two tasks. Based on the visual-semantic consistency, we establish a mutual-reinforcing mechanism between visual fusion and lesion segmentation, which are new solution priors for each other, as shown in Fig. 1 (b). Figure 2: The proposed framework can facilitate better loss optimization than the separate manner. (a) Curves of visual fusion loss. (b) Curves of lesion segmentation loss.
It involves a philosophical question of whether intelligent machines and doctors make diagnoses on the same or positively related basic features. Considering that some studies (Tang, Yuan, and Ma 2022; Zhu et al. 2021) have demonstrated the unidirectional facilitation of semantics and vision, we further think that visual features and semantic features can be unified into one domain to some extent, which is proved by the subsequent experiments. Therefore, we use a shared feature fusion module to connect the visual head and the semantic head, and couple the optimization of the visual fusion loss and lesion segmentation loss. As a result, the models of visual fusion and lesion segmentation can be optimized better, as shown in Fig. 2. Second, we establish robustness guarantees for our fusion framework, increasing its tolerance to various degradations. Specifically, we design a powerful two-branch autoencoder using the Swin Transformer for feature extraction and reconstruction, in which three types of negative samples containing blur, noise, and structure loss are considered. Under the two-level refinement constraints in feature and image spaces, our method can effectively defend against various degradations, ensuring the performance of visual fusion and lesion segmentation. The major contributions of this work are summarized as follows: i) We propose a novel 3D fusion framework, which can be used as a general architecture to break the performance bottleneck of visual fusion and lesion segmentation. ii) A new idea that considers the consistency between vision and semantics is explored, which derives a mutual-reinforcing mechanism to provide additional solution priors for visual fusion and lesion segmentation by integrating visual-related and semantic-related features into a unified domain. iii) We develop an autoencoder with strict two-level refinement constraints, greatly improving the robustness of our framework to common medical image degradations.
iv) Experiments on visual fusion and lesion segmentation scenarios demonstrate the advantages of our method in terms of accuracy and robustness. Related Work Deep Multi-modal Medical Visual Fusion. Deep learning technology has driven great progress in multi-modal medical visual fusion. Early deep methods (Yin et al. 2019; Lahoud and Süsstrunk 2019; Liu et al. 2017) only handle sub-parts of the visual fusion pipeline, e.g., feature fusion. However, these methods cannot fully release the ability of the neural network, and the other hand-crafted parts still limit the performance of visual fusion. Realizing this limitation, researchers started to develop end-to-end visual fusion models based on various network architectures. Notably, since there is no ground truth, these methods (Zhang et al. 2020a; Zhang and Ma 2021; Zhou et al. 2020b; Tang et al. 2022) have to preserve useful information based on perceptual preference. Nevertheless, these methods essentially still follow the conventional paradigm of constructing similarity constraints between source images and the fused image, and do not introduce a new solution prior. Therefore, the performance bottleneck of visual fusion is still difficult to break. Besides, most visual fusion methods can only deal with two-dimensional slice images and cannot be directly applied to real medical volumetric images, limiting their practical application value. Deep Multi-modal Lesion Segmentation. Depending on whether segmentation labels are used, existing deep methods for multi-modal lesion segmentation can be classified into supervised (Sun et al. 2020; Zhou et al. 2020a) and unsupervised (Wu et al. 2021; Lu, Zheng, and Gupta 2022). Currently, supervised methods are still the mainstream in the community, as they usually achieve better performance than unsupervised ones due to the explicit optimization constraints provided by labels.
Because of the explicit labels, the loss function in supervised methods is generally fixed as the distance between the prediction and the label. Therefore, supervised methods generally increase the segmentation accuracy by designing better network structures. Typically, the improvement of the network structure is carried out along two dimensions, i.e., the communication between multi-modal features, and the interaction between shallow and deep features (Hatamizadeh et al. 2022a; Dolz et al. 2019; Liu et al. 2020). Notably, these methods only rely on the conventional supervised loss, lacking the development of new regularization terms that are helpful for solving. Thus, a natural optimization bottleneck prevents further improvement of segmentation accuracy (Ding et al. 2022; Li et al. 2019). Besides, complicating and deepening the networks will lead to a demand for larger training data, which is not friendly to the medical field with its scarce data. Figure 3: Overall pipeline of our proposed framework. {X, Y} and {X^D, Y^D} are paired clean and degraded multi-modal images, {Ψ_X, Ψ_Y, Ψ^D_X, Ψ^D_Y} are the extracted and purified features, {X̃, Ỹ, X̃^D, Ỹ^D} are the multi-modal images reconstructed by the shared autoencoder, and F and S are the produced visual fusion and lesion segmentation results. Proposed Method To address the challenges mentioned earlier, we propose to explore the potential consistency between visual perception (Huang et al. 2020) and semantic decision (Hatamizadeh et al. 2022a).
Therefore, a robust 3D fusion framework is derived to establish a mutual-reinforcing mechanism between visual fusion and lesion segmentation. The overall procedure is shown in Fig. 3, which consists of two stages. First, an autoencoder is trained, which drives the encoder to fully extract complete features from multi-modal medical images, and constrains the decoder to perform good visual reconstruction. Its built-in refinement of degraded images makes our framework robust to various degradations. Second, we freeze the parameters of the encoder and decoder, and introduce a fusion module to integrate multi-modal features. Then, the visual head and semantic head achieve visual fusion and lesion segmentation based on the shared fused features. Benefiting from the coupled optimization of the visual fusion loss and lesion segmentation loss, the fusion module can integrate visual-related features and semantic-related features into a unified domain, establishing visual-semantic consistency to promote their performance in a mutual-reinforcing way. Degradation-robust Autoencoder According to the function setting, the role of our proposed degradation-robust autoencoder includes two aspects. First, it should be effectively trained to achieve complete feature extraction and good visual reconstruction in the absence of explicit supervision by a visual ground truth, which is the basis for the later implementation of high-quality visual fusion and lesion segmentation. Second, during the encoding-decoding process, it must be able to effectively filter out various degradation factors contained in the medical images, ensuring the robustness of our proposed framework. For the first goal, the self-supervised constraints of the autoencoder can naturally guide the effective feature extraction and visual reconstruction process.
As for the second goal, we introduce a data augmentation strategy and construct two-level refinement constraints in the feature and image spaces, suppressing the transfer expression of degradation factors. Therefore, we give the specific architecture of the degradation-robust autoencoder, as shown in Fig. 4. Figure 4: Architecture of the degradation-robust autoencoder. Formally, we first construct negative samples {X^D, Y^D} by adding three common degradation factors to the clean multi-modal images {X, Y}, including noise, blur, and structure loss. Then, two non-shared homogeneous encoders are used to implement feature extraction, obtaining the mapped features: {Ψ_X, Ψ_Y} = {EN_X(X), EN_Y(Y)}, {Ψ^D_X, Ψ^D_Y} = {EN_X(X^D), EN_Y(Y^D)}, (1) where EN_X(·) and EN_Y(·) are the functions of the encoders, and {Ψ_X, Ψ_Y} and {Ψ^D_X, Ψ^D_Y} are features from the clean {X, Y} and dirty {X^D, Y^D}, respectively. Subsequently, a shared decoder is adopted to implement visual reconstruction, mapping {Ψ_X, Ψ_Y, Ψ^D_X, Ψ^D_Y} from the feature space to the image space. The reconstruction process is formulated as: {X̃, Ỹ, X̃^D, Ỹ^D} = DE({Ψ_X, Ψ_Y, Ψ^D_X, Ψ^D_Y}), (2) where DE(·) indicates the function of the shared decoder, and {X̃, Ỹ, X̃^D, Ỹ^D} are the reconstructed images. To efficiently train our proposed autoencoder and filter out the contained degradations, we propose a new two-level refinement constraint.
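The encoding-decoding flow of Eqs. (1)-(2) can be sketched in a few lines. The linear maps, shapes, and weights below are illustrative stand-ins, not the paper's actual 3D CNN and Swin Transformer modules; only the data flow of the two equations is shown.

```python
import numpy as np

# Hypothetical fixed weights standing in for the trained modules.
rng = np.random.default_rng(0)
W_x = 0.1 * rng.standard_normal((8, 8))   # encoder X weights (toy)
W_y = 0.1 * rng.standard_normal((8, 8))   # encoder Y weights (toy)
W_d = 0.1 * rng.standard_normal((8, 8))   # shared decoder weights (toy)

def EN_X(v): return np.tanh(v @ W_x)      # Psi_X = EN_X(X)
def EN_Y(v): return np.tanh(v @ W_y)      # Psi_Y = EN_Y(Y)
def DE(psi): return psi @ W_d             # shared decoder DE, Eq. (2)

X = rng.standard_normal((4, 8))           # clean modality X (e.g. MRI T1)
Y = rng.standard_normal((4, 8))           # clean modality Y (e.g. T2-Flair)
X_d = X + 0.04 * rng.standard_normal(X.shape)  # degraded negative samples
Y_d = Y + 0.04 * rng.standard_normal(Y.shape)

# Eq. (1): features extracted from clean and degraded inputs.
psi_X, psi_Y, psi_Xd, psi_Yd = EN_X(X), EN_Y(Y), EN_X(X_d), EN_Y(Y_d)

# Eq. (2): the single shared decoder reconstructs every feature set,
# regardless of which modality (or degradation state) it came from.
recons = [DE(p) for p in (psi_X, psi_Y, psi_Xd, psi_Yd)]
assert all(r.shape == X.shape for r in recons)
```

The key design point visible here is the asymmetry: two modality-specific encoders but one shared decoder, which is what later lets fused features be decoded into a single visual fusion result.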
First, we design an image-level refinement loss L_IR in the image space, which is formulated as: L_IR = Q({X̃, Ỹ}, {X, Y}) + Q({X̃^D, Ỹ^D}, {X, Y}), (3) where Q(·) represents the distance function, which we specify in our work as three types of similarity metrics, including the mean square error (MSE), total variation (TV) (Hou et al. 2020), and structural similarity index (SSIM) (Wang et al. 2004). Intuitively, L_IR requires that the autoencoder can reconstruct clean images regardless of whether the input is clean or degraded. However, L_IR constrains the filtering of degradations from an overall perspective, and does not take into account the cleanliness of the feature space. Thus, the subsequent feature-based lesion segmentation may still suffer from degradations. To address this challenge, we further develop a feature-level refinement loss L_FR, which is defined as: L_FR = Q({Ψ^D_X, Ψ^D_Y}, {Ψ_X, Ψ_Y}). (4) L_FR requires that the encoders can extract clean features regardless of whether the input is clean or degraded, so as to guarantee the robustness of the subsequent lesion segmentation. The final two-level refinement loss L_TR is obtained by the weighted summation of the image- and feature-level refinement losses, balanced by the hyper-parameter α: L_TR = L_IR + αL_FR. (5) Our degradation-robust autoencoder is essentially a denoising autoencoder (Gondara 2016), but it differs from existing ones in two aspects. Figure 5: Architecture of the mutual-reinforcing fusion module.
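The two-level refinement constraint of Eqs. (3)-(5) can be written as a compact sketch. The distance Q(·) is reduced here to plain MSE (the paper additionally uses TV and SSIM terms), so this is an assumption-laden illustration of the loss structure, not the full objective.

```python
import numpy as np

def mse(a, b):
    # Stand-in for the distance Q(.); the paper mixes MSE, TV and SSIM.
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def two_level_refinement_loss(rec_clean, rec_dirty, target,
                              feat_clean, feat_dirty, alpha=1.0):
    # Eq. (3): both the clean and the degraded reconstructions are pulled
    # toward the clean target image (image-level refinement, L_IR).
    l_ir = mse(rec_clean, target) + mse(rec_dirty, target)
    # Eq. (4): features from degraded inputs are pulled toward features
    # from clean inputs (feature-level refinement, L_FR).
    l_fr = mse(feat_dirty, feat_clean)
    # Eq. (5): weighted sum; alpha = 1 in the paper's setting.
    return l_ir + alpha * l_fr

# Sanity check: perfect reconstruction and fully purified features
# give zero loss; any residual degradation makes it positive.
x = np.ones((2, 3))
f = np.zeros((2, 4))
assert two_level_refinement_loss(x, x, x, f, f) == 0.0
```

The feature-level term is the part that distinguishes this from an ordinary denoising objective: it is what makes the frozen encoder safe to reuse for segmentation later, since the segmentation head only ever sees features, never reconstructed images.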
First, our degradation-robust autoencoder establishes a new two-level refinement loss, which not only requires the cleanliness of the reconstructed image like existing denoising autoencoders, but additionally requires the purification of the encoded features. This is crucial for the semantic segmentation task, which relies on features to make decisions. Second, unlike the single-modal denoising autoencoder, ours is a cross-modal autoencoder with two encoders and a shared decoder. In particular, the shared decoder can effectively reconstruct a clean image from the purified features, regardless of which modality the features come from. This property is the basis for generating visually fused images from fused features. Network Architecture. Our degradation-robust autoencoder consists of two encoders and one decoder, as shown in Fig. 4. First, the two encoders are homogeneous but non-shared. In them, a 3D CNN is first used to extract shallow features with a local receptive field. Then, we use a down-pooling operation to reduce the scale, and introduce a Swin Transformer (Liu et al. 2021) at the small scale to capture spatial long-distance dependencies. Finally, we perform up-pooling to restore the original scale, and utilize a 3D CNN to produce the final encoded features. In the decoder, we use a pure 3D CNN to process the encoded features and fulfill the expected visual reconstruction. Figure 6: An example of MRI T1 and T2-Flair (2D axial slices for better visualization). Mutual-reinforcing Fusion After learning in the first stage, we already have a powerful encoder and decoder, which can achieve complete feature extraction and high-quality visual reconstruction while filtering out various degradation factors. Now, we can integrate the extracted complete features, and use the coupled visual head and semantic head to jointly modulate this integration process, as shown in Fig. 5.
Specifically, we develop a mutual-reinforcing fusion module FN to fuse the multi-modal features {Ψ_X, Ψ_Y} = {EN_X(X), EN_Y(Y)} extracted by the frozen encoders, obtaining the fused features Ψ_F with high expressive ability: Ψ_F = FN(Ψ_X, Ψ_Y). Then, visual fusion and lesion segmentation are implemented based on the fused features simultaneously: F = DE(Ψ_F), S = SH(Ψ_F), (6) where F is the visual fusion result, S is the lesion segmentation result, DE(·) denotes the function of the frozen decoder, and SH(·) indicates the function of the lesion segmentation network. Because both visual-related and semantic-related attributes must be taken into account in the fused features, a mutual-reinforcing mechanism based on visual-semantic consistency is naturally established. Next, we describe the definitions and design motivations of the visual fusion loss and the lesion segmentation loss in detail. First, we consider the visual fusion loss. Typically, medical images can be divided into two types according to their characterization information: functional and structural. The former relies on the significance of contrast to reflect abnormal metabolic information such as lesions, while the latter describes the physical form of tissue structures. Taking the medical images used in this paper as an example, we specify X as MRI T1, which contains rich tissue texture, and Y as T2-Flair, in which significant white regions clearly indicate the white matter lesions, as shown in Fig. 6. A good visual fusion result for doctors should maintain these salient (lesion) regions while preserving sufficient tissue textures. Therefore, the visual fusion loss L_VF is defined as: L_VF = ‖F − Y‖_1 + δ‖∇F − max(∇X, ∇Y)‖_1, (7) where ‖·‖_1 designates the ℓ1 norm, ∇ represents the 3D Sobel operator for computing gradients, max(·) is the maximum function, and δ is a hyper-parameter for the trade-off.
The first term guarantees the maintenance of salient lesion information in the intensity domain, while the second term achieves the preservation of tissue texture based on the global maximum gradient approximation. Second, we use the popular Dice Loss (Milletari, Navab, and Ahmadi 2016) to define the lesion segmentation loss L_LS: L_LS = 1 − (2/J) Σ_{j=1}^{J} (Σ_{i=1}^{I} S_{i,j} L_{i,j}) / (Σ_{i=1}^{I} S_{i,j} + Σ_{i=1}^{I} L_{i,j}), (8) where J denotes the total number of classes, and I is the total number of voxels. S_{i,j} and L_{i,j} indicate the predicted probability and the one-hot encoded label for the j-th class at the i-th voxel, respectively. Now, we couple the visual fusion loss and lesion segmentation loss to obtain the final mutual-reinforcing loss L_MF: L_MF = L_VF + ηL_LS, (9) where η is a hyper-parameter for the trade-off. The visual-semantic consistency implied by the mutual-reinforcing loss can also be perceived in Fig. 6. On the one hand, the preservation of salient lesion regions by the visual head can help the semantic head to better implement small-target segmentation. On the other hand, the segmentation of different regions by the semantic head can also help the visual head to achieve fine integration of the texture details. Figure 7: Results of visual fusion on clean images. U2F., SwinF., IFC., SF-N. and 3D-D. denote U2Fusion, SwinFusion, IFCNN, SF-Net and 3D-DTCWT, respectively.

Methods      EN      FMI     AG      SCD
3D-DTCWT     4.3262  0.8928  4.2412  1.1541
SF-Net       4.1902  0.9000  3.8287  0.9664
IFCNN        4.1913  0.8942  4.8879  0.9608
SwinFusion   4.2966  0.8907  4.8508  1.2182
U2Fusion     3.8856  0.8922  3.8439  0.6234
Ours         4.3779  0.8994  5.8144  1.3529

Table 1: Quantitative results of visual fusion on clean images. Bold/underline denotes the best/second best. These metrics are calculated on the whole testing set of the MRBrainS MRI dataset, and the same in the following tables.
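The coupled objective of Eqs. (7)-(9) can be sketched as follows. This is an illustration under stated assumptions: np.gradient replaces the paper's 3D Sobel operator, a mean absolute error stands in for the strict ℓ1 norm, and the hyper-parameter defaults mirror the paper's settings (δ = 0.08, η = 0.10); it is not the released implementation.

```python
import numpy as np

def l1(a):
    # Mean absolute error, standing in for the l1 norm of Eq. (7).
    return float(np.abs(a).mean())

def visual_fusion_loss(F, X, Y, delta=0.08):
    # Eq. (7): intensity fidelity to Y (salient lesion regions) plus a
    # max-gradient texture term. F, X, Y are 2D-or-higher arrays;
    # np.gradient returns one gradient array per axis.
    grad_term = sum(
        l1(gf - np.maximum(gx, gy))
        for gf, gx, gy in zip(np.gradient(F), np.gradient(X), np.gradient(Y))
    )
    return l1(F - Y) + delta * grad_term

def dice_loss(S, L, eps=1e-6):
    # Eq. (8): soft Dice loss; S and L are (J, I) arrays of class
    # probabilities and one-hot labels over I voxels.
    inter = (S * L).sum(axis=1)
    per_class = 2.0 * inter / (S.sum(axis=1) + L.sum(axis=1) + eps)
    return float(1.0 - per_class.mean())

def mutual_reinforcing_loss(F, X, Y, S, L, delta=0.08, eta=0.10):
    # Eq. (9): couple the visual and semantic objectives.
    return visual_fusion_loss(F, X, Y, delta) + eta * dice_loss(S, L)

rng = np.random.default_rng(0)
X, Y = rng.random((8, 8)), rng.random((8, 8))
F = (X + Y) / 2.0                       # a toy fused image
S = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
total = mutual_reinforcing_loss(F, X, Y, S, S)
assert total >= 0.0
```

Because both heads read the same fused features, minimizing this single scalar is what pulls the visual-related and semantic-related features into one domain, which is the mutual-reinforcing mechanism the section describes.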
Network Architecture. In our mutual-reinforcing fusion module, we first perform a concatenation operation on the multi-modal features, and then utilize a pure 3D CNN to process the concatenated features. Notably, the dense connection is used multiple times to promote feature communication, which has been shown to be very effective in feature fusion (Li and Wu 2019; Xu et al. 2020). The visual head is designated as the frozen decoder, which has good visual reconstruction ability after the autoencoder training of the previous stage. The semantic head can be specified as an existing lesion segmentation model, which is selected as the state-of-the-art Swin Unetr (Hatamizadeh et al. 2022a) in this paper. Figure 8: Results of lesion segmentation on clean images. Swin U., 3D-U. and Hyper. denote Swin Unetr, 3D-UX-Net and HyperDenseNet, respectively. Experiments Implementation Details. Our method is implemented using PyTorch and the MONAI (Cardoso et al. 2022) framework, running on an NVIDIA TITAN RTX GPU and a 2.20 GHz Intel Xeon Platinum 8273CL CPU. Evaluation is performed on the MRBrainS MRI dataset1 (Mendrik et al. 2015). Data augmentation is applied to expand the source data, with a training-testing ratio of 4 : 1. Autoencoder and mutual-reinforcing fusion module training use a batch size of 2 for 200 and 140 epochs, respectively. The model has 63.09 M parameters and is optimized using the Adam optimizer (Kingma and Ba 2014) with an initial learning rate of 1e−4. Hyper-parameters α, δ, and η are set to 1, 0.08, and 0.10, respectively. Accuracy Evaluation First, we implement the comparison on clean data. Visual Fusion Five state-of-the-art visual fusion methods are selected for comparison, including 3D-DTCWT (Kushwaha et al. 2015), SF-Net (Liu et al. 2022a), IFCNN (Zhang et al. 2020b), SwinFusion (Ma et al. 2022), and U2Fusion (Xu et al. 2022a). The 2D slices are demonstrated in Fig. 7 for better visualization.
Our method can simultaneously preserve the salience of lesion regions and integrate tissue structures, while other methods cannot. Further, we use four common non-reference metrics to evaluate visual fusion results, including entropy (EN) (Roberts, Van Aardt, and Ahmed 2008), feature mutual information (FMI) (Haghighat, Aghagolzadeh, and Seyedarabi 2011), average gradient (AG) (Cui et al. 2015), and the sum of the correlations of differences (SCD) (Aslantas and Bendes 2015). As reported in Table 1, our method achieves three best and one second-best rankings, indicating that our results contain the most information, preserve the richest structures, and efficiently incorporate features from source images.

Lesion Segmentation. We use five state-of-the-art segmentation methods for comparison, including UNet (Çiçek et al. 2016), VNet (Milletari, Navab, and Ahmadi 2016), 3D-UX-Net (Lee et al. 2023), HyperDenseNet (Dolz et al. 2019), and Swin Unetr (Hatamizadeh et al. 2022a). The visualization results of brain segmentation are presented in Fig. 8. Our method can produce segmentation results that are most consistent with the ground truth, having obvious advantages over other methods, especially in white matter lesion regions.

1 https://mrbrains13.isi.uu.nl

Table 2: Quantitative results (Dice similarity coefficient) of lesion segmentation on clean images.

Methods        CSF     GM      WM      WML     Average
UNet           0.7462  0.7533  0.7632  0.5122  0.6937
VNet           0.7607  0.7868  0.7661  0.5524  0.7165
HyperDenseNet  0.8129  0.8348  0.8008  0.5804  0.7572
3D-UX-Net      0.7884  0.8168  0.8140  0.6311  0.7626
Swin Unetr     0.7985  0.8309  0.8102  0.6187  0.7646
Ours           0.7946  0.8256  0.8197  0.6994  0.7848

Figure 9: Visual fusion results on degraded images.
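For reference, the EN and AG metrics mentioned above can be sketched as follows; the exact normalization and gradient stencil vary across papers, so this is one common formulation rather than the authors' implementation:

```python
import numpy as np

def entropy_en(img):
    """Shannon entropy (EN) of an 8-bit image, in bits."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Average gradient (AG); one common 2D formulation, averaging
    sqrt((gx^2 + gy^2) / 2) over the image."""
    g = img.astype(np.float64)
    gx = np.diff(g, axis=1)[:-1, :]   # crop to a common shape
    gy = np.diff(g, axis=0)[:, :-1]
    return float(np.sqrt((gx**2 + gy**2) / 2).mean())
```

A constant image yields EN = 0 and AG = 0; richer content pushes both metrics up, which is why they reward information and structure preservation on clean data.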
Furthermore, we introduce the Dice similarity coefficient (DSC) (Milletari, Navab, and Ahmadi 2016) to objectively assess the segmentation accuracy on four classes, including cerebrospinal fluid (CSF), gray matter (GM), white matter (WM), and white matter lesions (WML). As reported in Table 2, our method achieves the highest average segmentation accuracy, which is about 0.02 higher than our selected backbone Swin Unetr. This can prove that the proposed mutual-reinforcing framework can indeed promote the precision of semantic decisions. Meanwhile, it is worth noting that our method achieves far superior segmentation accuracy in our focused white matter lesion regions.

Robustness Evaluation

Next, we verify the robustness of these methods to various degradations. Specifically, we simulate the negatives introduced during the imaging process by adding a mixture of randomly sampled noise (std = 0.04, mean = 0), Gaussian blurring (std = 0.6, mean = 0), and structure loss (a random window of size (4, 16, 16) broken in the Fourier domain).

Visual Fusion. The qualitative results are shown in Fig. 9. The comparative methods cannot remove degradations contained in the source images. Inevitably, their visual fusion results suffer from information loss, which is not conducive to the doctor's observation and analysis. In comparison, our method shows strong robustness: it effectively recovers useful information by removing degradation, presenting a clear visual fusion appearance. The quantitative results are further reported in Table 3. Since EN and AG are metrics highly correlated with high-frequency noise, our method does not achieve the best scores on them. For the other two metrics, FMI and SCD, our method ranks first. Overall, these results demonstrate the robustness of our method to degradations on the visual fusion task.

Lesion Segmentation. Then, we present qualitative results of lesion segmentation on degraded data in Fig. 10.
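The degradation mixture described above might be simulated as below, in pure NumPy; the separable blur kernel and the choice to zero the Fourier coefficients inside the random window are our assumptions about what "Fourier broken" means:

```python
import numpy as np

def _gauss_kernel(std, radius=2):
    """1D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * std**2))
    return k / k.sum()

def degrade(vol, rng, noise_std=0.04, blur_std=0.6, window=(4, 16, 16)):
    """Mixed degradation: Gaussian blur, zero-mean Gaussian noise, and
    structure loss in one random window of the 3D Fourier spectrum."""
    k = _gauss_kernel(blur_std)
    out = vol.astype(np.float64)
    for ax in range(out.ndim):                           # separable blur
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode='same'), ax, out)
    out = out + rng.normal(0.0, noise_std, out.shape)    # additive noise
    spec = np.fft.fftn(out)                              # 'Fourier broken'
    z, y, x = (rng.integers(0, s - w + 1)
               for s, w in zip(out.shape, window))
    spec[z:z+window[0], y:y+window[1], x:x+window[2]] = 0
    return np.fft.ifftn(spec).real
```

The output keeps the input shape, so a degraded volume can be fed to the same fusion and segmentation pipeline as clean data.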
Clearly, the addition of degradations is disastrous for some comparative methods, such as UNet and HyperDenseNet. For Swin Unetr, 3D-UX-Net, and VNet, some significant decision misjudgments also occur in their results. Comparatively, our approach successfully defends against the interference of degradations at the semantic level, still providing results that are more consistent with the ground truth. Further quantitative results are reported in Table 4. The average segmentation accuracy of our method on degraded data is only 0.0275 lower than that on clean data, while the performance drop of other methods is generally above 0.04. All these fully demonstrate the robustness of our proposed method.

Table 3: Quantitative visual fusion on degraded images.

Methods      EN      FMI     AG      SCD
3D-DTCWT     6.5812  0.8845  7.1057  0.6296
SF-Net       6.5342  0.8909  7.1491  0.5844
IFCNN        6.6110  0.8834  8.6804  0.6000
SwinFusion   6.6137  0.8832  7.7014  0.7746
U2Fusion     6.2087  0.8859  6.7426  0.6613
Ours         4.7145  0.8948  5.5052  1.1275

Figure 10: Lesion segmentation results on degraded images.

Table 4: Quantitative segmentation on degraded images.

Methods        CSF     GM      WM      WML     Average  Drop
UNet           0.6776  0.7258  0.6805  0.2326  0.5791   0.1146
VNet           0.7427  0.7533  0.7134  0.4796  0.6722   0.0443
HyperDenseNet  0.7161  0.5764  0.5417  0.3433  0.5444   0.2128
3D-UX-Net      0.7844  0.7924  0.7746  0.5576  0.7273   0.0402
Swin Unetr     0.7834  0.8015  0.7767  0.5281  0.7224   0.0422
Ours           0.7780  0.8035  0.7909  0.6568  0.7573   0.0275

Figure 11: The probability of each method entering the top two based on the psychophysical study.

Psychophysical Study

The visual fusion performance cannot be perfectly evaluated due to the lack of ground truth.
Even though some non-reference metrics are used, some degradation factors may interfere with their objectivity, such as EN and AG in Table 3. To address this, we invite 35 computer vision researchers in a psychophysical study. They are instructed to evaluate based on specific criteria: a good visual fusion result should exhibit rich structural information and salient lesions while suppressing degradations. Each researcher chooses the two best samples from each group to avoid randomness while ensuring fairness. Statistical results in Fig. 11 on both clean and degraded images reveal that 91.19% considered our results the best or second best on clean data, and this advantage becomes more evident on degraded data, with a voting rate of 98.33%. These statistics affirm the promising performance of our method in visual fusion.

Table 5: Quantitative results of ablation studies.

Visual Fusion  EN      FMI     AG      SCD
w/o LLS        4.0446  0.9019  4.1453  0.4616
Ours           4.3779  0.8994  5.8144  1.3529

Segmentation   CSF     GM      WM      WML     Average
w/o LVF        0.7963  0.8240  0.8166  0.6835  0.7801
Ours           0.7946  0.8256  0.8197  0.6994  0.7848

Figure 12: Qualitative impact of semantics on visual fusion. w/o sem. means removing the semantics head.

Figure 13: Qualitative impact of visual fusion on semantics.

Ablation Studies

Impact of Semantics on Visual Fusion. In our framework, we utilize the underlying features crucial for semantic decisions to enhance appearance in visual fusion. By excluding the lesion segmentation loss LLS, we guide our framework to solely optimize visual fusion. Qualitative results in Fig. 12 reveal that without LLS, our method's ability to integrate brain structures weakens. This is further confirmed by the quantitative results in the left column of Table 5, indicating that semantics play a positive role in enhancing visual fusion performance.
Impact of Visual Fusion on Semantics. Similarly, we remove the visual fusion loss LVF to force the fusion module to be guided only by the lesion segmentation loss LLS. The qualitative results are demonstrated in Fig. 13. Obviously, the visual fusion loss can make segmentation voxels more complete and accurate, especially in the white matter lesion regions. The 2D slice reflects this difference more clearly. The quantitative results are reported in the right column of Table 5. It can be seen that the segmentation performance decreases after removing LVF. These results demonstrate that visual fusion can facilitate segmentation.

Figure 14: Segmentation gains brought by our framework.

Figure 15: Results of prostate cancer segmentation.

Universality of the Proposed Framework

Our method is more of a general framework than a specific lesion segmentation model. By replacing the Swin Unetr backbone in the semantic head with models like UNet, VNet, SegResNet (Myronenko 2018), 3D-UX-Net, and Unetr (Hatamizadeh et al. 2022b), our framework consistently improves the performance of these segmentation models, as shown in Fig. 14. Notably, our framework requires the semantic head (i.e., segmentation model) to use features integrated by our fusion module for decision-making. Hence, it may not be compatible with segmentation methods like HyperDenseNet, which primarily emphasize network structures for interacting multi-modal features.

Extended Application

We further apply the proposed framework to the Prostate158 dataset (Adams et al. 2022) for achieving prostate cancer segmentation. We demonstrate the qualitative and quantitative results in Fig. 15.
It can be seen that our method still achieves a higher average segmentation accuracy than other methods, indicating its applicability and effectiveness.

Conclusion

In this paper, we designed a robust mutual-reinforcing 3D multi-modal medical image fusion framework. First, we proposed a Swin Transformer-based autoencoder with two-stage refinement for robustness against degradations. Second, a feature fusion module was designed to couple visual fusion and lesion segmentation to mutually promote their accuracy. Extensive experiments have revealed its superiority and compatibility with existing lesion segmentation methods.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (62276192).

References

Adams, L. C.; Makowski, M. R.; Engel, G.; Rattunde, M.; Busch, F.; Asbach, P.; Niehues, S. M.; Vinayahalingam, S.; van Ginneken, B.; Litjens, G.; et al. 2022. Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection. Computers in Biology and Medicine, 148: 105817.

Aslantas, V.; and Bendes, E. 2015. A new image quality metric for image fusion: The sum of the correlations of differences. AEU - International Journal of Electronics and Communications, 69(12): 1890–1896.

Cardoso, M. J.; Li, W.; Brown, R.; Ma, N.; Kerfoot, E.; Wang, Y.; Murrey, B.; Myronenko, A.; Zhao, C.; Yang, D.; et al. 2022. MONAI: An open-source framework for deep learning in healthcare. arXiv preprint arXiv:2211.02701.

Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S. S.; Brox, T.; and Ronneberger, O. 2016. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, 424–432.

Cui, G.; Feng, H.; Xu, Z.; Li, Q.; and Chen, Y. 2015. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition.
Optics Communications, 341: 199–209.

Ding, W.; Abdel-Basset, M.; Hawash, H.; and Pedrycz, W. 2022. Multimodal infant brain segmentation by fuzzy-informed deep learning. IEEE Transactions on Fuzzy Systems, 30(4): 1088–1101.

Dolz, J.; Gopinath, K.; Yuan, J.; Lombaert, H.; Desrosiers, C.; and Ayed, I. B. 2019. HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation. IEEE Transactions on Medical Imaging, 38(5): 1116–1126.

Fang, L.; and Wang, X. 2022. Brain tumor segmentation based on the dual-path network of multi-modal MRI images. Pattern Recognition, 124: 108434.

Gondara, L. 2016. Medical image denoising using convolutional denoising autoencoders. In Proceedings of the IEEE International Conference on Data Mining Workshops, 241–246.

Haghighat, M. B. A.; Aghagolzadeh, A.; and Seyedarabi, H. 2011. A non-reference image fusion metric based on mutual information of image features. Computers & Electrical Engineering, 37(5): 744–756.

Hatamizadeh, A.; Nath, V.; Tang, Y.; Yang, D.; Roth, H.; and Xu, D. 2022a. Swin Unetr: Swin Transformers for semantic segmentation of brain tumors in MRI images. arXiv preprint arXiv:2201.01266.

Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H. R.; and Xu, D. 2022b. Unetr: Transformers for 3D medical image segmentation. In Proceedings of the IEEE/CVF Conference on Applications of Computer Vision, 574–584.

Hou, R.; Zhou, D.; Nie, R.; Liu, D.; Xiong, L.; Guo, Y.; and Yu, C. 2020. VIF-Net: An unsupervised framework for infrared and visible image fusion. IEEE Transactions on Computational Imaging, 6: 640–651.

Huang, J.; Le, Z.; Ma, Y.; Fan, F.; Zhang, H.; and Yang, L. 2020. MGMDcGAN: Medical image fusion using multi-generator multi-discriminator conditional generative adversarial network. IEEE Access, 8: 55145–55157.

Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Kushwaha, A.; Khare, A.; Prakash, O.; Song, J.-I.; and Jeon, M.
2015. 3D medical image fusion using dual tree complex wavelet transform. In Proceedings of the International Conference on Control, Automation and Information Sciences, 251–256.

Lahoud, F.; and Süsstrunk, S. 2019. Zero-learning fast medical image fusion. In Proceedings of the International Conference on Information Fusion, 1–8.

Lee, H. H.; Bao, S.; Huo, Y.; and Landman, B. A. 2023. 3D UX-Net: A large kernel volumetric convNet modernizing hierarchical transformer for medical image segmentation. In Proceedings of the International Conference on Learning Representations.

Li, H.; and Wu, X.-J. 2019. DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing, 28(5): 2614–2623.

Li, J.; Liu, J.; Zhou, S.; Zhang, Q.; and Kasabov, N. K. 2023. GeSeNet: A general semantic-guided network with couple mask ensemble for medical image fusion. IEEE Transactions on Neural Networks and Learning Systems.

Li, J.; Yu, Z. L.; Gu, Z.; Liu, H.; and Li, Y. 2019. MMAN: Multi-modality aggregation network for brain segmentation from MR images. Neurocomputing, 358: 10–19.

Li, K.; Yu, L.; Wang, S.; and Heng, P.-A. 2020. Towards cross-modality medical image segmentation with online mutual knowledge distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 775–783.

Liu, L.; Hu, X.; Zhu, L.; Fu, C.-W.; Qin, J.; and Heng, P.-A. 2020. ψ-Net: Stacking densely convolutional LSTMs for subcortical brain structure segmentation. IEEE Transactions on Medical Imaging, 39(9): 2806–2817.

Liu, Y.; Chen, X.; Cheng, J.; and Peng, H. 2017. A medical image fusion method based on convolutional neural networks. In Proceedings of the International Conference on Information Fusion, 1–7.

Liu, Y.; Mu, F.; Shi, Y.; and Chen, X. 2022a. SF-Net: A multi-task model for brain tumor segmentation in multimodal MRI via image fusion. IEEE Signal Processing Letters, 29: 1799–1803.

Liu, Y.; Shi, Y.; Mu, F.; Cheng, J.; and Chen, X. 2022b.
Glioma segmentation-oriented multi-modal MR image fusion with adversarial learning. IEEE/CAA Journal of Automatica Sinica, 9(8): 1528–1531.

Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10012–10022.

Lu, C.; Zheng, S.; and Gupta, G. 2022. Unsupervised domain adaptation for cardiac segmentation: Towards structure mutual information maximization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2588–2597.

Ma, J.; Tang, L.; Fan, F.; Huang, J.; Mei, X.; and Ma, Y. 2022. SwinFusion: Cross-domain long-range learning for general image fusion via Swin Transformer. IEEE/CAA Journal of Automatica Sinica, 9(7): 1200–1217.

Ma, J.; Xu, H.; Jiang, J.; Mei, X.; and Zhang, X.-P. 2020. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Transactions on Image Processing, 29: 4980–4995.

Mendrik, A. M.; Vincken, K. L.; Kuijf, H. J.; Breeuwer, M.; Bouvy, W. H.; De Bresser, J.; Alansary, A.; De Bruijne, M.; Carass, A.; El-Baz, A.; et al. 2015. MRBrainS challenge: Online evaluation framework for brain image segmentation in 3T MRI scans. Computational Intelligence and Neuroscience, 2015: 1–1.

Milletari, F.; Navab, N.; and Ahmadi, S.-A. 2016. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the International Conference on 3D Vision, 565–571.

Myronenko, A. 2018. 3D MRI brain tumor segmentation using autoencoder regularization. arXiv preprint arXiv:1810.11654.

Roberts, J. W.; Van Aardt, J. A.; and Ahmed, F. B. 2008. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. Journal of Applied Remote Sensing, 2(1): 023522.
Sun, L.; Ma, W.; Ding, X.; Huang, Y.; Liang, D.; and Paisley, J. 2020. A 3D spatially weighted network for segmentation of brain tissue from MRI. IEEE Transactions on Medical Imaging, 39(4): 898–909.

Tang, L.; Yuan, J.; and Ma, J. 2022. Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network. Information Fusion, 82: 28–42.

Tang, W.; He, F.; Liu, Y.; and Duan, Y. 2022. MATR: Multimodal medical image fusion via multiscale adaptive transformer. IEEE Transactions on Image Processing, 31: 5134–5149.

Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600–612.

Wu, X.; Bi, L.; Fulham, M.; Feng, D. D.; Zhou, L.; and Kim, J. 2021. Unsupervised brain tumor segmentation using a symmetric-driven adversarial network. Neurocomputing, 455: 242–254.

Xu, H.; and Ma, J. 2021. EMFusion: An unsupervised enhanced medical image fusion network. Information Fusion, 76: 177–186.

Xu, H.; Ma, J.; Jiang, J.; Guo, X.; and Ling, H. 2022a. U2Fusion: A unified unsupervised image fusion network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1): 502–518.

Xu, H.; Ma, J.; Le, Z.; Jiang, J.; and Guo, X. 2020. FusionDN: A unified densely connected network for image fusion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 12484–12491.

Xu, H.; Ma, J.; Yuan, J.; Le, Z.; and Liu, W. 2022b. RFNet: Unsupervised network for mutually reinforcing multi-modal image registration and fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19679–19688.

Xue, Z.; Li, P.; Zhang, L.; Lu, X.; Zhu, G.; Shen, P.; Shah, S. A. A.; and Bennamoun, M. 2021. Multi-modal co-learning for liver lesion segmentation on PET-CT images. IEEE Transactions on Medical Imaging, 40(12): 3531–3542.

Yin, M.; Liu, X.; Liu, Y.; and Chen, X. 2019.
Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Transactions on Instrumentation and Measurement, 68(1): 49–64.

Zhang, H.; and Ma, J. 2021. SDNet: A versatile squeeze-and-decomposition network for real-time image fusion. International Journal of Computer Vision, 129: 2761–2785.

Zhang, H.; Xu, H.; Xiao, Y.; Guo, X.; and Ma, J. 2020a. Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 12797–12804.

Zhang, Y.; Liu, Y.; Sun, P.; Yan, H.; Zhao, X.; and Zhang, L. 2020b. IFCNN: A general image fusion framework based on convolutional neural network. Information Fusion, 54: 99–118.

Zhang, Y.; Peng, C.; Tong, R.; Lin, L.; Chen, Y.-W.; Chen, Q.; Hu, H.; and Zhou, S. K. 2023. Multi-modal tumor segmentation with deformable aggregation and uncertain region inpainting. IEEE Transactions on Medical Imaging.

Zhou, C.; Ding, C.; Wang, X.; Lu, Z.; and Tao, D. 2020a. One-pass multi-task networks with cross-task guided attention for brain tumor segmentation. IEEE Transactions on Image Processing, 29: 4516–4529.

Zhou, T.; Fu, H.; Chen, G.; Shen, J.; and Shao, L. 2020b. Hi-Net: Hybrid-fusion network for multi-modal MR image synthesis. IEEE Transactions on Medical Imaging, 39(9): 2772–2781.

Zhou, T.; Ruan, S.; Vera, P.; and Canu, S. 2022. A Tri-Attention fusion guided multi-modal segmentation network. Pattern Recognition, 124: 108417.

Zhu, L.; Ji, D.; Zhu, S.; Gan, W.; Wu, W.; and Yan, J. 2021. Learning statistical texture for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12537–12546.
Learning Task-Aware Language-Image Representation for Class-Incremental Object Detection

Hongquan Zhang1,3*, Bin-Bin Gao2*, Yi Zeng2, Xudong Tian1,3, Xin Tan1,3†, Zhizhong Zhang1,3, Yanyun Qu4, Jun Liu2, Yuan Xie1,3
1East China Normal University 2Tencent YouTu Lab 3Chongqing Institute of East China Normal University 4Xiamen University
{51215901136,51194501066}@stu.ecnu.edu.cn, {xtan,zzzhang,yxie}@cs.ecnu.edu.cn, {csgaobb,yizengstudy,junsenselee}@gmail.com, yyqu@xmu.edu.cn

Abstract

Class-incremental object detection (CIOD) is a capability desired in the real world, requiring an object detector to continuously adapt to new tasks without forgetting learned ones, with the main challenge being catastrophic forgetting. Many methods based on distillation and replay have been proposed to alleviate this problem. However, they typically learn on a pure visual backbone, neglecting the powerful representation capabilities of textual cues, which to some extent limits their performance. In this paper, we propose task-aware language-image representation to mitigate catastrophic forgetting, introducing a new paradigm for language-image-based CIOD. First of all, we demonstrate the significant advantage of language-image detectors in mitigating catastrophic forgetting. Secondly, we propose a method for learning task-aware language-image representation that overcomes the existing drawback of directly utilizing the language-image detector for CIOD. More specifically, we learn the language-image representation of different tasks through an insulating approach in the training stage, while using the alignment scores produced by task-specific language-image representation in the inference stage. Through our proposed method, language-image detectors can be more practical for CIOD. We conduct extensive experiments on COCO 2017 and Pascal VOC 2007 and demonstrate that the proposed method achieves state-of-the-art results under various CIOD settings.
Introduction

Object detection has shown remarkable advancements in facilitating various applications, including traffic monitoring, robotics (Xu et al. 2022) and autonomous driving (Li et al. 2023a). Most object detection works mainly focus on the offline training paradigm. However, online training plays a more important role in real-world applications in dynamic environments, which urgently requires a model to continuously recognize new classes and maintain the ability on learned classes. Therefore, exploiting continual (incremental) object detection based on online data streaming has become an attractive yet challenging topic and aims to sequentially solve tasks with ideally no performance drop when inferred on the previously seen tasks.

*These authors contributed equally, work done while at Tencent YouTu Lab. †Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: The performance comparisons (mAP[0.5]) of language-image and pure visual-based detectors on Pascal VOC 2007 and COCO 2017 with various incremental protocols. Here, we take Dyhead (Dai et al. 2021a) as a pure visual-based detector, and GLIP as a language-image detector, where GLIP replaces the classifier used in Dyhead with language-image alignment and is otherwise the same as Dyhead. We can see that the language-image detector (GLIP) brings clear improvements in most incremental settings. However, there is still catastrophic forgetting in some challenging settings (e.g., 40+40 on COCO). To further address this issue, we first learn task-aware language-image representation and then use a selective inference strategy for CIOD, which outperforms naive GLIP by a large margin.

In order to mitigate the performance drop on previous classes, some class-incremental object detection (CIOD) methods either use knowledge distillation (Shmelkov, Schmid, and Alahari 2017; Hu et al.
2021; Peng et al. 2021; Feng, Wang, and Yuan 2022) on image features or replay a small number of previous exemplars (Shieh et al. 2020; Joseph et al. 2021a,b; Liu et al. 2023b). Commonly, these methods typically learn on a pure visual backbone, and their performance is limited by the training data (images and the corresponding annotations) to some extent. It is worth noting that incremental detection scenarios always suffer from serious background-foreground conflict: a proposal that belongs to the foreground in previous tasks is likely to be the background in future tasks. Therefore, the catastrophic forgetting issue will be further exacerbated by adopting pure visual-based CIOD methods due to background-foreground conflict.

Recently, language-image models have exhibited impressive results on zero-shot (Radford et al. 2021; Zhou et al. 2022; Zhai et al. 2022) and continual image classification (Li et al. 2023b). Meanwhile, this cross-modality learning paradigm has shown strong zero-shot and few-shot transferability to object detection, such as GLIP (Li et al. 2022), Grounding Dino (Liu et al. 2023a), and MQ-Det (Xu et al. 2023). Considering catastrophic forgetting issues in CIOD, we believe that this strong transferability should benefit incremental object detection tasks because of the separability between visual features and language representation. Here, we take GLIP as an example and simply extend it to the CIOD setting. The experimental comparisons of language-image GLIP and the pure visual baseline are shown in Figure 1 on COCO and VOC with different incremental protocols. We can see that the language-image detector (GLIP) brings clear improvements (e.g., 32.1% mAP with the 10+10 setting on VOC) compared with the pure visual method in most incremental settings. However, there is still catastrophic forgetting in some challenging settings (e.g., 40+40 on COCO).
The above drawback is mainly attributed to serious background-foreground conflict: as the number of categories increases, the instances of different categories contained in an image become more dispersed across different tasks. More specifically, the separability between visual features and language representation has been insufficient to resolve this serious background-foreground conflict, resulting in poor performance in these challenging settings. Inspired by Der (Yan, Xie, and He 2021), which uses an independent model to learn visual representation for each task, we consider that this learning paradigm can reinforce the separability to better mitigate catastrophic forgetting, as learning independent representation will no longer be subject to the background-foreground conflict dilemma.

To this end, we propose a method for learning task-aware language-image representation that further separates the visual features and language representation to mitigate catastrophic forgetting. Specifically, in the training stage, a Task-Aware Module (TAM) is proposed to account for a part of the non-overlapping channels of the image feature map and the hidden states of the text embedding for producing task-aware representation in each task. In the inference stage, for the alignment scores predicted by language-image alignment, a Selective Inference Strategy (SIS) is proposed to use the task-aware portion of the alignment scores to unify a final clarifying prediction alignment score.

Our contributions are summarized below:
• We are the first to apply the language-image detector to class-incremental object detection and identify its superiority in mitigating catastrophic forgetting over the pure visual-based detector.
• We propose a method for learning task-aware language-image representation, which mitigates the background-foreground conflict by reinforcing the separability of the language-image detector.
• The leap in performance compared with all competitors on various benchmarks demonstrates its efficacy, while substantial qualitative evidence verifies each of our designs. Related Work Incremental Learning Incremental learning algorithms intend to mitigate catastrophic interference while facilitating the transfer of skills whenever possible and achieve excellent performance in downstream tasks like classification, detection, etc. To this end, currently, popular studies can be roughly categorized as (1) the rehearsal-based approach aims to help the model not forget the old knowledge when learning a new task by saving a part of the old samples (Zhao et al. 2021; Petit et al. 2023); (2) the regularization-based method adds a penalty term to the loss function when learning new tasks so that the model is optimized to adapt all tasks (Yang et al. 2021; Zhao et al. 2023; Tian et al. 2023); (3) the method based on parameter isolation (Yan, Xie, and He 2021; Wang et al. 2022; Cai et al. 2023) separates the model parameters used by different tasks, so as to mitigate catastrophic forgetting. Most related to our work is Der (Yan, Xie, and He 2021), but it is only applicable for a few incremental steps due to the linear growth of model parameters. On the contrary, we use an approach based on learning task-aware representation, which is more compatible with incremental learning, and with arbitrary incremental steps, our model parameters keep constant. Incremental Object Detection Class-incremental object detection is a common scenario in practical applications (Shmelkov, Schmid, and Alahari 2017), where images could contain lots of instances that belong to different tasks, and the annotation of instances is provided only current task. 
To solve this problem, existing studies on this issue fall into two main categories: (1) knowledge distillation-based, which adds regularization terms to the learning objective as an attempt to preserve previous knowledge when training the model on new data (Cermelli et al. 2022; Feng, Wang, and Yuan 2022; Yang et al. 2022); (2) rehearsal-based, which utilize a buffer to memorize some of the past training data, replaying them in the following phases to "call back" the old object categories (Shmelkov, Schmid, and Alahari 2017; Liu et al. 2023b). Several methods make different efforts on class-incremental object detection, e.g., meta learning-based (Joseph et al. 2021b), regularization-based (Liu et al. 2020), and pseudo labels (Guan et al. 2018). However, these methods are based on visual-only detectors such as Faster R-CNN (Girshick 2015), GFL (Li et al. 2020), and DETR-based detectors (Zhu et al. 2021), but neglect the rich textual representations, leading to serious catastrophic forgetting. In our work, we explore the application

Figure 2: The whole pipeline of our method. For clear demonstration, we assume there are two tasks in the whole incremental learning process, and categories [Cow, Zebra] belong to Task 1 while [Sheep, Bird] belongs to Task 2.
The TAM in the training stage and the corresponding inference strategy SIS are proposed to learn a task-aware language-image representation.

of language-image detectors in class-incremental object detection. Although such a detector has a stronger ability to mitigate catastrophic forgetting than visual-only detectors, it still faces the problem of task alignment confusion. To this end, we propose an effective method to solve it.

Language-Image Pre-training

In recent years, language-image pre-training models have been widely developed and applied to various vision tasks such as detection (Li et al. 2022) and classification (Radford et al. 2021). CLIP (Radford et al. 2021) has been applied to incremental classification tasks, and the main methods (Zhou et al. 2022; Wang et al. 2023) aim to design different prompts that better exploit the rich knowledge of pre-trained models to mitigate catastrophic forgetting. Different from them, we use the phrase-grounding-based object detector GLIP as the baseline. We note that the pre-trained weights of GLIP are not used; we aim to explore the application of a language-image alignment model in class-incremental object detection.

Methodology

Preliminaries

Class-Incremental Object Detection. Let C = {1, . . . , c} be the set of object categories. In CIOD, a task T_t is the subset of C that the detector is exposed to at time t: T_t ⊂ C, where T_i ∩ T_j = ∅ for any i ≠ j. Let (x, y) ∈ D denote a dataset D that contains images x and their corresponding ground-truth sets of objects y, i.e., class labels and location information, such that D_t denotes the images containing annotated objects of the classes in T_t. CIOD aims to maintain the original performance on {T_1, T_2, . . . , T_{t−1}} while continually learning task T_t without access to any of {D_1, D_2, . . . , D_{t−1}}.

Grounded Language-Image Learning. GLIP (Li et al. 2022) is a language-image detector that reformulates detection as a grounding task by aligning each region in the image to a phrase in the language prompt.
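As a rough illustration of this grounding formulation (not GLIP's actual code; the dot-product scoring and the sigmoid focal-loss form below are simplifying assumptions, and all function names are hypothetical), region-phrase alignment can be sketched in pure Python:

```python
import math

def alignment_scores(z, w):
    """Dot-product alignment between A region features and L phrase
    embeddings (lists of equal-length vectors), i.e. s = z' (w')^T."""
    return [[sum(zi * wi for zi, wi in zip(region, token)) for token in w]
            for region in z]

def focal_alignment_loss(scores, targets, gamma=2.0, alpha=0.25):
    """Sigmoid focal loss over the A x L score matrix; targets are 1 where
    a region matches a token and 0 otherwise (the token-label matrix T)."""
    total = 0.0
    for s_row, t_row in zip(scores, targets):
        for s, t in zip(s_row, t_row):
            p = 1.0 / (1.0 + math.exp(-s))          # sigmoid probability
            pt = p if t == 1 else 1.0 - p           # prob of the true label
            a = alpha if t == 1 else 1.0 - alpha    # class-balance weight
            total += -a * (1.0 - pt) ** gamma * math.log(max(pt, 1e-12))
    return total / (len(scores) * len(scores[0]))
```

With well-aligned scores the loss is lower for the correct token-label matrix than for a mismatched one, which is the property the grounding objective exploits.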
Given object categories [airplane, car, cow, ..., cat], the prompt is designed as: "airplane. car. cow. ... cat". GLIP is mainly composed of (1) a visual backbone f_θ(·) and a language backbone g_ψ(·). Specifically, the image x ∈ R^{H×W×C} and the word tokens e ∈ R^D are fed into f_θ(·) and g_ψ(·) respectively to obtain the image feature map z ∈ R^{H′×W′×C′} and the text embedding w ∈ R^{D×L}, where H, W, and C are the height, width, and number of channels of x, while D and L denote the number of tokens and the length of each token; and (2) a deep fusion module that fuses the image feature map and text embedding in the last few encoding layers:

z′, w′ = DeepFusion(z, w). (1)

On this basis, the alignment scores are z′(w′)⊺, and the alignment loss is formulated as L_ground = loss(z′(w′)⊺, T), where loss is a focal loss (Lin et al. 2017b) and T is the matrix of token labels, whose entries are 1 if z′ and w′ are aligned and 0 otherwise. The training objective of GLIP is defined as:

L_vl = L_ground + L_reg + L_center, (2)

where L_reg and L_center denote the box regression loss and the centerness loss (Tian et al. 2019), respectively.

Incremental Language-Image Detector

Baseline Setting. We employ GLIP (Li et al. 2022) as our baseline detector and build a CIOD framework based on fine-tuning. In the first task, the visual and language backbones are initialized with weights pre-trained on ImageNet (Deng et al. 2009) and BERT (Kenton and Toutanova 2019), respectively. Afterward, the model is updated by optimizing Eq. (2) with D1. In the incremental task T_t, the weights trained on T_{t−1} initialize the whole model, which is then updated by optimizing Eq. (2) with D_t. We note that the prompts are disjoint across tasks.

Forgetting Analysis. As analyzed in Sec.
1, the above baseline has a strong ability to mitigate catastrophic forgetting, which can be attributed to the expressive power of the pre-trained language branch; hence we maintain a rather slow update during training so that the model adapts appropriately to each task. However, as the number of categories increases, this superiority in mitigating catastrophic forgetting rapidly vanishes. We first conduct the following empirical study to reveal how and where forgetting occurs.

Figure 3: A visualized example of our selective inference strategy, which links to the SIS of Figure 2. Our ROWmax strategy makes clearer predictions than naive solutions (ELEmax and ELEmean).

As shown in Figure 4, the image feature maps z1 and z2 have subtle differences, while significant disparities exist between z′1 and z′2. This phenomenon reveals two aspects: (1) serious catastrophic forgetting occurs in the deep fusion module, given the substantial differences between the deep-fused image feature maps; (2) in Task 1, all channels of the image feature map and all hidden states of the text embedding are used for deep fusion.
But when it comes to Task 2, all of the channels and hidden states are reused, leading the model to focus only on the labeled regions in the feature map of Task 2. Based on this analysis, it becomes critical to address the catastrophic forgetting caused by using all channels and hidden states for deep fusion.

Task-Aware Representation Learning

To address the above problem, we propose to learn a task-aware language-image representation via a Task-Aware Module (TAM) that selects partial channels and hidden states for deep fusion. The whole incremental learning pipeline is shown in Figure 2.

Figure 4: A visual illustration of where forgetting occurs; the yellow parts of the features have high activation. z1 and z2 are the image features extracted by the vision backbone for Task 1 and Task 2 respectively, while z′1 and z′2 are the image features obtained by the deep fusion module for Task 1 and Task 2 respectively.

Specifically, in the training phase (the upper part of Figure 2), images x and prompts are fed into the visual and language backbones respectively, where the images x come from D1 and the prompts consist of the category labels of D1. Afterward, the feature map z and text embedding w are partially utilized by the TAM and then fed into the deep fusion module to learn the task-aware language-image representation. Finally, we update the whole model by optimizing Eq. (2). In an incremental task, the images x and prompts consist of category labels belonging to D_t. Moreover, we only utilize the channels and hidden states left unexploited by the previous tasks {T_1, T_2, . . . , T_{t−1}} to learn the task-aware language-image representation. In the inference phase (the bottom part of Figure 2), the test image x may belong to any learned category, and the prompts consist of all learned category labels. The TAM will produce two groups of selected channels and hidden states.
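The per-task selection of channels and hidden states can be sketched as follows (a simplified 1-D illustration with hypothetical helper names; the proportional split follows the paper's implementation details, where each task's share matches its share of categories):

```python
def make_task_masks(n_channels, n_hidden, task_fracs):
    """Split channel and hidden-state indices into disjoint per-task binary
    masks (M_t^image, M_t^text), proportionally to task_fracs
    (e.g. [0.5, 0.5] for the 10+10 setting)."""
    masks = []
    c0 = h0 = 0
    for i, f in enumerate(task_fracs):
        last = i == len(task_fracs) - 1
        c1 = n_channels if last else c0 + round(f * n_channels)
        h1 = n_hidden if last else h0 + round(f * n_hidden)
        m_img = [1 if c0 <= c < c1 else 0 for c in range(n_channels)]
        m_txt = [1 if h0 <= h < h1 else 0 for h in range(n_hidden)]
        masks.append((m_img, m_txt))
        c0, h0 = c1, h1
    return masks

def apply_mask(feat, mask):
    """z_hat = z * M: zero out the channels not owned by this task (Eq. 3)."""
    return [f * m for f, m in zip(feat, mask)]
```

Because the masks are disjoint, each task trains on its own slice of the representation, which is the property the TAM relies on to avoid cross-task interference.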
After being fed into the deep fusion module, two groups of alignment scores are obtained. Finally, we propose a Selective Inference Strategy to unify these alignment scores into the final prediction.

Task-Aware Module. The proposed task-aware module serves two purposes. On the visual side, we select different channels to learn a task-aware visual representation and avoid reuse between tasks. On the linguistic side, different hidden states are used to learn task-specific textual representations, so the powerful representation ability of the pre-trained model can be applied to each task adaptively without interference from other tasks. We denote the two modal (image and text) masks as M_t^image ∈ {0, 1}^{1×1×c} and M_t^text ∈ {0, 1}^{1×l}, where c and l represent the total number of channels of the image feature z and of hidden states of the text embedding w, respectively. Then, we select partial channels of z and partial hidden states of w with the corresponding masks to learn the task-aware language-image representation. To perform element-wise multiplication between the representations and masks, we expand the masks M_t^image and M_t^text to the same spatial resolution as z and w, as shown in the TAM of Figure 2. Formally, we have:

ẑ = z M_t^image, ŵ = w M_t^text, (3)

where ẑ and ŵ are used to learn the task-aware representation for the current task.

Table 1: Incremental results (%) based on our detector on the COCO benchmark under different scenarios. * indicates CL-DETR's two detection baselines, UP-DETR (Dai et al. 2021b) / Deformable DETR (Zhu et al. 2021); the other compared results are borrowed from ERD (Feng, Wang, and Yuan 2022).

Setting  Method                            AP         AP50       AP75       APS        APM        APL
70+10    SID (Peng et al. 2021)            32.8       49.0       35.0       17.1       36.9       44.5
         ERD (Feng, Wang, and Yuan 2022)   34.9       51.9       37.4       18.7       38.8       45.5
         *CL-DETR (Liu et al. 2023b)       37.6/40.1  56.5/57.8  39.4/43.7  20.5/23.2  39.1/43.2  49.9/52.1
         Ours                              42.9       59.2       45.2       24.3       45.1       54.1
60+20    SID (Peng et al. 2021)            32.7       49.8       34.6       17.2       37.6       43.5
         ERD (Feng, Wang, and Yuan 2022)   35.8       52.9       38.4       20.6       39.4       46.5
         Ours                              38.9       55.3       42.2       22.2       42.6       53.3
50+30    SID (Peng et al. 2021)            33.8       51.0       36.1       17.6       38.1       45.1
         ERD (Feng, Wang, and Yuan 2022)   36.6       54.0       38.9       19.4       40.4       48.0
         Ours                              41.2       58.5       44.8       23.0       45.4       57.2
40+40    SID (Peng et al. 2021)            34.0       51.4       36.3       18.4       38.4       44.9
         ERD (Feng, Wang, and Yuan 2022)   36.9       54.5       39.6       21.3       40.4       47.5
         *CL-DETR (Liu et al. 2023b)       37.0/37.5  56.2/55.1  39.1/40.3  20.9/20.9  38.9/40.8  49.2/50.7
         Ours                              40.4       57.4       43.9       23.3       44.7       54.5

In this way, future tasks can adopt completely independent image-text representations, i.e., z(1 − M_t^image) and w(1 − M_t^text), which have no overlap with previous tasks. We expect to alleviate catastrophic forgetting by learning this task-aware representation.

Selective Inference Strategy. Given test images and prompts, we first extract the feature map z and text embedding w with the visual and language backbones, respectively. Then, M_t^image and M_t^text are used to select the channels z M_t^image and hidden states w M_t^text that were trained in the different tasks. After that, deep fusion produces the task-aware language-image representations z′ and w′. Finally, a set of alignment scores s_t ∈ R^{A×O}, each focusing on a different task, is calculated for the image regions as:

s_t = z′(w′)⊺, (4)

where A is the number of image regions and O is the number of all learned categories. Please refer to Figure 3 for a graphic illustration of our Selective Inference Strategy, where for simplicity we assume two image regions and two tasks, each task including two categories. The score s1 is produced using M_1^image and M_1^text, and similarly for s2.
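To make the difference between the fusion strategies concrete, here is a toy pure-Python sketch (hypothetical function names; 2 regions, 2 tasks, 2 categories per task, with scores loosely following the Figure 3 example over the category order [Zebra, Cow, Sheep, Bird]):

```python
def ele_max(s1, s2):
    """ELEmax: element-wise maximum of two A x O score matrices."""
    return [[max(a, b) for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

def ele_mean(s1, s2):
    """ELEmean: element-wise average of two A x O score matrices."""
    return [[(a + b) / 2.0 for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

def row_max_fuse(task_scores, task_cols):
    """ROWmax-style fusion: keep, for every category column, only the score
    produced by the task that owns that category (the trusted portion s'_t),
    so each row's maximum picks the most confident task-specific prediction."""
    owner = {c: t for t, cols in enumerate(task_cols) for c in cols}
    n_cats = len(task_scores[0][0])
    return [[task_scores[owner[c]][r][c] for c in range(n_cats)]
            for r in range(len(task_scores[0]))]

# Toy per-task score matrices: task 1 owns columns {0, 1}, task 2 owns {2, 3}.
s1 = [[0.3, 0.8, 0.4, 0.2], [0.3, 0.6, 0.5, 0.2]]
s2 = [[0.1, 0.2, 0.6, 0.2], [0.1, 0.4, 0.7, 0.2]]
```

Unlike the element-wise reductions, the selective fusion never lets one task's scores overwrite another task's own category columns, which is what suppresses cross-task false positives.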
A simple solution is to directly unify the maximum/average alignment scores into a final prediction score:

s_max = ELEmax(s1, s2), (5)
s_mean = ELEmean(s1, s2), (6)

where ELEmax is the element-wise maximum operation and ELEmean is the element-wise average operation. Since there is no overlap between the prompts used in different tasks, Eq. (5) and Eq. (6) make each image region prone to being assigned a series of false categories, i.e., false-positive predictions, and thus result in poor predictions. The Selective Inference Strategy is proposed to solve this dilemma. As shown in the ROWmax part of Figure 3, for any alignment scores s, e.g., s1 and s2, only the portion produced via the task-aware representation is used, i.e., s′1 and s′2. Finally, we unify these task-specific alignment scores by:

s_u = ROWmax(s′1, s′2), (7)

where ROWmax is the row-wise maximum operation. Specifically, we use the maximum-confidence prediction between s′1 and s′2 as the final prediction.

Experiments

Datasets and Evaluation Metrics. Existing methods validate themselves under two dataset settings: one uses Pascal VOC 2007 and Microsoft COCO 2014, the other only Microsoft COCO 2017. To better demonstrate the validity of our method, we evaluate it on both benchmark datasets, i.e., Pascal VOC 2007 and Microsoft COCO 2017. VOC 2007 has 20 object classes; we use the trainval subset for training and the test subset for evaluation, and the mean average precision (mAP) at a 0.5 IoU threshold measures performance. We keep our data partitioning consistent with CIOD (Dong et al. 2023) for VOC 2007. COCO 2017 has 80K training images and 40K validation images covering 80 object classes; we use the train set for training and the minival set for testing, with the standard COCO protocols as evaluation metrics, i.e., AP, AP50, AP75, APS, APM, and APL.
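As a reminder of the matching criterion behind mAP@0.5, here is a minimal IoU sketch (boxes as (x1, y1, x2, y2); a simplified illustration, not the exact benchmark evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    A detection counts as correct under the VOC protocol when IoU >= 0.5."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```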
We keep our data partitioning consistent with ERD (Feng, Wang, and Yuan 2022) for COCO 2017.

Table 2: mAP@0.5 (%) results for a single incremental step on Pascal VOC 2007; all compared results are borrowed from the corresponding papers.

Method                       19+1                     15+5                      10+10
                             1-19  20    1-20  Avg    1-15  16-20  1-20  Avg    1-10  11-20  1-20  Avg
Upper                        77.2  80.3  77.4  78.8   78.8  73.0   77.4  75.9   77.2  77.5   77.4  77.4
Fine-tuning                  61.4  62.8  61.5  62.1   46.6  59.2   40.9  52.9   33.0  68.1   50.6  50.6
LOD (Zhou et al. 2020)       70.5  53.0  69.6  61.8   -     -      -     -      -     -      -     -
Meta (Joseph et al. 2021b)   70.9  57.6  70.2  64.3   71.7  55.9   67.8  63.8   68.3  64.3   66.3  66.3
MVCD (Yang et al. 2022)      70.2  60.6  69.7  65.4   69.4  57.9   66.5  63.7   66.2  66.0   66.1  66.1
CIOD (Dong et al. 2023)      70.3  65.3  70.1  67.8   71.4  57.5   67.9  64.5   69.8  64.4   67.1  67.1
Ours                         73.2  66.5  72.9  69.9   73.6  60.2   70.3  66.9   71.2  70.0   70.6  70.6

Figure 5: Incremental results (mAP%) on the COCO 2017 dataset under different scenarios; all compared results are borrowed from the corresponding papers.

Table 3: mAP@0.5 (%) results for different feature-map channel and text-embedding hidden-state selection strategies on the VOC 2007 dataset with the 10+10 setting.

TAM strategy (tasks 1-10 / 11-20)   1-10      11-20     1-20
first 75% / last 25%                68.0      71.9      70.0
first 25% / last 75%                67.6      72.0      69.9
random 50% / rest 50%               69.7±1.0  71.0±1.1  70.4±1.0
first 50% / last 50%                71.2      70.0      70.6

Experiments Setup. We conduct experiments with different splits in the following class-incremental learning scenarios. One-step: we notate this setup as B + I, i.e., Base + Incremental. We observe a fraction B/(B+I) of the training samples, with B categories annotated, in the first step. Then, in the second step, we observe the remaining I/(B+I) of the training samples, where I new categories are annotated. We use four settings for the COCO 2017 dataset, i.e., B + I = 40+40, 50+30, 60+20, 70+10, and three settings for the VOC 2007 dataset, i.e., B + I = 19+1, 15+5, 10+10.
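The class splits used in these scenarios can be sketched as follows (hypothetical helper names; the one-step B + I split just described, plus the multi-step B + I × N variant used on COCO):

```python
def one_step_split(classes, base, inc):
    """One-step B + I split: the first task annotates `base` classes, the
    second the remaining `inc` classes (settings like 70+10 or 10+10)."""
    assert len(classes) == base + inc
    return [classes[:base], classes[base:]]

def multi_step_split(classes, base, inc, steps):
    """Multi-step B + I x N split: a base task, then `steps` tasks of `inc`
    new classes each (e.g. 40 + 10 x 4 on COCO)."""
    assert len(classes) == base + inc * steps
    tasks = [classes[:base]]
    for n in range(steps):
        start = base + n * inc
        tasks.append(classes[start:start + inc])
    return tasks
```

The splits are disjoint by construction, matching the T_i ∩ T_j = ∅ requirement of the CIOD setting.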
Multi-step: we notate this setup as B + I × N, where N is the number of incremental steps. For the COCO 2017 dataset, we use two-step and four-step settings with 20 and 10 new classes added each time, i.e., B + I × N = 40+20×2 and 40+10×4, respectively. We run each experiment three times with different random orders of categories and report the average mAP.

Implementation Details. We build our method on GLIP, which uses Swin-Tiny (Liu et al. 2021) with FPN (Lin et al. 2017a) and BERT (Kenton and Toutanova 2019) as the visual and language backbones, respectively. All experiments are performed on 8 NVIDIA Tesla V100 GPUs with a batch size of 16; we use AdamW as the optimizer, with a learning rate of 5 × 10^-6 for the language backbone and 5 × 10^-5 for the other parts. The feature-map channels and text-embedding hidden states are divided among tasks according to the proportion of categories; for example, the two tasks in the VOC 2007 10+10 setting each use 50% of the channels and 50% of the hidden states.

Overall Performance

For COCO 2017, we compare with CL-DETR (Liu et al. 2023b), ERD (Feng, Wang, and Yuan 2022), SID (Peng et al. 2021), and LwF (Li and Hoiem 2017), while for VOC 2007 we compare with LOD (Zhou et al. 2020), Meta (Joseph et al. 2021b), MVCD (Yang et al. 2022), and CIOD (Dong et al. 2023). All of the above methods are based on visual-only detectors.

One-step. For COCO 2017, Figure 1 demonstrates the performance under four settings; our method outperforms the current state-of-the-art CL-DETR and other class-incremental object detection methods in all of them. In the 70+10 and 40+40 settings, the AP of our method is 2.8 and 2.9 percentage points higher than CL-DETR, respectively; in the 60+20 and 50+30 settings, on which CL-DETR has not experimented, the AP of our method is 3.1 and 4.6 percentage points higher than ERD, respectively.
Table 2 shows the experimental results on VOC 2007; the Avg metric weights new and old classes equally by averaging their aggregated mAP. Under the three experimental settings of 19+1, 15+5, and 10+10, our mAP is 2.8, 2.4, and 3.5 percentage points higher than the state-of-the-art method, and our method outperforms CIOD on both the new and old classes. All the analysis above illustrates that the proposed method can effectively overcome the background-foreground conflict even in these challenging settings.

Table 4: mAP@0.5 (%) results on the VOC 2007 dataset with the 10+10 setting. VB and LB indicate the vision branch and language branch respectively. Ave, Max, and Sel indicate the three inference strategies, i.e., Eq. (6), Eq. (5), and Eq. (7), respectively.

Line  VB  LB  Ave  Max  Sel  Params  1-10  11-20  1-20
1     (Fine-tuning baseline)         232M   33.0  68.1   50.6
2             ✓                      464M   42.9  69.7   56.3
3                  ✓                 464M   42.2  71.0   56.6
4                       ✓            464M   71.9  72.5   72.2
5     ✓                 ✓            232M   70.2  70.0   70.1
6         ✓             ✓            232M   35.2  71.0   53.1
7     ✓   ✓   ✓                      232M   38.8  65.8   52.3
8     ✓   ✓        ✓                 232M   33.4  68.3   50.9
9     ✓   ✓             ✓            232M   71.2  70.0   70.6

Multi-step. Figure 5 shows the AP in the 40+20×2 and 40+10×4 settings on COCO 2017. The first-phase AP of our method and of the related work are 44.5 and 44.1, respectively. Compared with the state-of-the-art method CL-DETR, the AP of our final stage improves by 2.1 and 2.5 percentage points in the 40+20×2 and 40+10×4 settings, respectively. This fully demonstrates that our method is stable and maintains its ability to mitigate catastrophic forgetting under different scenarios.

Ablation Study

We validate the effectiveness of the various parts of our method on the VOC 2007 dataset; all experiments are performed in the 10+10 setting.

Effectiveness of TAM and SIS. Table 4 illustrates the effectiveness of using the channel selection strategy on different branches and of the different inference strategies.
As shown in lines 2-4, for each task we train a separate model to learn an independent representation and ensemble all predictions by Eq. (6), Eq. (5), and Eq. (7), in the spirit of DER (Yan, Xie, and He 2021). We find that our inference strategy, Eq. (7), is effective. Comparing line 4 with line 9, although the former achieves better performance, its model has twice as many parameters as our method; as the number of incremental tasks increases, the parameters grow linearly, resulting in a huge storage burden. Lines 5-6 and 9 illustrate the effectiveness of the TAM on different branches. When TAM is used in the language branch (LB) only, the old classes' performance improves (line 6) compared to direct fine-tuning (line 1). There is a huge improvement when applying TAM only in the visual branch (line 5). The results demonstrate that our method reinforces the separability between image features and language representations, effectively solving the background-foreground conflict problem of class-incremental object detection. Lines 7 and 8 use Eq. (6) and Eq. (5), respectively, to directly average or maximize the alignment scores, which produces many false-positive predictions due to the alignment confusion between tasks, making them very ineffective. Line 9 uses Eq. (7) and achieves peak performance with the selective alignment-score strategy.

Figure 6: Visualisation of the VOC 2007 dataset under the 10+10 setting (examples from Task 1: Cat, Cow, Bicycle; Task 2: Tv, Person). The first column is the original image; the second and third columns are feature maps using {M_1^image, M_1^text} and {M_2^image, M_2^text} respectively.

Analysis of Selection Strategies. We evaluate four different selection strategies for image feature-map channels and text-embedding hidden states (shown in Table 3).
The first and second rows use 75% and 25% of the channels and hidden states for the first task, respectively. We find that the performance on the old and new classes is related to the number of channels and hidden states used, so we divide them among the tasks according to the number of classes, which achieves the best results (last row). Notably, in the third row, when we randomly select the channels and hidden states (so they are not all consecutive), the results do not differ much from those obtained with consecutive ones, which demonstrates the generalisability of our method.

Visualized Analysis

We conduct a visual analysis of the feature maps of images after deep fusion; Figure 6 illustrates the results. The first column displays the original image, the second column shows the feature map obtained after deep fusion using M_1^image and M_1^text, and the third column shows the feature map obtained using M_2^image and M_2^text. From the visualization, it is evident that our task-aware language-image learning method effectively segregates the categories of different tasks in the feature maps: it focuses solely on the regions specific to each task, ensuring task-specific information is captured accurately.

Conclusion

In this paper, we present the first application of a visual-language detector to class-incremental object detection. The language-image detector is found to better mitigate catastrophic forgetting when there are few categories, but it fails as the number of categories grows due to increased task alignment confusion. To this end, we propose to learn a task-aware language-image representation that segregates visual feature-map channels and text-embedding hidden states across tasks. State-of-the-art results are achieved on both the VOC 2007 and COCO 2017 benchmark datasets, demonstrating the effectiveness of our approach.
Acknowledgments

This work is supported by the National Key Research and Development Program of China (2021ZD0111000); National Natural Science Foundation of China (No. 62222602, 62302167, U23A20343, 62176092, 62106075, 62176224), Shanghai Science and Technology Commission (No. 21511100700), Development Project of Ministry of Industry and Information Technology (ZTZB-23-990-016), Shanghai Sailing Program under Grant (23YF1410500), Natural Science Foundation of Shanghai (23ZR1420400), Natural Science Foundation of Chongqing (CSTB2023NSCQ-JQX0007, CSTB2023NSCQ-MSX0137), and CCF-Tencent Rhino-Bird Young Faculty Open Research Fund (RAGR20230121).

References

Cai, T.; Zhang, Z.; Tan, X.; Qu, Y.; Jiang, G.; Wang, C.; and Xie, Y. 2023. Multi-Centroid Task Descriptor for Dynamic Class Incremental Inference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7298–7307. Cermelli, F.; Geraci, A.; Fontanel, D.; and Caputo, B. 2022. Modeling missing annotations for incremental learning in object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3700–3710. Dai, X.; Chen, Y.; Xiao, B.; Chen, D.; Liu, M.; Yuan, L.; and Zhang, L. 2021a. Dynamic head: Unifying object detection heads with attentions. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7373–7382. Dai, Z.; Cai, B.; Lin, Y.; and Chen, J. 2021b. Up-detr: Unsupervised pre-training for object detection with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1601–1610. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE conference on computer vision and pattern recognition, 248–255. Dong, N.; Zhang, Y.; Ding, M.; and Bai, Y. 2023. Class-incremental object detection.
Pattern Recognition, 139: 109488. Feng, T.; Wang, M.; and Yuan, H. 2022. Overcoming catastrophic forgetting in incremental object detection via elastic response distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9427– 9436. Girshick, R. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, 1440–1448. Guan, L.; Wu, Y.; Zhao, J.; and Ye, C. 2018. Learn to detect objects incrementally. In Proceedings of 2018 IEEE Intelligent Vehicles Symposium, 403–408. Hu, X.; Tang, K.; Miao, C.; Hua, X.-S.; and Zhang, H. 2021. Distilling causal effect of data in class-incremental learning. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, 3957–3966. Joseph, K.; Khan, S.; Khan, F. S.; and Balasubramanian, V. N. 2021a. Towards open world object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5830–5840. Joseph, K.; Rajasegaran, J.; Khan, S.; Khan, F. S.; and Balasubramanian, V. N. 2021b. Incremental object detection via meta-learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12): 9209–9216. Kenton, J. D. M.-W. C.; and Toutanova, L. K. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, 4171–4186. Li, J.; Xu, R.; Ma, J.; Zou, Q.; Ma, J.; and Yu, H. 2023a. Domain adaptive object detection for autonomous driving under foggy weather. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 612–622. Li, L. H.; Zhang, P.; Zhang, H.; Yang, J.; Li, C.; Zhong, Y.; Wang, L.; Yuan, L.; Zhang, L.; Hwang, J.-N.; et al. 2022. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10965–10975. Li, W. L.; Gao, B.-B.; Xia, B.; Wang, J.; Liu, J.; Liu, Y.; Wang, C.; and Zheng, F. 2023b. 
Cross-Modal Alternating Learning with Task-Aware Representations for Continual Learning. IEEE Transactions on Multimedia. Li, X.; Wang, W.; Wu, L.; Chen, S.; Hu, X.; Li, J.; Tang, J.; and Yang, J. 2020. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. In Proceedings of Advances in Neural Information Processing Systems, 21002–21012. Li, Z.; and Hoiem, D. 2017. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12): 2935–2947. Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; and Belongie, S. 2017a. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2117–2125. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017b. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, 2980–2988. Liu, L.; Kuang, Z.; Chen, Y.; Xue, J.-H.; Yang, W.; and Zhang, W. 2020. Incdet: In defense of elastic weight consolidation for incremental object detection. IEEE transactions on neural networks and learning systems, 32(6): 2306–2319. Liu, S.; Zeng, Z.; Ren, T.; Li, F.; Zhang, H.; Yang, J.; Li, C.; Yang, J.; Su, H.; Zhu, J.; et al. 2023a. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499. Liu, Y.; Schiele, B.; Vedaldi, A.; and Rupprecht, C. 2023b. Continual detection transformer for incremental object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 23799–23808. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, 10012–10022.
Peng, C.; Zhao, K.; Maksoud, S.; Li, M.; and Lovell, B. C. 2021. SID: Incremental learning for anchor-free object detection via Selective and Inter-related Distillation. Computer vision and image understanding, 210: 103229. Petit, G.; Popescu, A.; Schindler, H.; Picard, D.; and Delezoide, B. 2023. Fetril: Feature translation for exemplar-free class-incremental learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 3911–3920. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In Proceedings of International conference on machine learning, 8748–8763. Shieh, J.-L.; Haq, Q. M. u.; Haq, M. A.; Karam, S.; Chondro, P.; Gao, D.-Q.; and Ruan, S.-J. 2020. Continual learning strategy in one-stage object detection framework based on experience replay for autonomous driving vehicle. Sensors, 20(23): 6777. Shmelkov, K.; Schmid, C.; and Alahari, K. 2017. Incremental learning of object detectors without catastrophic forgetting. In Proceedings of the IEEE international conference on computer vision, 3400–3409. Tian, X.; Zhang, Z.; Tan, X.; Liu, J.; Wang, C.; Qu, Y.; Jiang, G.; and Xie, Y. 2023. Instance and Category Supervision are Alternate Learners for Continual Learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5596–5605. Tian, Z.; Shen, C.; Chen, H.; and He, T. 2019. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 9627–9636. Wang, F.-Y.; Zhou, D.-W.; Ye, H.-J.; and Zhan, D.-C. 2022. Foster: Feature boosting and compression for class-incremental learning. In Proceedings of European conference on computer vision, 398–414. Wang, R.; Duan, X.; Kang, G.; Liu, J.; Lin, S.; Xu, S.; Lü, J.; and Zhang, B.
2023. AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3654–3663. Xu, G.; Khan, A. S.; Moshayedi, A. J.; Zhang, X.; and Shuxin, Y. 2022. The Object Detection, Perspective and Obstacles In Robotic: A Review. EAI Endorsed Transactions on AI and Robotics, 1(1). Xu, Y.; Zhang, M.; Fu, C.; Chen, P.; Yang, X.; Li, K.; and Xu, C. 2023. Multi-modal Queried Object Detection in the Wild. In Proceedings of Neural Information Processing Systems. Yan, S.; Xie, J.; and He, X. 2021. Der: Dynamically expandable representation for class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3014–3023. Yang, D.; Zhou, Y.; Zhang, A.; Sun, X.; Wu, D.; Wang, W.; and Ye, Q. 2022. Multi-view correlation distillation for incremental object detection. Pattern Recognition, 131: 108863. Yang, Y.; Zhou, D.-W.; Zhan, D.-C.; Xiong, H.; Jiang, Y.; and Yang, J. 2021. Cost-effective incremental deep model: Matching model capacity with the least sampling. IEEE Transactions on Knowledge and Data Engineering, 35: 3575–3588. Zhai, X.; Wang, X.; Mustafa, B.; Steiner, A.; Keysers, D.; Kolesnikov, A.; and Beyer, L. 2022. Lit: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18123–18133. Zhao, H.; Wang, H.; Fu, Y.; Wu, F.; and Li, X. 2021. Memory-efficient class-incremental learning for image classification. IEEE Transactions on Neural Networks and Learning Systems, 33(10): 5966–5977. Zhao, Z.; Zhang, Z.; Tan, X.; Liu, J.; Qu, Y.; Xie, Y.; and Ma, L. 2023. Rethinking Gradient Projection Continual Learning: Stability/Plasticity Feature Space Decoupling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3718–3727. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022. Learning to prompt for vision-language models. 
International Journal of Computer Vision, 130(9): 2337–2348. Zhou, W.; Chang, S.; Sosa, N.; Hamann, H.; and Cox, D. 2020. Lifelong object detection. arXiv preprint arXiv:2009.01129. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2021. Deformable detr: Deformable transformers for end-to-end object detection. In Proceedings of International Conference on Learning Representations. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7104
2024
789
18,616
Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks

Anastasia Antsiferova 1,2, Khaled Abud 3, Aleksandr Gushchin 1,2,3, Ekaterina Shumitskaya 3, Sergey Lavrushkin 1,2, Dmitriy Vatolin 1,2,3
1 MSU Institute for Artificial Intelligence
2 ISP RAS Research Center for Trusted Artificial Intelligence
3 Lomonosov Moscow State University
{aantsiferova, khaled.abud, alexander.gushchin, ekaterina.shumitskaya, sergey.lavrushkin, dmitriy}@graphics.cs.msu.ru

Abstract

Nowadays, neural-network-based image- and video-quality metrics perform better than traditional methods. However, they also became more vulnerable to adversarial attacks that increase metrics' scores without improving visual quality. The existing benchmarks of quality metrics compare their performance in terms of correlation with subjective quality and calculation time. Nonetheless, the adversarial robustness of image-quality metrics is also an area worth researching. This paper analyses modern metrics' robustness to different adversarial attacks. We adapted adversarial attacks from computer vision tasks and compared attacks' efficiency against 15 no-reference image- and video-quality metrics. Some metrics showed high resistance to adversarial attacks, which makes their usage in benchmarks safer than vulnerable metrics. The benchmark accepts submissions of new metrics for researchers who want to make their metrics more robust to attacks or to find such metrics for their needs. The latest results can be found online: https://videoprocessing.ai/benchmarks/metrics-robustness.html.

Introduction

Nowadays, most new image- and video-quality metrics (IQA/VQA) employ deep learning. For example, in the latest NTIRE challenge on perceptual quality assessment (Gu et al. 2022), all winning methods were based on neural networks.
With the increased sizes of datasets and availability of crowdsourced markup, deep-learning-based metrics started to outperform traditional approaches in correlation with subjective quality. However, learning-based methods, including IQA/VQA metrics, are more vulnerable to adversarial attacks. A simple metric like PSNR is more stable to image modifications that aim to manipulate quality scores (any changed pixel will decrease the score), whereas the behaviour of deep metrics is much more complex. The existing benchmarks evaluate metrics' correlation with subjective quality but do not consider their robustness. At the same time, the possibility of manipulating IQA/VQA metrics' scores is already being exploited in different real-life scenarios. Below are some examples of such scenarios and the potential negative impacts of using non-robust IQA/VQA.

Decrease of perceptual quality. Metrics-oriented optimization modes are already being implemented in video encoders. libaom (Deng, Han, and Xu 2020) and LCEVC (VNova 2023) have options that optimize the bitstream to increase the VMAF score. Such tuning was designed to improve the visual quality of the encoded video; however, as VMAF is a learning-based metric, it may decrease perceptual quality (Zvezdakova et al. 2019; Siniukov et al. 2021). Using unstable image quality metrics as a perceptual proxy in a loss function may lead to incorrect restoration results (Ding et al. 2021). For instance, LPIPS is widely used as a perceptual metric, but optimizing its scores leads to increased brightness (Kettunen, Härkönen, and Lehtinen 2019), which is unwanted or even harmful (for example, when analyzing medical images).

Cheating in benchmarks. The developers of image- and video-processing methods can use metrics' vulnerabilities to achieve better competition results.
For example, although LPIPS has already been shown to be vulnerable to adversarial attacks, it is still used as the main metric in some benchmarks, e.g. to compare super-resolution methods (Zhang et al. 2021). In some competitions that publish both the results of subjective comparisons and objective quality scores, we can see a vast difference between these leaderboards. For instance, the VMAF leaders in the 2021 Subjective Video Codecs Comparisons differ from the leaders by subjective quality (Comparison 2021).

Manipulating the results of image web search. Search engines use not only keywords and descriptions but also image quality measurement to rank image search results. For example, the developers of Microsoft Bing used image quality as one of the features to improve its output (Bing 2013). As shown in the MediaEval 2020 Pixel Privacy: Quality Camouflage for Social Images competition (MediaEval 2020), there are a variety of ways to fool image quality estimators.

Our study highlights the necessity of measuring the adversarial robustness of contemporary metrics for the research community. There are different ways to cheat on IQA/VQA metrics, such as increasing or decreasing their scores. In our study, we focus on analyzing metrics' resistance to attacks that increase estimated quality scores, as this kind of attack has already appeared in many real-life cases. Also, by choosing to investigate metrics' stability to score increases, we do not limit the generality of the results. We believe that the existing image- and video-quality metrics benchmarks must be supplemented with metrics' robustness analysis. In this paper, we make a first attempt to do this and apply several types of adversarial attacks to a number of quality metrics. Our contributions are as follows: a new benchmark methodology, a leaderboard published online [1], and an analysis of currently obtained results.
We published our code [2] for generating adversarial attacks and a list of open datasets used in this study, so the developers of IQA/VQA methods can measure the stability of their methods to attacks. For those who want their approach published on our website, the benchmark accepts new submissions of quality metrics. Try our benchmark using pip install robustness-benchmark.

Related Work

Depending on the availability of the undistorted image, IQA/VQA metrics can be divided into three types: no-reference (NR), full-reference (FR) or reduced-reference (RR). NR metrics have the broadest applications but generally show lower correlations with subjective quality than FR and RR metrics. However, recent results show that new NR metrics outperformed many existing FR methods, so we mainly focused on NR metric evaluation in this paper. The performance of IQA/VQA metrics is traditionally evaluated using subjective tests that measure the correlation of metric scores with perceptual ones. The most well-known comparisons were published within the NTIRE Workshop (Gu et al. 2022), and two benchmarks currently accept new submissions: the MSU Video Quality Metrics Benchmark (Antsiferova et al. 2022) and UGC-VQA (Tu et al. 2021). These studies show how well the compared metrics estimate subjective quality but do not reflect their robustness to adversarial attacks. There are different ways to measure the robustness of neural-network-based methods. It can be done via theoretical estimations, e.g. Lipschitz regularity. However, this approach has many limitations, including the number of parameters in the evaluated network. A more universal approach is based on applying adversarial attacks. This area is widely studied for computer vision models. However, not all methods can be adapted to attack quality metrics. The first methods for measuring the robustness of IQA/VQA metrics were based on creating a specific situation in which the metric potentially fails.
Ciaramello and Reibman (2011a) first conducted such an analysis and proposed a method to reveal the potential vulnerabilities of an objective quality model based on the generation of image or video pairs with the intent to cause misclassification errors (Brill et al. 2004) by this model. Misclassification errors include false ordering (FO, the objective model rates a pair opposite to humans), false differentiation (FD, the objective model rates a pair as different but humans do not), and false tie (FT, humans order a pair as different, but the objective model does not). H. Liu and A. Reibman (2016) introduced a software tool called "STIQE" that automatically explores an image-quality metric's performance. It allows users to execute tests and then generate reports to determine how well the metric performs. Testing consists of applying several varying distortions to images and checking whether the metric score rises monotonically with the degree of the applied distortion. Nowadays, metrics' adversarial robustness is primarily estimated by adapting attacks designed for computer vision tasks to image quality metrics. A more detailed description of the existing attacks against metrics that we used in our study is given in the section "List of Adversarial Attacks".

[1] https://videoprocessing.ai/benchmarks/metrics-robustness.html
[2] https://github.com/msu-video-group/MSU_Metrics_Robustness_Benchmark

Benchmark                      | # attacks / # metrics | Metrics type | Test datasets
-------------------------------|-----------------------|--------------|----------------------
Ciaramello and Reibman (2011a) | 5 / 4                 | FR           | 10 images
Ciaramello and Reibman (2011b) | 5 / 9                 | NR, FR       | 473 images
Liu and Reibman (2016)         | 5 / 11                | NR, FR       | 60 images
Shumitskaya et al. (2022)      | 1 / 7                 | NR           | 20 videos
Zhang et al. (2022)            | 1 / 4                 | NR           | 12 images
Ghildyal and Liu (2023)        | 6 / 5                 | FR           | 12,227 images
Ours                           | 9 / 15                | NR, FR       | 3000 images, 1 video

Table 1: Comparisons of image- and video-quality metrics' stability to adversarial attacks.
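The STIQE-style monotonicity test described above is straightforward to sketch: apply one distortion at increasing severities and check that the metric score changes monotonically. A minimal numpy sketch, where `metric` is a toy stand-in (negative noise energy), not any real IQA model:

```python
import numpy as np

def metric(img, ref):
    # Toy stand-in for an IQA model: higher score = closer to the reference.
    return -float(np.mean((img - ref) ** 2))

def monotonicity_check(ref, severities, seed=0):
    """Distort `ref` with the same noise pattern at growing severities and
    report whether the metric score falls monotonically."""
    noise = np.random.default_rng(seed).normal(size=ref.shape)
    scores = [metric(np.clip(ref + s * noise, 0.0, 1.0), ref) for s in severities]
    is_monotone = all(a > b for a, b in zip(scores, scores[1:]))
    return is_monotone, scores

ref = np.full((8, 8), 0.5)
ok, scores = monotonicity_check(ref, [0.05, 0.1, 0.2, 0.4])
```

A real harness would sweep several distortion types (blur, noise, compression) and report which ones break monotonicity.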
There are three recently published attacks that we aim to add to the benchmark shortly: a new CNN-based generative attack, FACPA (Shumitskaya, Antsiferova, and Vatolin 2023); an attack with a human in the loop (Zhang et al. 2022); and a spatial attack adapted for metrics (Ghildyal and Liu 2023). Recently, a new study on the adversarial robustness of full-reference metrics was published (Ghildyal and Liu 2023). The authors showed that six full-reference metrics are susceptible to imperceptible perturbations generated via common adversarial attacks such as FGSM (Goodfellow, Shlens, and Szegedy 2015), PGD (Madry et al. 2017), and the One-pixel attack (Su, Vargas, and Sakurai 2019). They also showed that adversarial perturbations crafted for the LPIPS metric (Zhang et al. 2018) using the stAdv attack can be transferred to other metrics. As a result, they concluded that more accurate learning-based metrics are less robust to adversarial attacks than traditional ones. We summarised the existing research on IQA/VQA metrics' robustness to adversarial attacks in Table 1.

Benchmark

List of Metrics

In this paper, we focused on the evaluation of only no-reference metrics for several reasons: firstly, there exists a similar evaluation of full-reference metrics (Ghildyal and Liu 2023); secondly, no-reference metrics have a more comprehensive range of applications and are more vulnerable to attacks; thirdly, these metrics are mostly learning-based. We considered state-of-the-art metrics according to other benchmarks and various other no-reference metrics. All tested metrics assess image quality, except for VSFA (Li, Jiang, and Jiang 2019) and MDTVSFA (Li, Jiang, and Jiang 2021), which are designed for videos. RankIQA (Liu, Van De Weijer, and Bagdanov 2017) pretrains a model on a large dataset with synthetic distortions to compare pairs of images, then fine-tunes it on a small realistic dataset.
MetaIQA (Zhu et al. 2020) introduces a quality prior model pre-trained on several dozens of specific distortions and fine-tuned on a smaller target dataset, similar to RankIQA. WSP (Su and Korhonen 2020) is concerned with the Global Average Pooling feature aggregation used by most existing methods and replaces it with Weighted Spatial Pooling to distinguish important locations. CLIP-IQA (Wang, Chan, and Loy 2023) predicts the quality perception and image-provoked abstract emotions by feeding heterogeneous text prompts and the image to the CLIP network. PAQ-2-PIQ (Ying et al. 2020) introduces a large subjective picture quality database of about 40,000 images, trains a CNN with a ResNet-18 backbone to predict patch quality and combines the predictions with RoI pooling. HyperIQA (Su et al. 2020) focuses on real-life IQA and proposes a hyperconvolutional network that predicts the weights of fully connected layers. MANIQA (Yang et al. 2022) assesses the quality of images with GAN-based distortions. The model uses vision transformer features processed by proposed network modules to enhance global and local interactions; the final score prediction utilizes patch weighting. TReS (Golestaneh, Dadsetan, and Kitani 2022) proposes to compute local features with a CNN and non-local features with self-attention, and introduces a per-batch loss for correct ranking and a self-supervision loss between reference and flipped images. FPR (Chen et al. 2022) hallucinates pseudo-reference features from the distorted image using mutual learning on reference and distorted images with a triplet loss; attention maps are predicted to aggregate scores over patches. VSFA (Li, Jiang, and Jiang 2019) estimates video quality using ResNet-50 features for content awareness and differentiable temporal aggregation, which consists of gated recurrent units with min pooling. MDTVSFA (Li, Jiang, and Jiang 2021) enhances VSFA with an explicit mapping between predicted and dataset-specific scores, supported by multi-dataset training.
NIMA (Talebi and Milanfar 2018) predicts a distribution of scores instead of regressing a single value and considers both technical and aesthetic image scores. It is trained on the Aesthetic Visual Analysis database using squared earth mover's distance as a loss. LINEARITY (Li, Jiang, and Jiang 2020) invents the norm-in-norm loss, which shows ten times faster convergence than MSE or MAE with a ResNet architecture. SPAQ (Fang et al. 2020) collects a database of 11,125 smartphone photos, proposes a ResNet-50 baseline model and three modified versions incorporating EXIF data (MT-E), subjective image attributes (MT-A) and scene labels (MT-S). KonCept512 (Hosu et al. 2020) collects KonIQ-10k, a diverse crowdsourced database of 10,073 images, and trains a model with an InceptionResNetV2 backbone. We also used MSE, PSNR and SSIM (Wang et al. 2004) as proxy metrics to estimate image quality degradation after attacks. The choice is motivated by their structure (full-reference and not learning-based), which makes them more stable to adversarial attacks.

List of Adversarial Attacks

In all attacks, we define the loss function as $J(\theta, I) = 1 - \mathrm{score}(I)/\mathrm{range}$ and minimize it by making small steps along the gradient direction in image space, which increases the attacked metric score. $\mathrm{range}$ is computed as the difference between maximum and minimum metric values on the dataset and serves to normalize the gradient magnitude across different metrics.

FGSM-based attacks are performed for each image. The pixel difference is limited by $\varepsilon$. FGSM (Goodfellow, Shlens, and Szegedy 2015) is a basic approach that makes one gradient step: $I^{adv} = I - \varepsilon \cdot \mathrm{sign}(\nabla_I J(\theta, I))$. I-FGSM (Kurakin, Goodfellow, and Bengio 2018) is a more computationally expensive method that uses $T$ iterations and clips the image on each step: $I^{adv}_{t+1} = \mathrm{Clip}_{I,\varepsilon}\{I^{adv}_t - \alpha \cdot \mathrm{sign}(\nabla_I J(\theta, I^{adv}_t))\}$, where $t = 0, 1, \ldots, T-1$, $I^{adv}_0$ is the input image $I$, and $\alpha$ is the perturbation intensity.
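A minimal numpy sketch of the I-FGSM loop above. The benchmark attacks differentiable learned metrics via backpropagation; here `score` is a hypothetical quadratic stand-in with a closed-form gradient, so the example stays self-contained:

```python
import numpy as np

C = 0.7  # toy "high-quality" target pixel value; hypothetical, for illustration

def score(img):
    return -np.sum((img - C) ** 2)  # stand-in metric: higher when closer to C

def score_grad(img):
    return -2.0 * (img - C)

def ifgsm(img, eps=0.05, alpha=0.01, steps=10, value_range=1.0):
    """I-FGSM on J = 1 - score/range: step along -sign(grad J),
    clipping to the eps-ball around the original image every iteration."""
    adv = img.copy()
    for _ in range(steps):
        grad_J = -score_grad(adv) / value_range
        adv = adv - alpha * np.sign(grad_J)
        adv = np.clip(adv, img - eps, img + eps)  # enforce |I_adv - I| <= eps
        adv = np.clip(adv, 0.0, 1.0)              # keep valid pixel values
    return adv

img = np.random.default_rng(0).uniform(0.0, 1.0, size=(8, 8))
adv = ifgsm(img)
```

The same loop becomes MI-FGSM by accumulating a momentum term over `grad_J` before taking the sign.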
The clipped pixel value at position $(x, y)$ and channel $c$ satisfies $|I^{adv}_t(x, y, c) - I(x, y, c)| < \varepsilon$. PGD (Madry et al. 2017) is identical to I-FGSM except for the random initialization in the $\varepsilon$-vicinity of the original image; due to its similarity to I-FGSM, we didn't include it in the experiments. MI-FGSM (Dong et al. 2018) uses gradient momentum: $I^{adv}_{t+1} = \mathrm{Clip}_{I,\varepsilon}\{I^{adv}_t - \alpha \cdot \mathrm{sign}(g_t)\}$, $t = 0, 1, \ldots, T-1$, $g_t = \nabla_I J(\theta, I^{adv}_t) + \nu \cdot g_{t-1}$, $g_{-1} = 0$, where $\nu$ controls the momentum preservation. AMI-FGSM (Sang et al. 2022) is identical to MI-FGSM, except that the pixel difference limit $\varepsilon$ is set to $1/\mathrm{NIQE}(I)$ by computing the NIQE (Mittal, Soundararajan, and Bovik 2012) no-reference metric.

Universal Adversarial Perturbation (UAP)-based attacks generate an adversarial perturbation for an attacked metric that is the same for all images and videos. Once a UAP is generated, the attack consists of simply adding the UAP to the image; the outcome is an image with an increased target metric score. We used three methods to train UAPs. Cumulative-UAP is obtained by averaging non-universal perturbations over the training dataset; the non-universal perturbations are generated using one step of gradient descent. Optimized-UAP is obtained by training the UAP weights in batches with the Adam optimizer and a loss function defined as the target metric with the opposite sign. Generative-UAP is obtained by training an auxiliary U-Net generator. The network is trained to generate a UAP from random noise with uniform distribution; the Adam optimizer is used for training, and the loss function is defined as the target metric with the opposite sign. Once the network is trained, a generated UAP is saved and further used to attack new images.

Perceptual-aware attacks use other image quality metrics to control attack imperceptibility to the human eye. Korhonen et al.
(Korhonen and You 2022) propose a method for generating adversarial images for NR quality metrics with perturbations located in textured regions. They use gradient descent with an additional elementwise multiplication of the gradients by a spatial activity map. The spatial activity map of an image is calculated using horizontal and vertical 3×3 Sobel filters. MADC (Wang and Simoncelli 2008) is a method for comparing two image- or video-quality metrics by constructing a pair of examples that maximize or minimize the score of one metric while keeping the other fixed. In our study, we fixed MSE while maximizing an attacked metric. A projected gradient descent step and a binary search are performed on each iteration. Let $g_1$ be the gradient in the direction that increases the attacked metric and $g_2$ the gradient of MSE on some iteration. The projected gradient is then calculated as $pg = g_1 - \frac{g_2^\top g_1}{g_2^\top g_2} \cdot g_2$. After the projected gradient descent step, a binary search is performed to guarantee a fixed MSE (with 0.04 precision): small steps are taken along the MSE gradient, in the direction that reduces MSE if the deviation exceeds the 0.04 precision, and vice versa.

Methodology

Datasets

This study incorporated pre-trained quality metrics as a part of our evaluation benchmark. We did not perform metrics fine-tuning on any data. We used six datasets summarised in Table 2. These datasets are widely used in the computer vision field. We chose them to cover a diverse range of real-life scenarios, including images and video, with resolutions varying from 299 × 299 up to 1920 × 1080 (FullHD). All datasets have an open license that allows them to be used in this work. Our analysis categorized the adversarial attacks into trainable and non-trainable attacks. Three datasets were used to train adversarial attacks, and three were used for testing.
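Stepping back to the MADC attack described earlier: its projection $pg = g_1 - \frac{g_2^\top g_1}{g_2^\top g_2} g_2$ removes from the attack gradient $g_1$ its component along the MSE gradient $g_2$, so a step along $pg$ raises the attacked metric while, to first order, leaving MSE unchanged. A small numpy sketch:

```python
import numpy as np

def project_gradient(g1, g2):
    """pg = g1 - (g2^T g1 / g2^T g2) * g2: the part of g1 orthogonal to g2."""
    g1 = g1.ravel().astype(float)
    g2 = g2.ravel().astype(float)
    return g1 - (g2 @ g1) / (g2 @ g2) * g2

rng = np.random.default_rng(2)
g1 = rng.normal(size=64)  # stand-in gradient that increases the attacked metric
g2 = rng.normal(size=64)  # stand-in gradient of MSE
pg = project_gradient(g1, g2)
```

By construction `pg` is orthogonal to `g2` (no first-order MSE change) yet keeps a positive component along `g1`, so a small step still increases the attacked metric.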
We trained UAP attacks using each training dataset, resulting in three versions of each attack. These versions were subsequently evaluated on the designated testing datasets, and the results for the different versions were averaged within each UAP-attack type and amplitude. Non-trainable attacks were directly evaluated on the testing datasets. We analyzed the efficiency and generalization capabilities of both trainable and non-trainable adversarial attacks across various data domains while also considering the influence of training data on metric robustness. The NIPS 2017: Adversarial Learning Development Set (2017) was also used to train metrics' domain transformations (described further in "Evaluation Metrics").

Implementation Details

We used public source code for all metrics without additional pretraining and selected the default parameters to avoid overfitting. The training and evaluation of attacks on the metrics were fully automated. We employed the CI/CD tools within a GitLab repository for our measurement procedures. We established an end-to-end pipeline from the attacked metrics' original repositories to the resulting robustness scores to make the results entirely verifiable and reproducible. The pipeline scheme, the list of used attack hyperparameters and the hyperparameter choice justification are presented in the supplementary materials (Antsiferova et al. 2023). UAP-based attacks (UAP, cumulative UAP and generative UAP) were averaged over three different amplitudes (0.2, 0.4 and 0.8). Quality metrics implementations were obtained from official repositories. We only modified interfaces to meet our requirements and used the default parameters provided by the authors. Links to the original repositories and a list of applied patches (where it was needed to enable gradients) are provided in the supplementary materials (Antsiferova et al. 2023).
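As a toy illustration of the UAP protocol above (fit one perturbation on a training set, then add it unchanged to unseen test images), here is an Optimized-UAP-style sketch. The quadratic `score` is a hypothetical stand-in for a differentiable metric, and a sign-step optimizer replaces Adam to keep the example dependency-free:

```python
import numpy as np

C = 0.7  # toy "preferred" pixel value for the stand-in metric (hypothetical)

def score(img):
    return -float(np.mean((img - C) ** 2))

def score_grad(img):
    return -2.0 * (img - C) / img.size

def train_uap(train_imgs, amplitude=0.2, lr=0.05, epochs=50):
    """Train one shared perturbation to raise the mean metric score over the
    training set, clipped to a fixed amplitude (the benchmark uses 0.2/0.4/0.8)."""
    uap = np.zeros_like(train_imgs[0])
    for _ in range(epochs):
        g = np.mean([score_grad(np.clip(im + uap, 0.0, 1.0))
                     for im in train_imgs], axis=0)
        uap = np.clip(uap + lr * np.sign(g), -amplitude, amplitude)
    return uap

rng = np.random.default_rng(1)
train_imgs = [rng.uniform(0.0, 1.0, (8, 8)) for _ in range(16)]
test_img = rng.uniform(0.0, 1.0, (8, 8))  # unseen "test dataset" image
uap = train_uap(train_imgs)
attacked = np.clip(test_img + uap, 0.0, 1.0)
```

The trained perturbation transfers to the unseen image without any further optimization, which is what makes UAP attacks cheap to deploy once trained.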
Calculations were performed on two computers with the following characteristics:
• 4 x GeForce RTX 3090 GPU, Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
• 4 x NVIDIA RTX A6000 GPU, AMD EPYC 7532 32-Core Processor @ 2.40GHz

All calculations took a total of about 2000 GPU hours. The values of parameters ($\varepsilon$, number of iterations, etc.) for the attacks are listed in the supplementary materials (Antsiferova et al. 2023).

Evaluation Metrics

Before calculating metrics' robustness scores, metric values are transformed with min-max scaling so that the values before the attack lie in the range [0, 1]. To compensate for the nonlinear dependence between metrics (Zhang et al. 2022), we converted all metrics to the same domain before comparison. MDTVSFA (Li, Jiang, and Jiang 2021) was used as the primary domain, as it shows the best correlations with MOS among the tested metrics according to the MSU Video Quality Metrics benchmark results. We employed the 1-Dimensional Neural Optimal Transport (Korotin, Selikhanovych, and Burnaev 2023) method to build the nonlinear transformation between the distributions of all metrics to one general shape. We also present the results without the nonlinear transformation in the supplementary materials (Antsiferova et al. 2023).

Absolute and Relative gain. Absolute gain is calculated as the average difference between the metric values before and after the attack. Relative gain is the average ratio of the difference between the metric values before and after the attack to the metric value before the attack plus 1 (1 is added to avoid division problems, as values before the attack are scaled to [0, 1]):

$\mathrm{Abs.gain} = \frac{1}{n}\sum_{i=1}^{n}\left(f(x'_i) - f(x_i)\right), \quad \mathrm{Rel.gain} = \frac{1}{n}\sum_{i=1}^{n}\frac{f(x'_i) - f(x_i)}{f(x_i) + 1},$  (1)

where $n$ is the number of images, $x_i$ is the clean image, $x'_i$ is its attacked counterpart, and $f(\cdot)$ is the IQA metric function.

Robustness score (Zhang et al.
2022) $R_{score}$ is defined as the average ratio of the maximum allowable change in quality prediction to the actual change over all attacked images, on a logarithmic scale:

$R_{score} = \frac{1}{n}\sum_{i=1}^{n}\log_{10}\left(\frac{\max\{\beta_1 - f(x'_i),\, f(x_i) - \beta_2\}}{|f(x'_i) - f(x_i)|}\right).$  (2)

As metric values are scaled, we use $\beta_1 = 1$ and $\beta_2 = 0$. Wasserstein score (Kantorovich 1960) $W_{score}$ and Energy Distance score (Szekely 2002) $E_{score}$ are used to evaluate the statistical differences between the distributions of metric values before and after the attack. Large positive values of these scores correspond to a significant upward shift of the metric's predictions, values near zero indicate the absence of the metric's response to the attack, and negative ones show a decrease in the metric predictions and the inefficiency of the attack. These scores are defined as the corresponding distances between distributions multiplied by the sign of the difference between the mean values before and after the attack:

$W_{score} = W_1(\hat{P}, \hat{Q}) \cdot \mathrm{sign}(\bar{x}_{\hat{Q}} - \bar{x}_{\hat{P}}), \quad W_1(\hat{P}, \hat{Q}) = \inf_{\gamma \in \Gamma(\hat{P}, \hat{Q})} \int_{\mathbb{R}^2} |x - y|\, d\gamma(x, y) = \int_{-\infty}^{\infty} |\hat{F}_{\hat{P}}(x) - \hat{F}_{\hat{Q}}(x)|\, dx;$  (3)

$E_{score} = E(\hat{P}, \hat{Q}) \cdot \mathrm{sign}(\bar{x}_{\hat{Q}} - \bar{x}_{\hat{P}}), \quad E(\hat{P}, \hat{Q}) = \left(2\int_{-\infty}^{\infty} \left(\hat{F}_{\hat{P}}(x) - \hat{F}_{\hat{Q}}(x)\right)^2 dx\right)^{\frac{1}{2}},$  (4)

where $\hat{P}$ and $\hat{Q}$ are the empirical distributions of metric values before and after the attack, $\hat{F}_{\hat{P}}(x)$ and $\hat{F}_{\hat{Q}}(x)$ are their respective empirical Cumulative Distribution Functions, and $\bar{x}_{\hat{P}}$ and $\bar{x}_{\hat{Q}}$ are their respective sample means.

Training datasets (for UAP attacks):
Dataset                    | Type               | Number of samples | Resolution
COCO (2014)                | Images             | 300,000           | 640 × 480
Pascal VOC (2012)          | Images             | 11,530            | 500 × 333
Vimeo-90k Train set (2019) | Triplets of images | 2,001             | 448 × 256

Testing datasets:
Dataset                    | Type               | Number of samples | Resolution
NIPS (2017)                | Images             | 1,000             | 299 × 299
Derf's collection (2001)   | Video              | 24 (∼10k frames)  | 1920 × 1080
Vimeo-90k Test set (2019)  | Triplets of images | 11,346            | 448 × 256

Table 2: Summary of the datasets used in our study.
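Equations (1)-(4) can be computed directly from the per-image metric values before and after an attack. The numpy sketch below assumes values are already min-max scaled to [0, 1]; for two equal-size 1-D samples, $W_1$ reduces to the mean absolute difference of sorted values, and the energy-distance integral is evaluated piecewise on the pooled grid of sample points:

```python
import numpy as np

def abs_rel_gain(f_x, f_adv):
    # Eq. (1): absolute and relative gain
    return (float(np.mean(f_adv - f_x)),
            float(np.mean((f_adv - f_x) / (f_x + 1.0))))

def robustness_score(f_x, f_adv, beta1=1.0, beta2=0.0):
    # Eq. (2): log-ratio of maximum allowable change to actual change
    num = np.maximum(beta1 - f_adv, f_x - beta2)
    return float(np.mean(np.log10(num / np.abs(f_adv - f_x))))

def w_score(f_x, f_adv):
    # Eq. (3): signed Wasserstein-1 distance between the two 1-D samples
    w1 = np.mean(np.abs(np.sort(f_adv) - np.sort(f_x)))
    return float(w1 * np.sign(f_adv.mean() - f_x.mean()))

def e_score(f_x, f_adv):
    # Eq. (4): signed energy distance via empirical CDFs on the pooled grid
    xs = np.sort(np.concatenate([f_x, f_adv]))
    F_p = np.searchsorted(np.sort(f_x), xs, side="right") / f_x.size
    F_q = np.searchsorted(np.sort(f_adv), xs, side="right") / f_adv.size
    integral = np.sum((F_p[:-1] - F_q[:-1]) ** 2 * np.diff(xs))
    return float(np.sqrt(2.0 * integral) * np.sign(f_adv.mean() - f_x.mean()))

f_x = np.array([0.2, 0.4])    # toy scaled scores before the attack
f_adv = np.array([0.5, 0.9])  # toy scaled scores after the attack
```

With an upward score shift, `w_score` and `e_score` come out positive, matching the sign convention in the text.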
Results

The main results of our study are aggregated across the different attack types, training and testing datasets. Tables and figures for other robustness measures, by specific datasets and attacks, are presented in the supplementary materials (Antsiferova et al. 2023) and on the benchmark webpage.

Metrics that are robust to UAP-based attacks. Although the three types of implemented UAP-based attacks result in different attack efficiency, the most and least robust metrics for these attacks are similar. MANIQA showed the best robustness score for all amplitudes of Optimized UAP and is among the top-3 metrics robust to Generative UAP. This metric uses a ViT and applies attention mechanisms across the channel and spatial dimensions, increasing interaction among different regions of images globally and locally. HYPER-IQA showed high resistance to all UAP attacks. Besides FPR, PAQ-2-PIQ showed the worst energy distance score. The robustness scores for the analyzed attacks are provided in Table 3 and illustrated in Fig. 1. Annotations include only the five best and five worst methods judged by robustness score for better visibility.

Figure 1: Metrics' robustness score for UAP-based adversarial attacks and SSIM measured between original and attacked images (panels: Cumulative UAP, Default UAP, and Generative UAP, all with amplitude 0.8). The results are averaged for all test datasets.

Metrics that are robust to iterative attacks. CLIP-IQA shows the best robustness to most iterative attacks, followed by RANK-IQA and MDTVSFA. RANK-IQA also offers the best resistance to the perceptually oriented MADC and Korhonen attacks.
These attacks use approaches to reduce the visibility of distortions caused by an attack, which makes it more difficult for them to succeed. The robustness score for the analyzed attacks is shown in Table 3 and illustrated in Fig. 2. Annotations include only the five best and five worst methods judged by robustness score for better visibility.

Metrics' robustness at different levels of perceptual quality loss. As described in the Benchmark section, we used SSIM, PSNR and MSE as simple proxies for estimating the perceptual quality loss of attacks in this study. Fig. 3 shows the averaged robustness score depending on the SSIM loss of attacked images for all attacks. It shows that all metrics become less robust to attacks when more quality degradation is allowed. HYPER-IQA's robustness is the most independent of SSIM loss among all metrics. Otherwise, PAQ-2-PIQ, VSFA and FPR become more vulnerable than other metrics with increasing SSIM degradation. Results for other proxy metrics (MSE and PSNR) are provided in the supplementary materials (Antsiferova et al. 2023) and on the benchmark webpage.

Metric     | O-UAP  | G-UAP  | C-UAP  | FGSM   | I-FGSM | MI-FGSM | AMI-FGSM | MADC  | Korhonen et al.
-----------|--------|--------|--------|--------|--------|---------|----------|-------|----------------
CLIP-IQA   | 0.632  | 0.397  | 0.067  | 0.398  | 0.836  | 0.821   | 0.819    | 0.823 | 0.812
META-IQA   | 0.183  | -0.029 | 0.003  | 0.529  | 1.307  | 1.285   | 1.287    | 0.934 | 0.997
RANK-IQA   | 0.295  | 0.064  | 0.180  | 0.285  | 1.063  | 0.891   | 0.893    | 0.383 | 0.763
HYPER-IQA  | 0.072  | -0.094 | 0.086  | -0.406 | 1.366  | 1.387   | 1.396    | 0.848 | 1.329
KONCEPT    | 0.419  | 0.187  | 0.435  | 0.574  | 1.248  | 1.066   | 1.066    | 0.753 | 1.042
FPR        | 1.705  | 0.846  | 0.966  | 0.682  | 3.344  | 3.210   | 3.215    | 1.703 | 3.018
NIMA       | -0.024 | 0.046  | 0.018  | 0.258  | 1.203  | 1.147   | 1.148    | 0.959 | 1.041
WSP        | 0.784  | 0.155  | 0.012  | 0.405  | 1.260  | 1.251   | 1.257    | 0.760 | 0.894
MDTVSFA    | 0.756  | 0.359  | 0.005  | 0.185  | 1.011  | 0.983   | 0.983    | 0.914 | 0.805
LINEARITY  | 1.022  | 0.445  | 0.972  | -0.220 | 1.284  | 1.218   | 1.224    | 0.816 | 1.204
VSFA       | 1.151  | 0.361  | 0.014  | 0.306  | 2.054  | 2.272   | 2.274    | 1.470 | 1.539
PAQ-2-PIQ  | 0.943  | 0.252  | 0.873  | 0.578  | 1.190  | 1.123   | 1.125    | 0.536 | 0.997
SPAQ       | 0.605  | 0.357  | 0.560  | 0.266  | 1.514  | 1.371   | 1.375    | 0.740 | 1.301
TRES       | 0.691  | 0.358  | 0.634  | 0.826  | 1.223  | 1.209   | 1.210    | 0.741 | 1.173
MANIQA     | -0.390 | -0.174 | -0.003 | 0.499  | 1.403  | 1.225   | 1.226    | 0.698 | 0.843

Table 3: Metrics' robustness calculated using the energy distance score measure for different types of attacks. The results are averaged across test datasets. O-UAP stands for "Optimised-UAP", G-UAP for "Generative-UAP", C-UAP for "Cumulative-UAP".

Figure 2: Metrics' robustness score for iterative adversarial attacks (panels: FGSM-based, Korhonen et al., and MADC) and SSIM measured between original and attacked images. The results are averaged for all test datasets.

Overall metrics' robustness comparison. Table 4 and Fig. 4 show the general results of our study. First, we see that iterative attacks are more efficient against all metrics. However, metrics' robustness is different for UAP and iterative attacks.
We summarised the robustness for all attack types in the table and compared them using various measures. According to absolute and relative gain, the leaders are the same: MANIQA, NIMA and RANK-IQA, and they also perform well based on other measures. META-IQA and MDTVSFA have high robustness scores. Energy measures also show similar results. FPR is the least stable to adversarial attacks, considering all tests and measures.

Figure 3: Dependency of metrics' robustness score on SSIM loss for attacked images (all types of attacks).

One-sided Wilcoxon signed-rank tests. To study the statistical difference in the results, we conducted one-sided Wilcoxon tests on the values of absolute gains for all pairs of metrics. A table with detailed test results for different types of attacks can be found in the supplementary materials (Antsiferova et al. 2023).
All metrics are statistically superior to the FPR metric, which means that FPR can be significantly increased under the influence of any of the considered attacks. MANIQA, on the contrary, turns out to be one of the most stable metrics for all attacks on average, but it is inferior to CLIP-IQA on FGSM-based attacks. Overall, the results of the Wilcoxon one-sided tests are consistent with our evaluations of the obtained results.

Metric     | Abs.gain ↓           | Rel.gain ↓           | Rscore ↑               | Escore ↓ | Wscore ↓
-----------|----------------------|----------------------|------------------------|----------|---------
CLIP-IQA   | 0.256 (0.254, 0.258) | 0.184 (0.182, 0.185) | 0.702 (0.698, 0.707)   | 0.424    | 0.256
META-IQA   | 0.241 (0.238, 0.243) | 0.182 (0.180, 0.184) | 1.168 (1.161, 1.176)   | 0.324    | 0.241
RANK-IQA   | 0.184 (0.183, 0.186) | 0.120 (0.119, 0.122) | 0.843 (0.839, 0.848)   | 0.285    | 0.184
HYPER-IQA  | 0.232 (0.228, 0.235) | 0.151 (0.149, 0.153) | 0.740 (0.735, 0.745)   | 0.277    | 0.237
KONCEPT    | 0.328 (0.326, 0.330) | 0.227 (0.225, 0.228) | 0.584 (0.579, 0.589)   | 0.489    | 0.328
FPR        | 2.591 (2.568, 2.615) | 1.730 (1.714, 1.746) | -0.229 (-0.234, -0.224)| 1.409    | 2.591
NIMA       | 0.170 (0.168, 0.172) | 0.115 (0.114, 0.117) | 1.152 (1.146, 1.158)   | 0.239    | 0.170
WSP        | 0.380 (0.377, 0.384) | 0.276 (0.273, 0.278) | 0.893 (0.886, 0.901)   | 0.449    | 0.380
MDTVSFA    | 0.279 (0.277, 0.281) | 0.186 (0.184, 0.187) | 0.990 (0.983, 0.998)   | 0.447    | 0.279
LINEARITY  | 0.683 (0.679, 0.687) | 0.447 (0.444, 0.450) | 0.267 (0.263, 0.272)   | 0.780    | 0.683
VSFA       | 0.899 (0.891, 0.907) | 0.611 (0.606, 0.617) | 0.659 (0.650, 0.667)   | 0.739    | 0.899
PAQ-2-PIQ  | 0.521 (0.518, 0.524) | 0.341 (0.338, 0.343) | 0.449 (0.443, 0.454)   | 0.675    | 0.521
SPAQ       | 0.671 (0.665, 0.678) | 0.536 (0.531, 0.542) | 0.493 (0.488, 0.499)   | 0.637    | 0.671
TRES       | 0.433 (0.431, 0.435) | 0.305 (0.304, 0.307) | 0.320 (0.317, 0.323)   | 0.627    | 0.433
MANIQA     | 0.104 (0.101, 0.107) | 0.078 (0.076, 0.080) | 0.986 (0.979, 0.993)   | 0.207    | 0.175

Table 4: Metrics' robustness to tested adversarial attacks according to different stability measures. The results for abs. gain, rel. gain and R-score were averaged across different types of attacks and test datasets, so they are presented with confidence intervals. The Escore and Wscore were calculated using the whole set of attacked results without averaging.

Figure 4: Mean robustness score of compared metrics versus SSIM averages for UAP-based and iterative attacks.

Stable metrics feature analysis. To analyze the relationship of metrics' architectures with robustness, we summarised the main features of the tested metrics in Table 1 of the supplementary materials. A common feature of robust metrics is the usage of input image cropping or resizing. High stability to attacks was also shown by META-IQA, which does not transform input images but uses a relatively small backbone network that leverages prior knowledge of various image distortions obtained during so-called meta-learning.

Conclusion

This paper analyzed the robustness of 15 no-reference image/video-quality metrics to different adversarial attacks. Our analysis showed that all metrics are susceptible to adversarial attacks, but some are more robust than others. MANIQA, META-IQA, NIMA, RANK-IQA and MDTVSFA showed high resistance to adversarial attacks, making their usage in practical applications safer than other metrics. We published this comparison online and are accepting new metrics submissions. This benchmark can be helpful for researchers and companies who want to make their metrics more robust to potential attacks. In this paper, we revealed ways of cheating on image quality measures, which can be considered to have a potential negative social impact.
However, as was discussed in the Introduction, the vulnerabilities of image- and video-quality metrics are already being exploited in some real-life applications. At the same time, only a few studies have been published. We open our findings to the research community to increase the trustworthiness of image/video processing and compression benchmarks. Limitations of our study are listed in the supplementary materials (Antsiferova et al. 2023).

Acknowledgments

The authors would like to thank the video group of MSU Graphics and Media Laboratory, especially Kirill Malyshev and Vyacheslav Napadovsky, for setting up the infrastructure and helping to receive computational results for this research. The work was supported by a grant for research centers in the field of artificial intelligence, provided by the Analytical Center in accordance with the subsidy agreement (agreement identifier 000000D730321P5Q0002) and the agreement with the Ivannikov Institute for System Programming dated November 2, 2021 No. 70-2021-00142.

References

2001. Xiph.org Video Test Media [derf's collection]. https://media.xiph.org/video/derf/.
2017. NIPS 2017: Adversarial Learning Development Set. https://www.kaggle.com/datasets/google-brain/nips-2017-adversarial-learning-development-set.
Antsiferova, A.; Abud, K.; Gushchin, A.; Lavrushkin, S.; Shumitskaya, E.; Velikanov, M.; and Vatolin, D. 2023. Comparing the robustness of modern no-reference image- and video-quality metrics to adversarial attacks. arXiv:2310.06958.
Antsiferova, A.; Lavrushkin, S.; Smirnov, M.; Gushchin, A.; Vatolin, D.; and Kulikov, D. 2022. Video compression dataset and benchmark of learning-based video-quality metrics. In Advances in Neural Information Processing Systems, volume 35, 13814–13825.
Bing, M. 2013. A Behind the Scenes Look at How Bing is Improving Image Search Quality.
https://blogs.bing.com/search-quality-insights/2013/08/23/a-behind-the-scenes-look-at-how-bing-is-improving-image-search-quality.
Brill, M. H.; Lubin, J.; Costa, P.; Wolf, S.; and Pearson, J. 2004. Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1. Signal Processing: Image Communication, 19(2): 101–107.
Chen, B.; Zhu, L.; Kong, C.; Zhu, H.; Wang, S.; and Li, Z. 2022. No-Reference Image Quality Assessment by Hallucinating Pristine Features. IEEE Transactions on Image Processing, 31: 6139–6151.
Ciaramello, F. M.; and Reibman, A. R. 2011a. Supplemental subjective testing to evaluate the performance of image and video quality estimators. In Human Vision and Electronic Imaging XVI, volume 7865, 249–257. SPIE.
Ciaramello, F. M.; and Reibman, A. R. 2011b. Systematic stress testing of image quality estimators. In 2011 18th IEEE International Conference on Image Processing, 3101–3104. IEEE.
Comparison, M. V. C. 2021. MSU Video Codecs Comparison 2021 Part 2: Subjective. http://www.compression.ru/video/codec_comparison/2021/subjective_report.html.
Deng, S.; Han, J.; and Xu, Y. 2020. VMAF based rate-distortion optimization for video coding. In 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), 1–6. IEEE.
Ding, K.; Ma, K.; Wang, S.; and Simoncelli, E. P. 2021. Comparison of full-reference image quality models for optimization of image processing systems. International Journal of Computer Vision, 129: 1258–1281.
Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition, 9185–9193.
Everingham, M.; Van Gool, L.; Williams, C. K. I.; Winn, J.; and Zisserman, A. 2012. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html.
Fang, Y.; Zhu, H.; Zeng, Y.; Ma, K.; and Wang, Z. 2020.
Perceptual quality assessment of smartphone photography. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686.
Ghildyal, A.; and Liu, F. 2023. Attacking Perceptual Similarity Metrics. arXiv preprint arXiv:2305.08840.
Golestaneh, S. A.; Dadsetan, S.; and Kitani, K. M. 2022. No-reference image quality assessment via transformers, relative ranking, and self-consistency. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1220–1230.
Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In Bengio, Y.; and LeCun, Y., eds., 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Gu, J.; Cai, H.; Dong, C.; Ren, J. S.; Timofte, R.; Gong, Y.; Lao, S.; Shi, S.; Wang, J.; Yang, S.; et al. 2022. NTIRE 2022 challenge on perceptual image quality assessment. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 951–967.
Hosu, V.; Lin, H.; Sziranyi, T.; and Saupe, D. 2020. KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing, 29: 4041–4056.
Kantorovich, L. V. 1960. Mathematical Methods of Organizing and Planning Production. Management Science, 6(4): 366–422.
Kettunen, M.; Härkönen, E.; and Lehtinen, J. 2019. E-LPIPS: robust perceptual image similarity via random transformation ensembles. arXiv preprint arXiv:1906.03973.
Korhonen, J.; and You, J. 2022. Adversarial Attacks Against Blind Image Quality Assessment Models. In Proceedings of the 2nd Workshop on Quality of Experience in Visual Multimedia Applications, 3–11.
Korotin, A.; Selikhanovych, D.; and Burnaev, E. 2023. Neural Optimal Transport. In International Conference on Learning Representations.
Kurakin, A.; Goodfellow, I. J.; and Bengio, S. 2018. Adversarial examples in the physical world.
In Artificial intelligence safety and security, 99–112. Chapman and Hall/CRC.
Li, D.; Jiang, T.; and Jiang, M. 2019. Quality assessment of in-the-wild videos. In Proceedings of the 27th ACM International Conference on Multimedia, 2351–2359.
Li, D.; Jiang, T.; and Jiang, M. 2020. Norm-in-norm loss with faster convergence and better performance for image quality assessment. In Proceedings of the 28th ACM International Conference on Multimedia, 789–797.
Li, D.; Jiang, T.; and Jiang, M. 2021. Unified quality assessment of in-the-wild videos with mixed datasets training. International Journal of Computer Vision, 129: 1238–1257.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 740–755. Springer.
Liu, H.; and Reibman, A. R. 2016. Software to stress test image quality estimators. In 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), 1–6. IEEE.
Liu, X.; Van De Weijer, J.; and Bagdanov, A. D. 2017. RankIQA: Learning from rankings for no-reference image quality assessment. In Proceedings of the IEEE international conference on computer vision, 1040–1049.
Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
MediaEval. 2020. Pixel Privacy: Quality Camouflage for Social Images. https://multimediaeval.github.io/editions/2020/tasks/pixelprivacy/.
Mittal, A.; Soundararajan, R.; and Bovik, A. C. 2012. Making a “completely blind” image quality analyzer. IEEE Signal processing letters, 20(3): 209–212.
Sang, Q.; Zhang, H.; Liu, L.; Wu, X.; and Bovik, A. 2022. On the Generation of Adversarial Samples for Image Quality Assessment.
Available at SSRN 4112969.
Shumitskaya, E.; Antsiferova, A.; and Vatolin, D. S. 2022. Universal Perturbation Attack on Differentiable No-Reference Image- and Video-Quality Metrics. In 33rd British Machine Vision Conference 2022, BMVC 2022, London, UK, November 21-24, 2022. BMVA Press.
Shumitskaya, E.; Antsiferova, A.; and Vatolin, D. S. 2023. Fast Adversarial CNN-based Perturbation Attack of No-Reference Image Quality Metrics.
Siniukov, M.; Antsiferova, A.; Kulikov, D.; and Vatolin, D. 2021. Hacking VMAF and VMAF NEG: vulnerability to different preprocessing methods. In 2021 4th Artificial Intelligence and Cloud Computing Conference, 89–96.
Su, J.; Vargas, D. V.; and Sakurai, K. 2019. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5): 828–841.
Su, S.; Yan, Q.; Zhu, Y.; Zhang, C.; Ge, X.; Sun, J.; and Zhang, Y. 2020. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3667–3676.
Su, Y.; and Korhonen, J. 2020. Blind natural image quality prediction using convolutional neural networks and weighted spatial pooling. In 2020 IEEE International Conference on Image Processing (ICIP), 191–195. IEEE.
Szekely, G. J. 2002. E-statistics: The Energy of Statistical Samples. Technical Report 02-16, Bowling Green State University.
Talebi, H.; and Milanfar, P. 2018. NIMA: Neural image assessment. IEEE transactions on image processing, 27(8): 3998–4011.
Tu, Z.; Chen, C.-J.; Wang, Y.; Birkbeck, N.; Adsumilli, B.; and Bovik, A. C. 2021. Video Quality Assessment of User Generated Content: A Benchmark Study and a New Model. In 2021 IEEE International Conference on Image Processing (ICIP), 1409–1413. IEEE.
V-Nova. 2023. FFmpeg with LCEVC. https://docs.v-nova.com/.
Wang, J.; Chan, K. C.; and Loy, C. C. 2023. Exploring CLIP for Assessing the Look and Feel of Images. In AAAI.
Wang, Z.; Bovik, A.; Sheikh, H.; and Simoncelli, E. 2004. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Transactions on Image Processing, 13: 600–612.
Wang, Z.; and Simoncelli, E. P. 2008. Maximum differentiation (MAD) competition: A methodology for comparing computational models of perceptual quantities. Journal of Vision, 8(12): 8–8.
Xue, T.; Chen, B.; Wu, J.; Wei, D.; and Freeman, W. T. 2019. Video Enhancement with Task-Oriented Flow. International Journal of Computer Vision (IJCV), 127(8): 1106–1125.
Yang, S.; Wu, T.; Shi, S.; Lao, S.; Gong, Y.; Cao, M.; Wang, J.; and Yang, Y. 2022. MANIQA: Multi-dimension attention network for no-reference image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1191–1200.
Ying, Z.; Niu, H.; Gupta, P.; Mahajan, D.; Ghadiyaram, D.; and Bovik, A. 2020. From patches to pictures (PaQ-2-PiQ): Mapping the perceptual space of picture quality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3575–3585.
Zhang, K.; Li, D.; Luo, W.; Ren, W.; Stenger, B.; Liu, W.; Li, H.; and Yang, M.-H. 2021. Benchmarking ultra-high-definition image super-resolution. In Proceedings of the IEEE/CVF international conference on computer vision, 14769–14778.
Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, 586–595.
Zhang, W.; Li, D.; Min, X.; Zhai, G.; Guo, G.; Yang, X.; and Ma, K. 2022. Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop. arXiv preprint arXiv:2210.00933.
Zhu, H.; Li, L.; Wu, J.; Dong, W.; and Shi, G. 2020. MetaIQA: Deep meta-learning for no-reference image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14143–14152.
Zvezdakova, A.; Zvezdakov, S.; Kulikov, D.; and Vatolin, D. 2019. Hacking VMAF with video color and contrast distortion. In CEUR Workshop Proceedings, 53–57.
Identification of Necessary Semantic Undertakers in the Causal View for Image-Text Matching

Huatian Zhang, Lei Zhang, Kun Zhang, Zhendong Mao*
University of Science and Technology of China, Hefei, China
{huatianzhang, kkzhang}@mail.ustc.edu.cn, {leizh23, zdmao}@ustc.edu.cn

Abstract

Image-text matching bridges vision and language, which is a fundamental task in multimodal intelligence. Its key challenge lies in how to capture visual-semantic relevance. Fine-grained semantic interactions come from fragment alignments between image regions and text words. However, not all fragments contribute to image-text relevance, and many existing methods are devoted to mining the vital ones to measure the relevance accurately. How well image and text relate depends on the degree of semantic sharing between them. Treating the degree as an effect and fragments as its possible causes, we define those indispensable causes for the generation of the degree as necessary undertakers, i.e., if any of them did not occur, the relevance would no longer be valid. In this paper, we revisit image-text matching in the causal view and uncover inherent causal properties of relevance generation. Then we propose a novel theoretical prototype for estimating the probability-of-necessity of fragments, PNf, for the degree of semantic sharing by means of causal inference, and further design a Necessary Undertaker Identification Framework (NUIF) for image-text matching, which explicitly formalizes the fragment's contribution to image-text relevance by modeling PNf in two ways. Extensive experiments show that our method achieves state-of-the-art results on the Flickr30K and MSCOCO benchmarks.

1 Introduction

Image-text matching aims to search for semantically relevant images given a text, or to retrieve descriptive texts given an image. It is a fundamental task in multimodal intelligence facilitating many applications, such as information database search and e-commerce recommendation.
Despite considerable development in recent years, image-text matching remains challenging in capturing visual-semantic relevance. Extensive research has been done to study semantic relevance from interactions of cross-modal contents. A common framework aligns constituent fragments (image regions or text words) semantically and aggregates the resulting alignments accordingly. (Lee et al. 2018) proposed a cross-attention mechanism to capture all latent alignments by attending to regions and words with each other as context, and it inspired a series of studies. (Zhang et al. 2020b; Wehrmann, Kolling, and Barros 2020; Chen and Luo 2020; Liu et al. 2020) constructed thoughtful aligning rules to capture fine interactions. (Diao et al. 2021) explored self-attention reasoning as an aggregation mechanism to enhance meaningful alignments. (Zhang et al. 2022a) assigned high confidence to image regions consistent with the global semantics in aggregating. (Pan, Wu, and Zhang 2023) proposed to eliminate redundant or irrelevant fragment alignments from the perspective of information coding. In general, not all fragments contribute to image-text relevance, and a large branch of existing methods is devoted to mining the vital ones to measure the relevance accurately.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Illustration of necessary undertakers. (a) Although the man in the blue box spuriously corresponds to the text in fragmental aligning, removing him will not break the image-text match. If the man in the red box is removed, the matched relationship will no longer hold, i.e., the man in the red box is necessary. (b) The basin is the critical conflict that causes the image-text mismatch. Removing the basin will affect the degree of semantic sharing between image and text, but the pan will not, i.e., the basin is necessary to this unmatched relationship.
Normally, how well image and text relate depends on the degree to which they overlap into shared semantics. Fragments that contribute to image-text relevance are those that, if any of them did not occur, the relevance would no longer be valid, i.e., they are necessary to image-text relevance. In other words, these fragments are necessary undertakers of the degree of semantic sharing between image and text. Although the unnecessary ones may also locally correspond to the other modality, identifying necessity can filter out such spurious correspondences from the image-text relevance measure through their low necessity, to reduce the impact on matching. As shown in Fig. 1(a), even if the unnecessary man in blue is locally related to the text, the necessity suppresses his contribution to the overall relevance. Meanwhile, given the redundancy of unnecessary fragments, excluding them from image or text can help to establish alignments more discriminatively, without altering the inherently shared semantics. Particularly for hard negatives, identifying necessity helps to focus on the evidentiary conflicts. As shown in Fig. 1(b), the necessary basin region points out the semantic contradiction. Treating the degree of semantic sharing as an effect and fragments as its possible causes, the necessary undertakers refer to those indispensable causes for the generation of the degree. Thereby, identifying a necessary undertaker is equivalent to determining the probability that the fragment is the degree's cause. For this purpose, we aim to answer: "What would happen to image-text relevance if a fragment did not occur?" From the perspective of causal inference (Pearl 2009; Glymour, Pearl, and Jewell 2016), the question hypothesizes an absence of the fragment, and introduces a comparison of the degree of semantic sharing between actuality and the imaginary scenario where the fragment is absent.
To capture how the degree varies, we express the semantic change of image or text caused by the absence of a fragment through the fragment's semantic dependency, which collects regions or words that have direct semantic causalities with it. Then we structurally model the generation of image-text relevance to specify the functional relationships that connect semantic changes and the relevance, and further uncover two causal properties of relevance generation in matching: the exogeneity of semantic dependency and the matching monotonicity. Based on the insights above, we propose a novel theoretical prototype for estimating the probability-of-necessity of fragments by means of causal inference, and further design a Necessary Undertaker Identification Framework (NUIF) for image-text matching, which quantitatively identifies necessary undertakers of the degree of semantic sharing between image and text in measuring overall relevance. Specifically, we first relate fragments between modalities to obtain vision-language alignments. Then we represent image and text adaptively by highlighting regions or words that are most semantically consistent with the fragments to which they are aligned in the other modality, to make the relevance measurement sensitive to such match-critical fragments so that it can clearly reflect the variation in how image and text overlap when semantics change. Finally, we quantify the probability-of-necessity of fragments counterfactually by the relative variation in semantic overlapping after removing their semantic dependencies, and then aggregate the alignments queried by necessary fragments into image-text relevance for matching. Our framework explicitly formalizes the fragment's contribution to image-text relevance from a causal perspective, which achieves the goal more intuitively.
Our contributions are summarized as follows: (1) We revisit image-text matching in the causal view and, to the best of our knowledge, we are the first to propose a novel theoretical prototype for estimating the probability-of-necessity of fragments to undertake the degree of semantic sharing between image and text. (2) We propose a Necessary Undertaker Identification Framework (NUIF) that adaptively highlights match-critical fragments in representing image and text, and quantifies the probability-of-necessity of fragments counterfactually by the relative variation in how image and text overlap after removing the fragments' semantic dependencies. (3) The experimental results validate the effectiveness of our proposed method, and demonstrate that NUIF achieves state-of-the-art results on the Flickr30K and MSCOCO benchmarks.

2 Related Work

Image-Text Matching. To capture image-text relevance for matching from fine-grained interactions on fragments, extensive works have been proposed. Different from the research line that focuses on representing the holistic image or text to perform coarse cross-modal interaction (Chen et al. 2021; Yan, Yu, and Xie 2021; Li et al. 2022b; Fu et al. 2023), the research line examining fine-grained interactions attracts a lot of attention. One representative work (Lee et al. 2018) proposed the cross-attention mechanism that aims to discover all region-word fragmental alignments, and it inspired a series of works (Wehrmann, Kolling, and Barros 2020; Chen and Luo 2020; Liu et al. 2020; Ji, Chen, and Wang 2021; Zhang et al. 2023a). Some works focused on exploiting more information, such as scene graphs (Wang et al. 2020b), consensus knowledge (Wang et al. 2020a), and external pretrained knowledge (Wei et al. 2020; Qu et al. 2021; Yao et al. 2021), to enhance cross-modal alignments. Another line of methods focused on constructing thoughtful aggregating rules to capture vital fragmental interactions. (Liu et al. 2020) and (Diao et al.
2021) explored the structure aligning between regions and words via graph neural networks. (Zhang et al. 2022a) assigned confidence to regions to emphasize alignments queried by reliable ones in semantic relevance aggregation. (Zhang et al. 2022b) proposed the negative-aware attention to use the misaligned fragments explicitly. (Kim, Kim, and Kwak 2023) coded samples into a set of different embeddings that capture diverse semantics to handle ambiguity. (Pan, Wu, and Zhang 2023) proposed eliminating irrelevant alignments through cross-modal hard aligning based on coding theory.

Causality in Computer Vision. Causal inference (Pearl 2009; Glymour, Pearl, and Jewell 2016) has been widely applied to computer vision to gain insight into the intrinsic causal mechanisms of tasks, including visual recognition (Wang et al. 2020c; Liu et al. 2022; Mao et al. 2022), semantic segmentation (Zhang et al. 2020a), scene graph generation (Tang et al. 2020), video analysis (Li et al. 2021; Liu et al. 2021), domain generalization (Lv et al. 2022; Chen et al. 2023a), and object navigation (Zhang et al. 2023b). In multimodal machine learning, (Yang et al. 2021) alleviated the dataset bias in image captioning based on the backdoor and frontdoor adjustment principles. (Wei et al. 2022) synthesized counterfactual samples to augment training data for image-text matching. (Chen et al. 2023b) proposed a counterfactual sample synthesizing and training strategy to improve the visual-explainable and question-sensitive abilities of visual question answering. (Zang et al. 2023) captured video features causally related to the question to restrain redundant language semantics in question answering.
In this paper, we examine image-text matching in the causal view, to estimate the probability-of-necessity of fragments to undertake the degree of semantic sharing between image and text, in order to identify the necessary undertakers quantitatively in measuring image-text relevance.

3 Image-Text Matching in the Causal View

We start by structurally modeling the generation of image-text relevance from the perspective of causality, to understand the semantic change in image or text when a fragment is absent, and to extract the causal properties inherent in the relevance generation. Then we derive a theoretical prototype for estimating the probability-of-necessity of fragments to the degree of semantic sharing by counterfactual means.

3.1 Structural Modeling

Given an image or text, image-text matching is to rank candidates (texts or images) based on semantic relevance between modalities. Generally, a limited number of visual concepts that have semantic dependencies with a region can host almost losslessly the visual context related to this region in the image. The rest have no direct cause-and-effect on whether the region occurs or not. In the text, the words in a syntactic component or phrase are often linguistically interdependent and work together as a fine-grained semantic unit. Moreover, the specific meanings activated for a region or word are constrained by the visual or linguistic context in which it is involved. These facts inspire us to partition an image or a text, for each constituent fragment, into a semantic dependency, which includes the fragment itself and gathers up regions or words that have direct semantic causalities with the fragment, and a semantic complement. When a fragment is removed, the semantics in the image or text that emerge from its dependency will no longer hold due to causal disruption. As shown in Fig.
2(a), we build a causal graph to formalize the causalities among variables: the image or text query Q, the semantic dependency D and complement C of a fragment F, the heteromodal candidate H, and the image-text relevance R between Q and H. Each vertex corresponds to a variable, and each edge denotes the cause-and-effect relationship between its end-vertices. Concretely, Q → R ← H denotes that the relevance R is determined by how well the semantics of query Q and candidate H overlap. D → Q ← C indicates that, from the perspective of fragment F, the image or text query Q is composed of its dependency D and complement C organically. D gathers up the fragments that have direct semantic causalities with F, that is, it recapitulates the context of F in Q. D ⫫ C means that concepts in C cannot cause the occurrence or not of D in terms of semantic logic. Meanwhile, as shown in Fig. 2(b), F alone cannot be free from the calling-up from some other fragments in Q/F.

Figure 2: The causal graphs of image-text matching. (a) There is no cause-and-effect between D and the remaining complement C from F's point of view. (b) Fragment F may semantically depend on its context fragments in Q/F.

3.2 Necessity Estimation

In causal theory, given an event Y and its possible cause X, a counterfactual interpretation of causation, that effect Y would not have occurred in the absence of X, captures how necessary the cause X is for the production of Y, i.e., the probability-of-necessity. The potential response of Y to a hypothetical action $X = x$ is denoted as $Y_x$, and $y_x$ indicates that Y would be y if X had been x. Then, under binary logic, the probability-of-necessity can be defined counterfactually as $PN := P(y'_{x'} \mid x, y)$, standing for the probability of $y'_{x'}$ given that x and y did occur, where x and y denote respectively the events $X = \text{true}$ and $Y = \text{true}$, otherwise false. Under certain assumptions, the quantity of probability-of-necessity can be estimated from observational data facts.
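To make the definition concrete, the toy model below (entirely synthetic, not part of the paper) builds a binary structural mechanism that is monotonic in its cause, computes PN exactly from the counterfactual definition above, and checks that it coincides with the observational excess-risk expression $(P(y \mid x) - P(y \mid x'))/P(y \mid x)$, which holds under the exogeneity and monotonicity conditions developed next:

```python
import itertools

# Toy monotonic structural model: binary cause X with exogenous noise U1, U2.
# X is exogenous (X = U1); outcome Y = X OR U2 is monotonic in X.
def f_y(x, u2):
    return int(x or u2)

p_u1 = 0.6  # P(X = 1)
p_u2 = 0.3  # P(U2 = 1)

# Enumerate the exogenous background states with their probabilities.
worlds = []
for u1, u2 in itertools.product([0, 1], repeat=2):
    p = (p_u1 if u1 else 1 - p_u1) * (p_u2 if u2 else 1 - p_u2)
    worlds.append((u1, u2, p))

# Counterfactual definition: PN = P(Y_{x'} = 0 | X = 1, Y = 1).
num, den = 0.0, 0.0
for u1, u2, p in worlds:
    x, y = u1, f_y(u1, u2)
    if x == 1 and y == 1:
        den += p
        if f_y(0, u2) == 0:  # Y under the hypothetical action X = 0
            num += p
pn_counterfactual = num / den

# Observational expression (valid under exogeneity + monotonicity).
p_y_given_x1 = sum(p for u1, u2, p in worlds if u1 == 1 and f_y(1, u2)) / p_u1
p_y_given_x0 = sum(p for u1, u2, p in worlds if u1 == 0 and f_y(0, u2)) / (1 - p_u1)
pn_observational = (p_y_given_x1 - p_y_given_x0) / p_y_given_x1

print(pn_counterfactual, pn_observational)  # the two estimates coincide: 0.7
```

In this toy model, P(y | x) = 1 and P(y | x') = 0.3, so both routes give PN = 0.7: the outcome would have been absent in 70% of the worlds where cause and outcome actually occurred.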
To estimate the probability-of-necessity of fragments for the degree of semantic sharing between image and text, we first uncover two inherent causal properties in the generation of image-text relevance as follows.

Exogeneity of Semantic Dependency. In the causal graph, if variable D is fixed to d, the variation in the potential response of R to $D = d$, $R_d$, will be dominated by other variables that can affect R. However, D and R have no common ancestor variable, i.e., no confounding. Hence the variables capable of transmitting variations to R are independent of D, and so is $R_d$. For the semantic dependency of a fragment f, the way R would respond to its occurrence $D = o_d$ or absence $D = o'_d$ is independent of the actual value of D, thus:

$\{R_{o_d}, R_{o'_d}\} \mathrel{\perp\!\!\!\perp} D$.  (1)

In causal terms, the dependency D is exogenous relative to the image-text relevance R.

Matching Monotonicity. For images, peeling away neither salient regions together with their dependent context nor trivial backgrounds will render an image that does not match a description match better. Similarly, in texts, masking out syntactic components will reduce the descriptive semantics and make the text sketchy or even blur its logic; thus the masked text ought not to be more relevant to image candidates. It can be summed up as: the absence of semantic dependency, $D = o'_d$, cannot make a query Q that does not match the heteromodal candidate H turn to match. Furthermore, let M denote the matching degree between Q and H; in the binary case, m stands for $M = \text{true}$ and m' for the opposite. Then:

$m_{o'_d} \wedge m'_{o_d} = \text{false}$,  (2)

that is, the matching degree M is monotonic relative to the occurrence of the semantic dependency D. Putting the insights above together, the probability-of-necessity can be quantified for identifying necessary undertakers in image-text matching.

Theorem 1 (Necessary Undertaker Identification).
In image-text matching, for a fragment f in image or text, with semantic dependency d, its probability-of-necessity of undertaking the degree of semantic sharing between image and text, $PN_f$, can be quantified by $(P(m \mid o_d) - P(m \mid o'_d)) / P(m \mid o_d)$, where m indicates the event that image and text match, $o_d$ denotes that f's semantic dependency occurs, and $o'_d$ its absence.

Proof. See Appendix A for details.

Note that Thm. 1 does not require M to be binary. For the case of continuous M, Thm. 1 can also measure the relative weakening in relevance between image-text pairs with $o_d$ and $o'_d$ as the probability-of-necessity.

Figure 3: Illustration of our proposed NUIF. The framework consists of three modules: vision-language aligning, adaptive representing, and necessary undertakers identifying. The symbols ① and ② in adaptive representing indicate the order of computation. We model the probability-of-necessity, $PN_f$, for the degree of semantic sharing in two ways, PNf-d and PNf-r.

4 The Proposed Implementation

We then elaborate on the implementation of $PN_f$ in Thm. 1. Specifically, as shown in Fig.
3, given image $I = \{v_i \mid i = 1, 2, \dots, M,\ v_i \in \mathbb{R}^D\}$, where $v_i$ denotes a detected salient region, and text $T = \{u_j \mid j = 1, 2, \dots, L,\ u_j \in \mathbb{R}^D\}$, where $u_j$ is a word embedding, we design a Necessary Undertaker Identification Framework as: (1) Vision-Language Aligning: we relate regions and words to obtain visual-semantic alignments; (2) Adaptive Representing: we adaptively represent the image and text in matching by emphasizing regions or words that are semantically bijective with what they are aligned to in the other modality, making the relevance measure sensitive to such match-critical fragments so that it acutely reflects changes in image-text semantic overlap; (3) Necessary Undertakers Identifying: we model $\mathrm{PN}_f$ in two ways — one measures the difference $\Delta P = P(m \mid o_d) - P(m \mid o'_d)$ integrally and models $\mathrm{PN}_f$ as $\Delta P / P(m \mid o_d)$, named $\mathrm{PN}_f$-d; the other measures the ratio $P^r = P(m \mid o'_d) / P(m \mid o_d)$ as a whole and models $\mathrm{PN}_f$ as $1 - P^r$, named $\mathrm{PN}_f$-r.

4.1 Vision-Language Aligning

To capture visual-semantic relevance at the fragment level, we obtain, for each fragment, its semantically related fragments in the other modality through a cross-attention mechanism. For region $v_i$, we measure its attention weight on word $u_j$ by
$$w^v_{ij} = \frac{\exp(\lambda \hat c_{ij})}{\sum_{j'=1}^{L} \exp(\lambda \hat c_{ij'})}, \qquad \hat c_{ij} = [c_{ij}]_+ \Big/ \sqrt{\textstyle\sum_{i=1}^{M} [c_{ij}]_+^2},$$
where $\lambda$ is a constant temperature parameter and $c_{ij}$ is the cosine similarity between region $v_i$ and word $u_j$, and aggregate $v_i$'s relevant words as its linguistic context $u^v_i = \sum_{j=1}^{L} w^v_{ij} u_j$. Then we embody the vision-language alignment as a vector-valued distance between $v_i$ and its attended context $u^v_i$:
$$a^v_i = \ell_2\text{-normalized}\big(\tanh(W_v \lvert v_i - u^v_i \rvert^2)\big), \quad (3)$$
where $W_v \in \mathbb{R}^{P \times D}$ is a learnable projection matrix. We say that the alignment $a^v_i$ is queried by $v_i$. Similarly, the alignment $a^u_j$ queried by word $u_j$ is $a^u_j = \ell_2\text{-normalized}(\tanh(W_u \lvert u_j - v^u_j \rvert^2))$, where $v^u_j$ is the visual context of $u_j$, aggregated from regions as $\sum_{i=1}^{M} w^u_{ji} v_i$.
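As a concrete reading of the aligning step, the sketch below implements the cross-attention and Eq. 3 in NumPy. It is an illustrative re-implementation, not the authors' code; the function name, the shapes, and the small ε added for numerical stability are our assumptions:

```python
import numpy as np

def _softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def region_word_alignment(V, U, W_v, lam=9.0, eps=1e-8):
    """Alignments a_i^v queried by regions (sketch of Eq. 3).
    V: (M, D) region features, U: (L, D) word embeddings, W_v: (P, D)."""
    Vn = V / (np.linalg.norm(V, axis=1, keepdims=True) + eps)
    Un = U / (np.linalg.norm(U, axis=1, keepdims=True) + eps)
    c = Vn @ Un.T                                 # cosine similarities c_ij, (M, L)
    c_hat = np.maximum(c, 0.0)                    # [c_ij]_+
    c_hat = c_hat / np.sqrt((c_hat ** 2).sum(axis=0, keepdims=True) + eps)
    w = _softmax(lam * c_hat, axis=1)             # attention of each region over words
    ctx = w @ U                                   # linguistic contexts u_i^v, (M, D)
    a = np.tanh(np.abs(V - ctx) ** 2 @ W_v.T)     # W_v |v_i - u_i^v|^2, then tanh
    return a / (np.linalg.norm(a, axis=1, keepdims=True) + eps)
```

The word-queried alignments $a^u_j$ follow the same pattern with the roles of regions and words swapped.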
4.2 Adaptive Representing

To make the relevance measurement respond more sharply to variations in how image and text overlap as their semantics change, e.g., from $o_d$ to $o'_d$, we adaptively represent the image and text by highlighting regions or words that are most semantically consistent with, i.e., bijective with, what they are aligned to in the other modality. Such regions or words are match-critical, since they are exactly the grounding bases of image-text semantic overlap. In detail, to represent the image I when matching I and text T, for region $v_i$ we first obtain its most semantically aligned word as $\sum_{j=1}^{L} w^{v\text{-}u}_{ij} u_j$, with
$$w^{v\text{-}u}_i = [w^{v\text{-}u}_{i1}, w^{v\text{-}u}_{i2}, \dots, w^{v\text{-}u}_{iL}] = \mathrm{softmax}\big(\tau \cdot \log(w^v_i)\big),$$
where $w^v_i$ is the attention distribution of $v_i$ over text words from Sec. 4.1 and $\tau$ is a temperature parameter. Then we measure the likelihood that $v_i$'s linguistic counterpart exists in the text T, indicating the degree to which $v_i$ is match-critical, by:
$$s^v_i = v_i \cdot \sum_{j=1}^{L} w^{v\text{-}u}_{ij} v^u_j, \quad (4)$$
which is the similarity between $v_i$ and $\sum_{j=1}^{L} w^{v\text{-}u}_{ij} v^u_j$, the visual context of $v_i$'s semantically aligned word $\sum_{j=1}^{L} w^{v\text{-}u}_{ij} u_j$. That is, it measures the similarity between $v_i$ and the regions aligned by that word: the more similar the two are, the more likely there is a linguistic counterpart of $v_i$ in the text T, i.e., the more match-critical $v_i$ is. For the image I, we obtain $s^v = [s^v_1, s^v_2, \dots, s^v_M]$ and represent the image by:
$$i = \sum_{i=1}^{M} \mathrm{softmax}(s^v)_i\, v_i, \quad (5)$$
which adaptively highlights match-critical regions within I through the weights $s^v$. Likewise, the text T is represented as $t = \sum_{j=1}^{L} \mathrm{softmax}(s^u)_j\, u_j$, where $s^u_j$ is analogous to Eq. 4.

4.3 Necessary Undertakers Identifying

We first gather the semantic dependency of individual fragments as follows.
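Before the dependency gathering below, the adaptive representation of Sec. 4.2 (Eqs. 4–5) can be sketched in NumPy; the function names, shapes, and the ε stabilizing the logarithm are our assumptions, not the authors' implementation:

```python
import numpy as np

def _softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_represent(V, word_ctx, w_v, tau=6.0, eps=1e-12):
    """Image embedding i of Eq. 5, weighting regions by match-criticality.
    V: (M, D) regions; word_ctx: (L, D) visual contexts v_j^u of the words;
    w_v: (M, L) region-over-word attention from Sec. 4.1 (rows sum to 1)."""
    w_sharp = _softmax(tau * np.log(w_v + eps), axis=1)   # sharpened w^{v-u}
    s = np.einsum('md,md->m', V, w_sharp @ word_ctx)      # Eq. 4: s_i^v per region
    return _softmax(s) @ V                                # Eq. 5: weighted region sum
```

The text embedding t is obtained symmetrically by weighting words with $s^u$.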
In image I, regions that are semantically dependent on region $v_i$ tend to interact with it, which usually manifests as spatial proximity. For $v_i$, we therefore regard the regions relatively close to $v_i$, together with $v_i$ itself, as its semantic dependency $d_i$. In text T, the phrase to which $u_j$ belongs is naturally its dependency $d_j$, while a word not belonging to any phrase is its own dependency. We are now ready to identify the necessary undertakers: we model $\mathrm{PN}_f = (P(m \mid o_d) - P(m \mid o'_d))/P(m \mid o_d)$ in the following two ways.

PNf-d. The difference $\Delta P = P(m \mid o_d) - P(m \mid o'_d)$ expresses the variation in how well image and text match caused by removing f's semantic dependency, and can be measured integrally as the relevance between the content emerging from the dependency and the image or text candidate. For region $v_i$ with semantic dependency $d_i$, we represent the dependency as $d_i = \sum_{k \in \mathrm{idx}_i} \mathrm{softmax}(s^v_{\mathrm{idx}_i})_k\, v_k$, where $\mathrm{idx}_i$ denotes the index set of $d_i$. Then we measure $\Delta P$ w.r.t. $v_i$, $\Delta P^v_i$, as:
$$\Delta P^v_i = \Big(1 + \frac{1}{N} \sum_{n=1}^{N} \frac{(d_i)_n \cdot t_n}{\lVert (d_i)_n \rVert_2 \lVert t_n \rVert_2}\Big) \Big/ 2, \quad (6)$$
a block-wise cosine similarity with N blocks, which reduces noise in the similarity measure by refining over dimensions. We then measure $P(m \mid o_{d_i})$, the probability that image and text match, by the similarity between i and t:
$$P(m \mid o_{d_i}) = \Big(1 + \frac{1}{N} \sum_{n=1}^{N} \frac{i_n \cdot t_n}{\lVert i_n \rVert_2 \lVert t_n \rVert_2}\Big) \Big/ 2. \quad (7)$$
Further, we model the probability-of-necessity of region $v_i$ as $\mathrm{PN}^v_i\text{-d} = \Delta P^v_i / P(m \mid o_{d_i})$. Similarly, we measure $\mathrm{PN}^u_j\text{-d}$ of word $u_j$ with semantic dependency $d_j$ as $\Delta P^u_j / P(m \mid o_{d_j})$, where $\Delta P^u_j = \max_{k \in \mathrm{idx}_j} \big(1 + \frac{1}{N} \sum_{n=1}^{N} \frac{(u_k)_n \cdot t_n}{\lVert (u_k)_n \rVert_2 \lVert t_n \rVert_2}\big)\big/2$, the largest relevance variation that can be caused by removing a word in $d_j$.

PNf-r. The ratio $P(m \mid o'_d)/P(m \mid o_d)$ is the retention rate of the probability that image and text match when the semantic dependency d changes from occurrence $o_d$ to absence $o'_d$. It reflects how alike the image-text shared semantics are under $o_d$ and $o'_d$.
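The PNf-d branch above (Eqs. 6–7) reduces to block-wise cosine similarities mapped into [0, 1]. A minimal sketch under assumed inputs — the dependency, image, and text representations as flat vectors whose length is divisible by N (names and the ε are ours):

```python
import numpy as np

def blockwise_cosine(x, y, n_blocks=16, eps=1e-8):
    """Block-wise cosine similarity mapped to [0, 1] (the form of Eqs. 6-7)."""
    sims = [xb @ yb / (np.linalg.norm(xb) * np.linalg.norm(yb) + eps)
            for xb, yb in zip(np.split(x, n_blocks), np.split(y, n_blocks))]
    return (1.0 + float(np.mean(sims))) / 2.0

def pn_d(dep, img, txt, n_blocks=16, eps=1e-8):
    """PN^v_i-d = ΔP^v_i / P(m|o_{d_i}): ΔP from the dependency representation
    vs. the text (Eq. 6), P(m|o_d) from the image vs. the text (Eq. 7)."""
    delta_p = blockwise_cosine(dep, txt, n_blocks, eps)
    p_m_od = blockwise_cosine(img, txt, n_blocks, eps)
    return delta_p / (p_m_od + eps)
```

When the dependency carries most of what the text describes, ΔP approaches P(m|o_d) and the score approaches 1.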
For region $v_i$, we represent its semantic complement in image I as $c_i = \sum_{k \in \mathrm{idx}^c_i} \mathrm{softmax}(s^v_{\mathrm{idx}^c_i})_k\, v_k$, where $\mathrm{idx}^c_i$ denotes the complement of $\mathrm{idx}_i$ in image I. We then formulate $P(m \mid o'_{d_i})$ through the vision-language aligning of Eq. 3 between $c_i$ and t:
$$P(m \mid o'_{d_i}) = a^v_{I/d_i} = \ell_2\text{-normalized}\big(\tanh(W_v \lvert c_i - t \rvert^2)\big), \quad (8)$$
and $P(m \mid o_{d_i})$ through the aligning between i and t:
$$P(m \mid o_{d_i}) = a^v_I = \ell_2\text{-normalized}\big(\tanh(W_v \lvert i - t \rvert^2)\big). \quad (9)$$
Further, we measure the ratio $P(m \mid o'_d)/P(m \mid o_d)$ by:
$$P^r_i = (1 + a^v_{I/d_i} \cdot a^v_I)/2, \quad (10)$$
i.e., the projection of $a^v_{I/d_i}$ onto $a^v_I$. We then model the probability-of-necessity of $v_i$ as $\mathrm{PN}^v_i\text{-r} = 1 - P^r_i$. Likewise, we measure $\mathrm{PN}^u_j\text{-r}$ of word $u_j$ with semantic dependency $d_j$ as $1 - P^r_j$, where $P^r_j = (1 + a^u_{T/d_j} \cdot a^u_T)/2$, analogous to Eq. 10.

We then aggregate the vision-language alignments queried by the necessary regions through $\mathrm{PN}^v$-d or $\mathrm{PN}^v$-r as $a_I = \sum_{i=1}^{M} \mathrm{softmax}(\mathrm{PN}^v)_i\, a^v_i$, and the alignments queried by the necessary words through $\mathrm{PN}^u$-d or $\mathrm{PN}^u$-r as $a_T = \sum_{j=1}^{L} \mathrm{softmax}(\mathrm{PN}^u)_j\, a^u_j$, and incorporate them into the image-text relevance r by:
$$r(I, T) = \tanh\big(w_r([a_I : a_T])\big), \quad (11)$$
where $w_r \in \mathbb{R}^{1 \times 2P}$ is a learnable vector and $[:]$ denotes concatenation.

4.4 Training

Feature Encoder. For a fair comparison with previous works, we use the ROI features of a pre-trained object detector as detected regions and transform them to D-dimensional $v_i$ via linear projection. For texts, we employ two types of extractors: Bi-GRU and pre-trained BERT (Kenton and Toutanova 2019). When using Bi-GRU, the embedding of the j-th word, $u_j$, is averaged from its forward and backward hidden states. When using BERT, we linearly map its output hidden states to D-dimensional embeddings.

Objective Function. Ranking objectives are widely adopted in image-text matching to pull matched image-text pairs close to each other and push unmatched ones away.
We use the bi-directional triplet loss, focusing on the hardest in-batch negatives for efficiency:
$$\mathcal{L}(I, T) = [\alpha - r(I, T) + r(I, T^-_h)]_+ + [\alpha - r(I, T) + r(I^-_h, T)]_+, \quad (12)$$
where $\alpha$ is a margin constraint, $[x]_+ = \max(x, 0)$, and $I^-_h = \operatorname{argmax}_{I^- \neq I} r(I^-, T)$ and $T^-_h = \operatorname{argmax}_{T^- \neq T} r(I, T^-)$ are the hardest negatives for a given positive matched pair (I, T).

5 Experiments

5.1 Datasets and Evaluation Metrics

We evaluate the proposed framework on the Flickr30K (Young et al. 2014) and MSCOCO (Lin et al. 2014) datasets.

Table 1 layout — Methods | Flickr30K: IMG→TXT (R@1 R@5 R@10), TXT→IMG (R@1 R@5 R@10), rSum | MSCOCO 1K: IMG→TXT (R@1 R@5 R@10), TXT→IMG (R@1 R@5 R@10), rSum.
BUTD + Bi-GRU:
GSMN (Liu et al. 2020) 76.4 94.3 97.3 57.4 82.3 89.0 496.8 | 78.4 96.4 98.6 63.3 90.1 95.7 522.5
GPO* (Chen et al. 2021) 76.5 94.2 97.7 56.4 83.4 89.9 498.1 | 78.5 96.0 98.7 61.7 90.3 95.6 520.8
SGRAF (Diao et al. 2021) 77.8 94.1 97.4 58.5 83.0 88.8 499.6 | 79.6 96.2 98.5 63.2 90.7 96.1 524.3
CMCAN (Zhang et al. 2022a) 79.5 95.6 97.6 60.9 84.3 89.9 507.8 | 81.2 96.8 98.7 65.4 91.0 96.2 529.3
NAAF† (Zhang et al. 2022b) 81.9 96.1 98.3 61.0 85.3 90.6 513.2 | 80.5 96.5 98.8 64.1 90.7 96.5 527.2
CHAN†* (Pan et al. 2023) 79.7 94.5 97.3 60.2 85.3 90.7 507.8 | 79.7 96.7 98.7 63.8 90.4 95.8 525.0
Set-Based† (Kim et al. 2023) 80.9 94.7 97.6 59.4 85.6 91.1 509.3 | 80.6 96.3 98.8 64.7 91.4 96.2 528.0
NUIF-d* (ours) 81.8 95.7 98.0 59.0 83.9 89.9 508.3 | 79.9 96.7 99.0 63.9 90.4 95.8 525.7
NUIF (ours) 84.3 96.3 98.0 60.7 85.0 90.7 515.1 | 81.7 97.0 99.0 65.1 91.4 96.3 530.6
BUTD + BERT:
GPO* (Chen et al. 2021) 81.7 95.4 97.6 61.4 85.9 91.5 513.5 | 79.7 96.4 98.9 64.8 91.4 96.3 527.5
VSRN++ (Li et al. 2022a) 79.2 94.6 97.5 60.6 85.6 91.4 508.9 | 77.9 96.0 98.5 64.1 91.0 96.1 523.6
MV-VSE* (Li et al. 2022b) 82.1 95.8 97.9 63.1 86.7 92.3 517.5 | 80.4 96.6 99.0 64.9 91.2 96.0 528.1
CHAN* (Pan et al. 2023) 80.6 96.1 97.8 63.9 87.5 92.6 518.5 | 81.4 96.9 98.9 66.5 92.1 96.7 532.6
HREM (Fu et al.
2023) 84.0 96.1 98.6 64.4 88.0 93.1 524.2 | 82.9 96.9 99.0 67.1 92.0 96.6 534.6
NUIF-d* (ours) 83.9 96.5 98.2 67.9 89.2 93.6 529.4 | 83.3 97.3 98.9 69.2 92.7 96.9 538.2
NUIF (ours) 85.6 97.2 98.6 69.8 90.4 94.4 535.9 | 84.7 97.5 99.1 70.6 93.1 97.2 542.3

Table 1: Comparisons with state-of-the-art methods on the Flickr30K and MSCOCO 1K test sets. †: the model has GloVe attached for text embedding; *: only a single model is reported. Best results are in bold.

Flickr30K contains 31,000 images, each paired with 5 texts. Following the dataset splits in (Lee et al. 2018), we use 29,000 images for training, 1,000 for validation, and 1,000 for testing. MSCOCO contains 123,287 images, each paired with 5 texts. We use 113,287 images for training, 5,000 for validation, and 5,000 for testing; results on MSCOCO are reported both by averaging over 5 folds of 1,000 test images and by testing on the entire 5,000 test images. As is common in information retrieval, we measure performance by recall R@K and rSum; higher R@K indicates better performance.

5.2 Implementation Details

We utilize the BUTD features (Anderson et al. 2018) extracted by Faster R-CNN (Ren et al. 2015) with a pre-trained ResNet-101 (He et al. 2016) as ROI inputs, with M = 36 ROIs per image. The dimensions are D = 1024 and P = 256, the temperature parameters λ = 9.0 and τ = 6.0, the number of blocks N = 16, and the margin α = 0.2. For semantic dependency gathering, we compute the polar coordinates (ρ, θ) of the other regions relative to the target region and select the 2 regions with the smallest ρ in each of the four sectors delimited by θ = π/4, 3π/4, −3π/4, −π/4; we extract noun phrases from texts with the noun-chunking function of the NLP tool spaCy.
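The training objective of Eq. 12 with in-batch hardest negatives can be sketched over a batch score matrix; this is a NumPy illustration, not the released code:

```python
import numpy as np

def hardest_triplet_loss(scores, alpha=0.2):
    """Bi-directional triplet loss of Eq. 12 over a batch.
    scores: (B, B) relevance matrix r(I_i, T_j); positives on the diagonal."""
    b = scores.shape[0]
    pos = np.diag(scores)
    neg = np.where(np.eye(b, dtype=bool), -np.inf, scores)  # mask out positives
    hard_txt = neg.max(axis=1)   # hardest negative caption for each image
    hard_img = neg.max(axis=0)   # hardest negative image for each caption
    loss = np.maximum(0.0, alpha - pos + hard_txt) \
         + np.maximum(0.0, alpha - pos + hard_img)
    return float(loss.mean())
```

With well-separated scores the hinge terms vanish, so the loss only penalizes pairs whose hardest negative comes within the margin α of the positive.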
When using Bi-GRU, dropout is applied to both region and word features after projecting them to 1024 dimensions, with a dropout rate of 0.4; we employ the Adam optimizer with an initial learning rate of 0.0002, decayed by a factor of 10 after 40 epochs on Flickr30K and after 30 epochs on MSCOCO. When using BERT, the Adam optimizer starts from an initial learning rate of 0.0005, decayed by a factor of 10 after 20 epochs. Source code will be released1.

1https://github.com/htzhang-code/NUIF

Table 2 layout — Methods | IMG→TXT (R@1 R@5 R@10) | TXT→IMG (R@1 R@5 R@10) | rSum.
BUTD + Bi-GRU:
SGRAF 57.8 – 91.6 | 41.9 – 81.3 | –
CMCAN 61.5 – 92.9 | 44.0 – 82.6 | –
NAAF† 58.9 85.2 92.0 | 42.5 70.9 81.4 | 430.9
CHAN†* 60.2 85.9 92.4 | 41.7 71.5 81.7 | 433.4
Set-Based† 60.4 86.2 92.4 | 42.6 73.1 83.1 | 437.8
NUIF-d* (ours) 59.3 85.5 92.0 | 41.9 71.3 81.8 | 431.8
NUIF (ours) 61.8 86.6 93.1 | 43.3 72.4 82.6 | 439.8
BUTD + BERT:
VSRN++ 54.7 82.9 90.9 | 42.0 72.2 82.7 | 425.4
MV-VSE* 59.1 86.3 92.5 | 42.5 72.8 83.1 | 436.3
CHAN* 59.8 87.2 93.3 | 44.9 74.5 84.2 | 443.9
HREM 64.0 88.5 93.7 | 45.4 75.1 84.3 | 450.9
NUIF-d* (ours) 65.2 88.8 94.2 | 48.3 76.8 85.7 | 459.1
NUIF (ours) 67.8 89.8 94.8 | 49.9 77.9 86.7 | 466.9

Table 2: Comparisons with state-of-the-art methods on the MSCOCO 5K test set. Best results are in bold.

5.3 Comparisons with State-of-the-art Methods

We compare our proposed NUIF with recent state-of-the-art methods on the Flickr30K and MSCOCO benchmarks. Experimental results are cited directly from the respective papers. When using the BUTD + Bi-GRU encoder, for a fair comparison with more previous methods, we report performance without the pre-trained GloVe representation attached to the text embedding. Quantitative results on the Flickr30K and COCO 1K test sets are shown in Tab. 1. NUIF clearly outperforms state-of-the-art methods by large margins on most metrics and achieves consistent superiority across encoder settings. Comparisons on the COCO 5K test set are shown in Tab.
2, and our method also performs best on almost all metrics. The remarkable improvements of our proposed NUIF demonstrate its effectiveness and robustness.

Table 3 layout — Methods | IMG→TXT (R@1 R@5 R@10) | TXT→IMG (R@1 R@5 R@10) | rSum.
BUTD + Bi-GRU:
w/o {dropout, PNf} 73.7 92.0 96.0 | 54.9 80.2 86.7 | 483.4
w/o PNf 79.4 94.9 97.8 | 58.6 83.4 90.0 | 504.1
NUIF-d (w/ PNf-d) 81.8 95.7 98.0 | 59.0 83.9 89.9 | 508.3
NUIF-r (w/ PNf-r) 82.5 94.9 97.9 | 59.6 83.8 90.1 | 508.8
NUIF-full 84.3 96.3 98.0 | 60.7 85.0 90.7 | 515.1
BUTD + BERT:
w/o PNf 82.0 95.9 98.5 | 66.8 88.6 93.4 | 525.2
NUIF-d (w/ PNf-d) 83.9 96.5 98.2 | 67.9 89.2 93.6 | 529.4
NUIF-r (w/ PNf-r) 83.7 96.5 98.4 | 67.4 89.1 93.3 | 528.4
NUIF-full 85.6 97.2 98.6 | 69.8 90.4 94.4 | 535.9

Table 3: Ablation studies of PNf's modeling on Flickr30K.

Table 4 layout — Methods | IMG→TXT (R@1 R@5 R@10) | TXT→IMG (R@1 R@5 R@10) | rSum.
COCO 1K, BUTD + Bi-GRU:
w/o {dropout, PNf} 76.0 94.9 97.9 | 61.0 88.0 94.4 | 512.2
w/o PNf 78.3 96.2 98.8 | 63.2 90.2 95.6 | 522.3
NUIF-d (w/ PNf-d) 79.9 96.7 99.0 | 63.9 90.4 95.8 | 525.7
NUIF-r (w/ PNf-r) 80.0 96.4 98.8 | 63.7 90.5 95.7 | 525.2
NUIF-full 81.7 97.0 99.0 | 65.1 91.4 96.3 | 530.6
COCO 5K, BUTD + Bi-GRU:
w/o {dropout, PNf} 54.2 82.5 89.2 | 39.5 68.0 78.5 | 412.0
w/o PNf 58.0 84.7 91.8 | 41.4 70.8 81.2 | 427.9
NUIF-d (w/ PNf-d) 59.3 85.5 92.0 | 41.9 71.3 81.8 | 431.8
NUIF-r (w/ PNf-r) 59.9 85.3 92.1 | 41.8 71.3 81.1 | 431.5
NUIF-full 61.8 86.6 93.1 | 43.3 72.4 82.6 | 439.8
COCO 1K, BUTD + BERT:
w/o PNf 82.9 97.0 98.8 | 68.7 92.6 96.8 | 536.8
NUIF-d (w/ PNf-d) 83.3 97.3 98.9 | 69.2 92.7 96.9 | 538.2
NUIF-r (w/ PNf-r) 84.2 97.2 99.1 | 69.3 92.6 96.9 | 539.3
NUIF-full 84.7 97.5 99.1 | 70.6 93.1 97.2 | 542.3
COCO 5K, BUTD + BERT:
w/o PNf 64.8 88.2 94.2 | 47.8 76.5 85.4 | 456.9
NUIF-d (w/ PNf-d) 65.2 88.8 94.2 | 48.3 76.8 85.7 | 459.1
NUIF-r (w/ PNf-r) 65.6 89.1 94.3 | 48.4 76.6 85.8 | 459.8
NUIF-full 67.8 89.8 94.8 | 49.9 77.9 86.7 | 466.9

Table 4: Ablation studies of PNf's modeling on MSCOCO.
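The R@K and rSum numbers reported in Tabs. 1–4 follow the standard retrieval protocol. A simplified sketch, assuming a one-to-one image-text correspondence on the diagonal of the score matrix (the real benchmarks pair each image with 5 texts):

```python
import numpy as np

def recall_at_k(sim, k):
    """R@K (%): fraction of queries whose ground-truth item (assumed to sit
    on the diagonal of sim) is ranked within the top K. sim: (N, N)."""
    better = (sim > np.diag(sim)[:, None]).sum(axis=1)  # items scored above the match
    return 100.0 * float((better < k).mean())

def rsum(sim):
    """Sum of R@{1,5,10} for IMG->TXT (rows as queries) and TXT->IMG (columns)."""
    return sum(recall_at_k(sim, k) + recall_at_k(sim.T, k) for k in (1, 5, 10))
```

With perfect ranking, each direction contributes 100 at every K, giving the maximal rSum of 600.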
5.4 Ablation Study

To demonstrate that necessary-undertaker identifying does play an active role in accurately measuring image-text relevance, we conduct ablation studies on Flickr30K and MSCOCO for the probability-of-necessity modeling, as enumerated in Tabs. 3 and 4. The baseline w/o PNf aggregates alignments without identifying necessity, i.e., by averaging alignments; NUIF-full is an ensemble model of NUIF-d and NUIF-r. In the BUTD + Bi-GRU setting, the dropout operation (see Sec. 5.2) improves the baseline. It can be seen that our necessary-undertaker identifying improves the matching accuracy significantly. It is worth noting that the performance gains on MSCOCO are less significant than on Flickr30K, due to MSCOCO's weaker causality (its much smaller proportion of not-so-short texts) rather than its scale (see Fig. 4). Short texts have weak causality since they are generally rough: e.g., many regions in a sunset image are aligned with "beautiful sunset", so removing some regions (e.g., a cloud) will not affect the degree of semantic sharing between the sunset image and the text, resulting in ambiguous (weak) causality. Not-so-short texts, in contrast, have rich causality thanks to their fine-grained details: for the text "beautiful sunset with white clouds over a river", the necessity of certain regions (e.g., the cloud) is enhanced.

Figure 4: The proportion of long texts in MSCOCO is relatively smaller than that in Flickr30K.

Figure 5: Visualization of the probability-of-necessity. We highlight regions with large $\mathrm{PN}^v_i$-d, and annotate the min-max scaled values of $\mathrm{PN}^u_j$-d for critical words.

5.5 Visualization

To further verify our method's ability to specify the necessary undertakers of the degree of semantic sharing between image and text, we visualize the learned PNf-d of regions and words in Fig. 5.
In row 2, column 2, due to the spatial-proximity strategy used to gather region dependencies, the girl closer to the fish region, which semantically corresponds to the match-critical word "fish", is wrongly considered more necessary than the other girl. On the whole, however, our method effectively captures the regions and words necessary for judging semantic consistency or contradiction in matching.

6 Conclusion

In this paper, we revisit image-text matching from a causal view and propose a novel theoretical prototype for estimating the probability-of-necessity of fragments for the degree of semantic sharing, by means of counterfactual inference. Further, we implement a Necessary Undertaker Identification Framework (NUIF) for image-text matching that formalizes the probability-of-necessity of fragments in two ways, intuitively specifying the contribution of fragments to image-text relevance. Our method attributes the degree of image-text semantic sharing to constituent semantics. Extensive experiments demonstrate the superiority of our proposed NUIF. Future work includes designing more effective semantic dependency gathering, to reasonably infer fragments' necessity in specific scenarios.

A Necessary Undertaker Identification

Proof. The semantic dependency d of a fragment f gathers those fragments in the image or text that have direct semantic causalities with f, including f itself. Removing f breaks these causalities and thereby distorts the semantics emerging from the dependency d, which is equivalent to altering d's original semantics in the image or text. That is, as $o_f$ changes to $o'_f$, $o_d$ changes to $o'_d$ semantically. Combining the definition of necessary cause (see Sec.
3.2) in causal inference (Pearl 2009; Glymour, Pearl, and Jewell 2016), we express the probability that fragment f is necessary to the degree of semantic sharing between image and text, the probability-of-necessity, as:
$$\mathrm{PN}_f = P(m'_{o'_d} \mid m, o_d), \quad (13)$$
which is the probability that, given that d did occur and M = true in reality, the potential response of the matching degree M to d's hypothetical erasure is M = false. Since the matching degree M is determined by the image-text relevance R monotonically and uniquely, and D is exogenous relative to R, i.e., $\{R_{o_d}, R_{o'_d}\} \perp\!\!\!\perp D$, we have:
$$\{M_{o_d}, M_{o'_d}\} \perp\!\!\!\perp D, \quad (14)$$
which implies:
$$P(m_{o_d}) = P(m_{o_d} \mid o_d) = P(m \mid o_d), \quad (15)$$
that is:
$$o_d \wedge m = o_d \wedge m_{o_d}. \quad (16)$$
Then, for Eq. 13, we have:
$$P(m'_{o'_d} \mid m, o_d) = \frac{P(m'_{o'_d}, m, o_d)}{P(m, o_d)} = \frac{P(m'_{o'_d}, m_{o_d}, o_d)}{P(m, o_d)} = \frac{P(m'_{o'_d}, m_{o_d})\,P(o_d)}{P(m, o_d)} = \frac{P(m'_{o'_d}, m_{o_d})}{P(m \mid o_d)}. \quad (17)$$
Obviously, $m_{o'_d} \vee m'_{o'_d} = \text{true}$, so:
$$m_{o_d} = m_{o_d} \wedge (m_{o'_d} \vee m'_{o'_d}) = (m_{o_d} \wedge m_{o'_d}) \vee (m_{o_d} \wedge m'_{o'_d}), \quad (18)$$
and $m_{o_d} \vee m'_{o_d} = \text{true}$, so:
$$m_{o'_d} = (m_{o'_d} \wedge m_{o_d}) \vee (m_{o'_d} \wedge m'_{o_d}); \quad (19)$$
considering the matching monotonicity, $m_{o'_d} \wedge m'_{o_d} = \text{false}$, hence:
$$m_{o'_d} = m_{o'_d} \wedge m_{o_d}. \quad (20)$$
Substituting Eq. 20 into Eq. 18, it holds that:
$$m_{o_d} = m_{o'_d} \vee (m'_{o'_d} \wedge m_{o_d}). \quad (21)$$
Since $m_{o'_d}$ and $m'_{o'_d} \wedge m_{o_d}$ are disjoint (as $m_{o'_d}$ and $m'_{o'_d}$ are), we obtain:
$$P(m_{o_d}) = P(m_{o'_d}) + P(m_{o_d}, m'_{o'_d}); \quad (22)$$
then, taking the exogeneity of d, this yields:
$$P(m \mid o_d) = P(m \mid o'_d) + P(m_{o_d}, m'_{o'_d}). \quad (23)$$
Combining Eq. 17 and Eq. 23, we have:
$$P(m'_{o'_d} \mid m, o_d) = \big(P(m \mid o_d) - P(m \mid o'_d)\big)\big/P(m \mid o_d). \quad (24)$$
Thus, we obtain:
$$\mathrm{PN}_f = \big(P(m \mid o_d) - P(m \mid o'_d)\big)\big/P(m \mid o_d), \quad (25)$$
which concludes the proof.

It is worth noting that, for a matched image-text pair, m (M = true) means they are semantically related and m′ (M = false) means the matched relationship is no longer valid. For an unmatched image and text, m indicates that the semantic relevance they achieve is being maintained, and m′ indicates a weakening of that relevance.
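As a numerical reading of Eq. 25: with hypothetical probabilities $P(m \mid o_d) = 0.8$ and $P(m \mid o'_d) = 0.2$, the dependency accounts for three quarters of the matching probability. A minimal sketch (the clipping to [0, 1] reflects that matching monotonicity rules out genuinely negative necessity):

```python
def probability_of_necessity(p_m_given_od, p_m_given_not_od):
    """PN_f = (P(m|o_d) - P(m|o'_d)) / P(m|o_d), per Eq. 25, clipped to [0, 1]."""
    if p_m_given_od <= 0.0:
        return 0.0  # the pair never matches with the dependency present
    pn = (p_m_given_od - p_m_given_not_od) / p_m_given_od
    return max(0.0, min(1.0, pn))
```

A dependency whose removal leaves the matching probability unchanged thus receives PN = 0, i.e., it is not a necessary undertaker.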
Acknowledgements

This work is supported by the National Science Fund for Excellent Young Scholars under Grant 62222212.

References

Anderson, P.; He, X.; Buehler, C.; Teney, D.; Johnson, M.; Gould, S.; and Zhang, L. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6077–6086.
Chen, J.; Gao, Z.; Wu, X.; and Luo, J. 2023a. Meta-causal Learning for Single Domain Generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7683–7692.
Chen, J.; Hu, H.; Wu, H.; Jiang, Y.; and Wang, C. 2021. Learning the Best Pooling Strategy for Visual Semantic Embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 15789–15798.
Chen, L.; Zheng, Y.; Niu, Y.; Zhang, H.; and Xiao, J. 2023b. Counterfactual samples synthesizing and training for robust visual question answering. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Chen, T.; and Luo, J. 2020. Expressing objects just like words: Recurrent visual embedding for image-text matching. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 10583–10590.
Diao, H.; Zhang, Y.; Ma, L.; and Lu, H. 2021. Similarity Reasoning and Filtration for Image-Text Matching. In Proceedings of the AAAI Conference on Artificial Intelligence.
Fu, Z.; Mao, Z.; Song, Y.; and Zhang, Y. 2023. Learning Semantic Relationship Among Instances for Image-Text Matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15159–15168.
Glymour, M.; Pearl, J.; and Jewell, N. P. 2016. Causal inference in statistics: A primer. John Wiley & Sons.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition.
In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
Ji, Z.; Chen, K.; and Wang, H. 2021. Step-Wise Hierarchical Alignment Network for Image-Text Matching. In IJCAI.
Kenton, J. D. M.-W. C.; and Toutanova, L. K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT, 4171–4186.
Kim, D.; Kim, N.; and Kwak, S. 2023. Improving Cross-Modal Retrieval With Set of Diverse Embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 23422–23431.
Lee, K.-H.; Chen, X.; Hua, G.; Hu, H.; and He, X. 2018. Stacked cross attention for image-text matching. In Proceedings of the European Conference on Computer Vision (ECCV), 201–216.
Li, K.; Zhang, Y.; Li, K.; Li, Y.; and Fu, Y. 2022a. Image-text embedding learning via visual and textual semantic reasoning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1): 641–656.
Li, Y.; Yang, X.; Shang, X.; and Chua, T.-S. 2021. Interventional video relation detection. In Proceedings of the 29th ACM International Conference on Multimedia, 4091–4099.
Li, Z.; Guo, C.; Feng, Z.; Hwang, J.-N.; and Xue, X. 2022b. Multi-View Visual Semantic Embedding. In IJCAI.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In European conference on computer vision, 740–755. Springer.
Liu, C.; Mao, Z.; Zhang, T.; Xie, H.; Wang, B.; and Zhang, Y. 2020. Graph structured network for image-text matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10921–10930.
Liu, R.; Liu, H.; Li, G.; Hou, H.; Yu, T.; and Yang, T. 2022. Contextual Debiasing for Visual Recognition With Causal Mechanisms. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12755–12765.
Liu, Y.; Chen, J.; Chen, Z.; Deng, B.; Huang, J.; and Zhang, H. 2021.
The blessings of unlabeled background in untrimmed videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6176–6185.
Lv, F.; Liang, J.; Li, S.; Zang, B.; Liu, C. H.; Wang, Z.; and Liu, D. 2022. Causality Inspired Representation Learning for Domain Generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8046–8056.
Mao, C.; Xia, K.; Wang, J.; Wang, H.; Yang, J.; Bareinboim, E.; and Vondrick, C. 2022. Causal Transportability for Visual Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7521–7531.
Pan, Z.; Wu, F.; and Zhang, B. 2023. Fine-Grained Image-Text Matching by Cross-Modal Hard Aligning Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19275–19284.
Pearl, J. 2009. Causality. Cambridge university press.
Qu, L.; Liu, M.; Wu, J.; Gao, Z.; and Nie, L. 2021. Dynamic modality interaction modeling for image-text retrieval. In ACM SIGIR, 1104–1113.
Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28: 91–99.
Tang, K.; Niu, Y.; Huang, J.; Shi, J.; and Zhang, H. 2020. Unbiased scene graph generation from biased training. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3716–3725.
Wang, H.; Zhang, Y.; Ji, Z.; Pang, Y.; and Ma, L. 2020a. Consensus-aware visual-semantic embedding for image-text matching. In ECCV, 18–34. Springer.
Wang, S.; Wang, R.; Yao, Z.; Shan, S.; and Chen, X. 2020b. Cross-modal scene graph matching for relationship-aware image-text retrieval. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 1508–1517.
Wang, T.; Huang, J.; Zhang, H.; and Sun, Q. 2020c. Visual commonsense r-cnn.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10760–10770.
Wehrmann, J.; Kolling, C.; and Barros, R. C. 2020. Adaptive cross-modal embeddings for image-text alignment. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 12313–12320.
Wei, H.; Wang, S.; Han, X.; Xue, Z.; Ma, B.; Wei, X.; and Wei, X. 2022. Synthesizing Counterfactual Samples for Effective Image-Text Matching. In Proceedings of the 30th ACM International Conference on Multimedia, 4355–4364.
Wei, X.; Zhang, T.; Li, Y.; Zhang, Y.; and Wu, F. 2020. Multi-modality cross attention network for image and sentence matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10941–10950.
Yan, S.; Yu, L.; and Xie, Y. 2021. Discrete-continuous Action Space Policy Gradient-based Attention for Image-Text Matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8096–8105.
Yang, X.; Feng, F.; Ji, W.; Wang, M.; and Chua, T.-S. 2021. Deconfounded video moment retrieval with causal intervention. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1–10.
Yao, L.; Huang, R.; Hou, L.; Lu, G.; Niu, M.; Xu, H.; Liang, X.; Li, Z.; Jiang, X.; and Xu, C. 2021. FILIP: Fine-grained Interactive Language-Image Pre-Training. In International Conference on Learning Representations.
Young, P.; Lai, A.; Hodosh, M.; and Hockenmaier, J. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2: 67–78.
Zang, C.; Wang, H.; Pei, M.; and Liang, W. 2023. Discovering the Real Association: Multimodal Causal Reasoning in Video Question Answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19027–19036.
Zhang, D.; Zhang, H.; Tang, J.; Hua, X.-S.; and Sun, Q. 2020a. Causal intervention for weakly-supervised semantic segmentation. Advances in Neural Information Processing Systems, 33: 655–666.
Zhang, H.; Mao, Z.; Zhang, K.; and Zhang, Y. 2022a. Show Your Faith: Cross-Modal Confidence-Aware Network for Image-Text Matching. In Proceedings of the AAAI Conference on Artificial Intelligence.
Zhang, K.; Mao, Z.; Wang, Q.; and Zhang, Y. 2022b. Negative-Aware Attention Framework for Image-Text Matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15661–15670.
Zhang, K.; Zhang, L.; Hu, B.; Zhu, M.; and Mao, Z. 2023a. Unlocking the Power of Cross-Dimensional Semantic Dependency for Image-Text Matching. In Proceedings of the 31st ACM International Conference on Multimedia, 4828–4837.
Zhang, Q.; Lei, Z.; Zhang, Z.; and Li, S. Z. 2020b. Context-aware attention network for image-text retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3536–3545.
Zhang, S.; Song, X.; Li, W.; Bai, Y.; Yu, X.; and Jiang, S. 2023b. Layout-Based Causal Inference for Object Navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10792–10802.
HR-Pro: Point-Supervised Temporal Action Localization via Hierarchical Reliability Propagation

Huaxin Zhang1, Xiang Wang1, Xiaohao Xu2, Zhiwu Qing1, Changxin Gao1, Nong Sang1*
1 Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
2 University of Michigan, Ann Arbor
{zhanghuaxin, wxiang, qzw, cgao, nsang}@hust.edu.cn, {xiaohaox}@umich.edu

Abstract

Point-supervised Temporal Action Localization (PSTAL) is an emerging research direction for label-efficient learning. However, current methods mainly focus on optimizing the network either at the snippet level or the instance level, neglecting the inherent reliability of point annotations at both levels. In this paper, we propose a Hierarchical Reliability Propagation (HR-Pro) framework, which consists of two reliability-aware stages: Snippet-level Discrimination Learning and Instance-level Completeness Learning; both stages explore the efficient propagation of high-confidence cues in point annotations. For snippet-level learning, we introduce an online-updated memory to store reliable snippet prototypes for each class. We then employ a Reliability-aware Attention Block to capture both intra-video and inter-video dependencies of snippets, resulting in more discriminative and robust snippet representations. For instance-level learning, we propose a point-based proposal generation approach as a means of connecting snippets and instances, which produces high-confidence proposals for further optimization at the instance level. Through multi-level reliability-aware learning, we obtain more reliable confidence scores and more accurate temporal boundaries for predicted proposals. Our HR-Pro achieves state-of-the-art performance on multiple challenging benchmarks, including an impressive average mAP of 60.3% on THUMOS14.
Notably, our HR-Pro largely surpasses all previous point-supervised methods, and even outperforms several competitive fully-supervised methods. Code will be available at https://github.com/pipixin321/HR-Pro.

Introduction

Temporal action localization is a fundamental task in the video understanding field, which attempts to temporally localize and classify action instances in untrimmed videos; it has attracted increasing attention due to its potential applications in various fields (Lee, Ghosh, and Grauman 2012; Vishwakarma and Agrawal 2013). However, traditional fully-supervised methods (Lin et al. 2018, 2019; Xu et al. 2020; Qing et al. 2021; Wang et al. 2022b,a; Nag et al. 2022) require accurate temporal annotations, which are extremely time-consuming and labor-demanding, hindering practical applications. Therefore, many researchers (Wang et al. 2017; Shou et al. 2018; Wang et al. 2021b) start to pay attention to weakly-supervised temporal action localization (WSTAL), where only video-level labels are available.

*indicates corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Motivation illustration. Given the point-level annotation (in purple), we consider the action reliability prior both at the snippet level and the instance proposal level to enable reliability-aware action representation learning. Specifically, our insight is to propagate reliable prototypes to produce more discriminative snippet-level scores and more reliable and complete instance-level scores. Darker color (greener or more orange) indicates higher reliability. Here, a case with one action class is shown for brevity.
Although significant progress in WSTAL has been made, the lack of action boundary information imposes a great challenge for models to distinguish actions from backgrounds, resulting in unsatisfactory performance compared to fully-supervised methods. Under the WSTAL setting, to balance labeling cost and detection performance, Ma et al. (Ma et al. 2020) introduce the point-supervised temporal action localization (PSTAL) task, which provides only a timestamp label for each action instance. Their pioneering research indicates that point-level annotations consume almost the same labor cost as video-level annotations while providing richer guidance information. Subsequently, many works follow this setting and propose various customized solutions. Typically, LACP (Lee and Byun 2021) proposes to learn the completeness of action instances by searching for the optimal pseudo sequence with a greedy algorithm. Ju et al. (Ju et al. 2021) propose a seed frame detector to generate proposals and then perform regression and classification on the proposals.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7115
Although previous methods have achieved impressive results, they are still limited to optimizing the network at either the snippet level or the instance level. Snippet-level approaches (Ma et al. 2020; Lee and Byun 2021) may produce many unreliable (e.g., overcomplete or false-positive) detections because they only consider individual snippets and ignore the overall action instance. On the other hand, the instance-level approach (Ju et al. 2021) cannot achieve satisfactory optimization due to the absence of reliable proposals generated from snippet scores. We propose that the high reliability of point annotations can be propagated at both the snippet and instance levels. To accomplish this, we derive reliable prototypes at different levels by considering their confidence scores and relative positions to point annotations.
Leveraging these reliable prototypes through high-confidence information propagation enables the network to learn more discriminative snippet-level representations and more reliable instance-level proposals. Building upon these insights, we present a Hierarchical Reliability Propagation method that consists of two reliability-aware stages: Snippet-level Action Discrimination Learning and Instance-level Action Completeness Learning. These stages are illustrated in Fig. 1. (1) In the Snippet-level Action Discrimination Learning stage, our objective is to obtain discriminative snippet-level scores for generating more reliable proposals. To achieve this, we introduce an online-updated memory to store reliable prototypes for each class. Additionally, we propose a Reliability-aware Attention Block to propagate high-confidence cues from these reliable prototypes to other snippets. Through contrastive optimization of the memory and snippet features, we derive a more discriminative action representation. (2) In the Instance-level Action Completeness Learning stage, we refine the confidence scores and boundaries of the proposals through instance-level feature learning. We propose a point-based proposal generation method that produces reliable instance-level prototype proposals, along with high-confidence positive and negative proposals. These proposals' features are then fed into a Score Head and a Regression Head to predict the completeness score and the refined boundary. This prediction process is guided by reliable instance prototypes. As a result, the network can estimate more reliable instance-level scores and achieve more accurate temporal boundaries. To summarize, our contributions are as follows:
• Our proposed method, i.e., HR-Pro, is the first to leverage the inherent reliability of point annotations for both snippet-level and instance-level optimization in the PSTAL domain.
• At the snippet level, we propose a reliability-aware attention module and a reliable-memory-based contrastive loss to acquire discriminative snippet-level representations.
• At the instance level, we propose a reliability-based proposal generation and ranking method to produce high-confidence proposals for further optimization at the instance level.
• Our HR-Pro achieves state-of-the-art performance on four standard temporal action localization benchmarks, including an impressive average mAP of 60.3% on THUMOS14, which even surpasses several competitive fully-supervised methods.
Related Work
Fully-supervised temporal action localization. Mainstream fully-supervised methods can be divided into two categories, i.e., one-stage and two-stage. The one-stage methods (Xu et al. 2020; Zhang, Wu, and Li 2022) simultaneously predict the boundary and category of actions as the final detection result. The two-stage methods (Lin et al. 2018, 2019; Qing et al. 2021; Wang et al. 2022b, 2021a) first generate numerous proposals and then classify them. Despite the significant progress in recent years, these fully-supervised methods require expensive annotation costs, which limits their application. Weakly-supervised temporal action localization. To reduce the labeling cost, many weakly-supervised temporal action localization (WSTAL) methods (Wang et al. 2017; Shou et al. 2018; Liu et al. 2019) have been proposed where only video-level labels are available. Most recent WSTAL methods follow the localization-by-classification paradigm. They first use a snippet classifier to evaluate the class probability of each video snippet, i.e., the Class Activation Sequence (CAS), and then locate temporal boundaries using multiple predefined thresholds. Recently, many attempts have been made to enhance model performance. BaS-Net (Lee, Uh, and Byun 2020) introduces a background class and a background branch to suppress the class activation values of background snippets. ACM-Net (Qu et al.
2021) proposes three attention branches to separate foreground, background, and context. CoLA (Zhang et al. 2021) proposes a hard snippet mining algorithm and a snippet contrastive loss to refine hard snippet representations in the feature space. ACG-Net (Yang, Qin, and Huang 2022) and DGCNN (Shi et al. 2022) adopt graph networks to enhance feature embedding and model relationships between action snippets. ASM-Loc (He et al. 2022) proposes to use intra- and inter-segment attention for modeling action dynamics and capturing temporal dependencies. Due to the absence of frame-wise annotations, the performance of these models falls far behind that of fully-supervised methods. Point-supervised temporal action localization. To balance labeling cost and model performance, the point-supervised temporal action localization (PSTAL) task is proposed by (Ma et al. 2020), which provides a timestamp label for each action instance. To exploit the guidance information provided by point annotations, SF-Net (Ma et al. 2020) uses the single-frame label to mine adjacent pseudo labels for training classifiers. Ju et al. (Ju et al. 2021) use a two-stage approach, which proposes a seed frame detector to generate proposals and then performs regression and classification on the proposals. LACP (Lee and Byun 2021) searches for the optimal pseudo sequence through a greedy algorithm, which is used to guide the network to learn the completeness of action instances. However, these methods are limited to optimizing the network either at the snippet level or at the instance level, leading to less discriminative representations at the snippet level and unreliable scores at the instance level.
Figure 2: Overview of Hierarchical Reliability Propagation (HR-Pro). We propagate reliable prototypes during two-stage action localization learning, i.e., Snippet-level Discrimination Learning and Instance-level Completeness Learning. (1) Snippet level: we aim to obtain snippet representations with good inter-class discrimination and action-background discrimination. (2) Instance level: we aim to refine the confidence score and boundary of the coarse proposals generated from the snippet-level output.
Preliminaries
Problem Definition. For point-supervised temporal action localization (PSTAL), models are trained with a single-frame annotation for each untrimmed video. Each action instance is annotated with a temporal point $p_i$ and a one-hot vector $y_{p_i}$ indicating the action category $c$ with $y_{p_i}[c] = 1$. The video contains a total of $N$ action instances. During inference, we generate predicted results for each test video as $\{(s_m, e_m, c_m, p_m)\}_{m=1}^{M}$, where $s_m$ and $e_m$ are the start and end times of the $m$-th predicted action instance, $c_m$ is the predicted category, and $p_m$ is the confidence score. $M$ is the total number of predicted action instances.
Baseline Architecture.
The input video is first divided into multi-frame snippets; we then use a pre-trained video classification model to extract RGB and optical-flow features of each snippet and concatenate them along the channel dimension. The features of the input video are formulated as $X \in \mathbb{R}^{T \times D}$, where $T$ and $D$ indicate the number of snippets and the dimension of features, respectively. The features are then fed into a feature embedding layer to get task-specific embedded features $X_e \in \mathbb{R}^{T \times D}$. Following previous work (Lee and Byun 2021), we first input the embedded features into a snippet-level classifier to obtain the class-specific activation sequence (CAS) $S \in \mathbb{R}^{T \times C}$, where $C$ denotes the number of classes. To reduce the noise from background snippets, we use a convolutional layer to generate a class-agnostic attention sequence $A \in \mathbb{R}^{T}$. Then, we fuse them by element-wise multiplication to get the final snippet-level predictions $P \in \mathbb{R}^{T \times C}$, where $P = S \cdot A$. Baseline Optimization Loss. Based on the observation that each action instance contains one point annotation and that adjacent point annotations belong to different action instances, we select pseudo action snippets $\mathcal{T}^+ = \{t_i\}_{i=1}^{N_{act}}$ and pseudo background snippets $\mathcal{T}^- = \{t_j\}_{j=1}^{N_{bkg}}$ based on the point annotations and the class-agnostic attention sequence. Specifically, snippets near a point annotation with class-agnostic attention higher than a given threshold are labeled as pseudo-action snippets, which share the same action category as the point annotation. Conversely, snippets located between two adjacent point annotations with the lowest class-agnostic attention, or with class-agnostic attention lower than a given threshold, are labeled as pseudo-background snippets. We use these pseudo snippet samples for supervision:

$$\mathcal{L}_{base} = \frac{1}{N_{act}} \sum_{c=1}^{C} \sum_{t \in \mathcal{T}^+} FL(P_{t,c}) + \frac{1}{N_{bkg}} \sum_{t \in \mathcal{T}^-} FL(1 - A_t) \quad (1)$$

where $N_{act}$ and $N_{bkg}$ are the total numbers of pseudo action snippets and background snippets, respectively, and $FL$ denotes the focal loss function (Lin et al. 2017).
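To make the baseline concrete, the fused snippet-level prediction $P = S \cdot A$ and the pseudo-snippet supervision of Eq. (1) can be sketched in NumPy. This is an illustrative stand-in, not the paper's implementation: the classifier weights are random, the pseudo-snippet indices are hypothetical picks, and a plain binary focal term replaces the full focal loss.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, C = 20, 8, 4                          # snippets, feature dim, classes

X_e = rng.normal(size=(T, D))               # embedded snippet features
W_cls = rng.normal(size=(D, C))             # stand-in linear snippet classifier
S = 1 / (1 + np.exp(-(X_e @ W_cls)))        # class-specific activations (CAS), T x C
A = 1 / (1 + np.exp(-(X_e @ rng.normal(size=(D,)))))  # class-agnostic attention, T
P = S * A[:, None]                          # fused snippet-level prediction, T x C

def focal_loss(p, gamma=2.0, eps=1e-8):
    """Focal term for a positive target: -(1 - p)^gamma * log(p)."""
    return -((1 - p) ** gamma) * np.log(p + eps)

# hypothetical pseudo-action snippets (index, class) near point annotations,
# and hypothetical pseudo-background snippet indices
pos = [(2, 1), (3, 1), (11, 3)]
neg = [7, 15]

# Eq. (1): action term on P[t, c], background term on (1 - A[t])
L_base = (sum(focal_loss(P[t, c]) for t, c in pos) / len(pos)
          + sum(focal_loss(1 - A[t]) for t in neg) / len(neg))
```

In the paper this supervision is applied per class over the selected pseudo sets; the sketch only shows the shape of the two terms.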
Method: Hierarchical Reliability Propagation
Reliability can help the network mine more pseudo samples, which alleviates the sparsity of guidance in the point-supervised setting. We argue that the inherent reliability of point annotations can be propagated during both snippet-level and instance-level optimization. Therefore, we propose a Hierarchical Reliability Propagation framework, which divides action localization learning into two cascaded stages: (1) Snippet-level Action Discrimination Learning and (2) Instance-level Action Completeness Learning.
Snippet-level Action Discrimination Learning
Previous works have primarily focused on estimating temporal pseudo-labels to expand training samples, which restricts the propagation of high-confidence snippet information to within a single video. Thus, we introduce Reliability-aware Snippet-level Action Discrimination Learning, which stores reliable prototypes for each class and propagates high-confidence cues from these prototypes to other snippets in both intra-video and inter-video ways. Reliable Prototype Construction. As the snippet-level action representation, i.e., snippet features, only captures short-term and partial action states, the features can be noisy and unreliable. Thus, our insight is to construct reliable snippet prototypes via a de-noising mechanism for further reliability-guided optimization. Specifically, we create an online-updated prototype memory to store reliable prototypes for each class during representation learning, enabling us to leverage feature information from the entire dataset to mitigate the noise of each feature. Formally, we denote the items in memory by $m_c \in \mathbb{R}^D$ $(c = 1, 2, \ldots, C)$. Under the PSTAL setting, we initialize the prototype pool by selecting features at the point annotations for each class.
This is done by computing the average of the snippet features $x_{p_i}$ corresponding to the point annotations $p_i$ of class $c$, normalized by the total number of point annotations $N_c$ for class $c$ across all training videos. The initial prototype memory is defined as:

$$m_c^0 = \frac{1}{N_c} \sum_{i=1}^{N_c} x_{p_i} \quad (y_{p_i}[c] = 1) \quad (2)$$

Next, we update the prototype of each class using the features of pseudo-action snippets, which is formulated as:

$$m_c^t = \mu\, m_c^{(t-1)} + (1 - \mu)\, x_{t_i}^{(t)} \quad (3)$$

Here, $\mu$ denotes the momentum coefficient of the update. As shown in Fig. 3, to derive the prototype, we input the snippet-level features extracted by the feature extractor into a Reliability-aware Attention Block (RAB). The RAB is specifically designed to capture both intra-video and inter-video dependencies of snippets, enabling the modeling of complementary temporal relationships. Long-term temporal dependency modeling is crucial for long videos, as supported by previous works (Zhang, Wu, and Li 2022; Wang et al. 2022c; Xu et al. 2022; Wang et al. 2023). However, attention tends to become sparse and focus mainly on discriminative snippets within the same video, resulting in limited information interaction. Therefore, the RAB incorporates the insight of propagating global class information from a reliable prototype (i.e., snippet) memory, thereby enhancing the robustness of snippet features and increasing the attention on less discriminative snippets.
Figure 3: Architecture detail of the Reliability-aware Attention Block (RAB). Reliable prototype memory (in green) is injected into the original snippet features (in grey) to introduce reliable cues via the attention mechanism.
Technically, we employ a linear layer $f_q$ to project the video features onto the corresponding query.
Subsequently, we concatenate ($[\,;\,]$ denotes concatenation) the video features $X$ with the prototype features $m_i$ stored in the reliable memory bank. Then, we use separate linear layers, $f_k$ and $f_v$, to project the concatenated features into key and value, respectively:

$$Q = f_q(X) \quad (4)$$

$$K = f_k([X; m_1; \ldots; m_C]), \quad V = f_v([X; m_1; \ldots; m_C]) \quad (5)$$

Next, we multiply the query with the transposed key to obtain the non-local attention $attn \in \mathbb{R}^{T \times (T+C)}$:

$$attn = \mathrm{softmax}(Q \cdot K^{T} / \sqrt{D}) \quad (6)$$

Furthermore, we multiply the attention with the value and pass the result through a Feed Forward Layer (FFL) composed of cascaded FC-GELU-FC layers. LayerNorm (LN) and residual connections are used for normalization and retention of the original information. The output reliability-aware features are fed into the subsequent network layers for temporal action localization. Reliability-aware Optimization. To push apart the features of pseudo-action snippets and the prototypes of different classes in the reliable prototype pool, and to push apart the features of pseudo-action and background snippets within the same video, we follow a contrastive learning manner and propose a reliability-aware snippet-contrastive loss $\mathcal{L}_{contra}$:

$$\mathcal{L}_{contra} = -\frac{1}{C} \sum_{c=1}^{C} \sum_{t_i^c} \log\!\left( \frac{s(x_{t_i^c}, m_c)}{s(x_{t_i^c}, m_c) + \sum_{\forall k \neq c} s(x_{t_i^c}, m_k)} + \frac{s(x_{t_i^c}, m_c)}{s(x_{t_i^c}, m_c) + \sum_{\forall t_j \in \mathcal{T}^-} s(x_{t_j}, m_c)} \right) \quad (7)$$

where $t_i^c$ indicates a pseudo action snippet of class $c$, and $s(\cdot, \cdot)$ is the similarity function formulated as $s(x_1, x_2) = \exp(\bar{x}_1 \cdot \bar{x}_2 / \tau)$ with a temperature parameter $\tau$, where $\bar{x}$ represents the normalized features of $x$. Finally, the overall training objective for Snippet-level Action Discrimination Learning includes both the baseline loss $\mathcal{L}_{base}$ and our reliability-aware snippet optimization loss $\mathcal{L}_{contra}$ weighted by a parameter $\lambda_1$:

$$\mathcal{L}_{snippet} = \mathcal{L}_{base} + \lambda_1 \mathcal{L}_{contra} \quad (8)$$

Instance-level Action Completeness Learning
Snippet-level Representation Learning empowers our model with robust snippet-level
action discrimination capabilities. However, a snippet-level pipeline can still produce numerous unsatisfactory detections despite discriminative snippet representations, because the proposal score is unreliable without considering the whole instance (e.g., running in background frames has a high snippet score for the long-jump category, but it is not a complete long-jump action). To fully explore the temporal structure of actions at the instance level and optimize the score ranking of proposals, we introduce Instance-level Action Completeness Learning, which aims to refine the proposals' confidence scores and boundaries via instance-level feature learning under the guidance of reliable instance prototypes. Reliable Prototype Construction. To leverage the instance-level priors of point annotations during training, we propose a point-based proposal generation method that yields reliable instance-level prototype proposals, along with high-confidence positive and negative proposals. Initially, we produce candidate proposals for each predicted class by selecting snippets with class-specific activation scores higher than a threshold $\theta_P$ (we use multiple thresholds in the implementation). The OIC (outer-inner-contrast) score (Shou et al. 2018) is calculated for each candidate proposal to gauge its reliability, denoted $p_{OIC}$. Lower reliability scores indicate incomplete or over-complete predictions. We formulate each candidate proposal as $(s_i, e_i, c_i, p_{OIC})$; these proposals are then divided into two types based on their reliability scores and temporal positions: (1) Reliable Proposals (RP): for each point of each class, the proposal that contains this point and has the highest reliability (i.e., OIC score); (2) Positive Proposals (PP): all remaining candidate proposals. To ensure a balanced number of positive and negative samples, we group snippets with class-agnostic attention scores lower than a pre-defined threshold $\theta_A$, which derives Negative Proposals (NP).
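A minimal sketch of this proposal-generation step: threshold a 1-D class activation into candidate segments and score each with an outer-inner-contrast (OIC) style score. The inflation ratio, the single threshold, and the toy activation sequence are illustrative; the paper uses multiple thresholds and per-class activation sequences.

```python
import numpy as np

def oic_score(act, s, e, inflate=0.25):
    """OIC-style score of proposal [s, e): mean activation inside minus
    the mean over inflated outer borders (assumed border width here)."""
    pad = max(1, int(round(inflate * (e - s))))
    os_, oe = max(0, s - pad), min(len(act), e + pad)
    inner = act[s:e].mean()
    outer = np.concatenate([act[os_:s], act[e:oe]])
    return inner - (outer.mean() if outer.size else 0.0)

def generate_proposals(act, thresh):
    """Group consecutive snippets above `thresh` into candidate proposals."""
    above = act > thresh
    proposals, s = [], None
    for t, a in enumerate(above):
        if a and s is None:
            s = t
        elif not a and s is not None:
            proposals.append((s, t, oic_score(act, s, t)))
            s = None
    if s is not None:
        proposals.append((s, len(act), oic_score(act, s, len(act))))
    return proposals

act = np.array([.1, .2, .8, .9, .7, .1, .1, .6, .7, .1])
props = generate_proposals(act, 0.5)   # two segments: [2, 5) and [7, 9)
```

Ranking these candidates by OIC score and by whether they contain a point annotation then yields the RP/PP split described above.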
Reliability-aware Optimization. For each proposal, we select all snippet features within the proposal region as its center features $I_c$; we then expand the boundary of the proposal by a ratio $\varepsilon$ to obtain the starting and ending regions, which yield the starting features $I_s$ and ending features $I_e$ of the proposal. $\varepsilon$ is set to 0.25 in practice. (1) To predict the completeness score of each proposal, we use boundary-sensitive proposal features following (Ren et al. 2023) as input to the Score Head $\phi_s$:

$$\hat{p}_{comp} = \phi_s([I_c - I_s;\, I_c;\, I_c - I_e]) \quad (9)$$

where $I_s$, $I_c$, and $I_e$ are the max-pooled features of the starting, center, and ending regions along the temporal dimension, respectively. Then, the reliability-aware supervision for the instance-level completeness score can be formulated as:

$$\mathcal{L}_{score} = \frac{1}{N_p + N_n} \sum_{i=1}^{N_p + N_n} \mathrm{SmoothL1}(\hat{p}_{comp}, g_{comp}) \quad (10)$$

where $N_p$ and $N_n$ are the total numbers of positive and negative proposals, respectively, and $g_{comp}$ represents the Intersection over Union (IoU) between the proposal and the most reliable proposal (RP) that matches it. (2) To obtain more accurate action proposal boundaries, we input the starting and ending features of each proposal in PP into the Regression Head $\phi_r$ to predict the offsets of the start and end times, i.e., $\Delta\hat{s}$ and $\Delta\hat{e}$:

$$\{\Delta\hat{s}, \Delta\hat{e}\} = \phi_r([I_s; I_e]) \quad (11)$$

Then, the refined proposal can be obtained:

$$\hat{s}_r = s_p - \Delta\hat{s}\, w_p, \quad \hat{e}_r = e_p - \Delta\hat{e}\, w_p \quad (12)$$

where $w_p = e_p - s_p$ is the length of the proposal. Then, the reliability-aware supervision for instance-level boundary regression can be formulated as:

$$\mathcal{L}_{reg} = \frac{1}{N_p} \sum_{i=1}^{N_p} \mathrm{SmoothL1}(\hat{r}_{comp}, 1) \quad (13)$$

where $\hat{r}_{comp}$ represents the IoU between the refined proposal and the most reliable proposal (RP) that matches it.
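The boundary-sensitive features of Eq. (9) and the boundary refinement of Eq. (12) reduce to a few array operations. In this sketch the region pooling, the proposal, and the predicted offsets are illustrative placeholders for the learned Score and Regression Heads:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16

def pooled_regions(feat, s, e, eps=0.25):
    """Max-pool center / start / end region features of proposal [s, e).
    The exact region definition is an assumption for illustration."""
    w = max(1, int(round(eps * (e - s))))
    Ic = feat[s:e].max(axis=0)                        # center region
    Is = feat[max(0, s - w):s + w].max(axis=0)        # starting region
    Ie = feat[e - w:min(len(feat), e + w)].max(axis=0)  # ending region
    return Is, Ic, Ie

feat = rng.normal(size=(30, D))   # snippet features of one video
s, e = 10, 20                     # a toy proposal
Is, Ic, Ie = pooled_regions(feat, s, e)

# Eq. (9): boundary-sensitive input to the Score Head
score_in = np.concatenate([Ic - Is, Ic, Ic - Ie])     # vector of size 3*D

# Eq. (12): refine boundaries with (hypothetical) predicted offsets
d_s, d_e = 0.1, -0.05
w_p = e - s
s_r, e_r = s - d_s * w_p, e - d_e * w_p
```

The differences $I_c - I_s$ and $I_c - I_e$ make the score input sensitive to how sharply the action starts and ends, which is what the completeness score has to judge.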
Finally, the reliability-aware instance-level completeness learning has an overall objective function that consists of both the score and regression losses, weighted by a parameter $\lambda_2$:

$$\mathcal{L}_{instance} = \mathcal{L}_{score} + \lambda_2 \mathcal{L}_{reg} \quad (14)$$

Temporal Action Localization Inference
We first extract the snippet-level prediction $P_c$ of each predicted class and the class-agnostic attention $A$ of each video, which are used to generate candidate proposals, represented as $(s_i, e_i, c_i, p_{OIC})$. Then, we input the instance-level features of each proposal to the Score and Regression Heads, which yields two sets of predicted proposals: the score-refined part $(s_i, e_i, c_i, p_{OIC} + \hat{p}_{comp})$ and the boundary-refined part $(\hat{s}_r, \hat{e}_r, c_i, p_{OIC} + \hat{p}_r)$, where $\hat{p}_r$ is the completeness score of the refined proposal estimated by the trained Score Head. Finally, we combine them and employ class-wise soft-NMS (Bodla et al. 2017) to remove duplicate proposals.
Experiments
Experimental Setup
Datasets. We conduct our experiments on four popular action localization datasets, with only point-level annotations used for training. We utilize the point-level annotations provided in (Lee and Byun 2021) for fair comparison. (1) THUMOS14 (Idrees et al. 2017) provides 413 untrimmed sports videos for 20 action categories, including 200 videos for training and 213 videos for testing; each video contains an average of 15 action instances. Action instance lengths and video lengths vary widely, which makes this dataset challenging. (2) GTEA (Fathi, Ren, and Rehg 2011) provides 28 videos of 7 fine-grained daily activities in a kitchen. Four different subjects perform each activity, and each video contains about 1,800 frames. (3) BEOID (Damen et al. 2014) provides 58 video samples with 30 action classes and an average duration of 60s.
There is an average of 12.5 action instances per video. (4) ActivityNet 1.3 (Caba Heilbron et al. 2015) provides 10,024 training, 4,926 validation, and 5,044 test videos with 200 action classes. Each video includes 1.6 action instances on average. Evaluation metric. We follow the standard protocol and evaluate with mean Average Precision (mAP) under different intersection-over-union (IoU) thresholds. A proposal is regarded as positive only if its IoU exceeds the set threshold and the category prediction is correct. Implementation Details. For a fair comparison, we follow the existing method (Lee and Byun 2021) to divide each video into 16-frame snippets and use a two-stream I3D network pre-trained on Kinetics-400 (Carreira and Zisserman 2017) as the feature extractor. For THUMOS14, we use the Adam optimizer with a learning rate of 1e-4 and a weight decay of 1e-3, and the batch size is set to 16. The hyper-parameters are set by grid search: $\tau = 0.1$, $\mu = 0.999$, $\lambda_1 = \lambda_2 = 1$. The video-level threshold is set to 0.5, $\theta_P$ spans from 0 to 0.25 with a step size of 0.05, and $\theta_A$ spans from 0 to 0.1 with a step size of 0.01. The number of RABs is set to 2.

| Supervision | Method | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | AVG (0.1:0.5) | AVG (0.3:0.7) | AVG (0.1:0.7) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Frame-level (Full) | P-GCN (ICCV'19) | 69.5 | 67.8 | 63.6 | 57.8 | 49.1 | - | - | 61.6 | - | - |
| | TCANet (CVPR'21) | - | - | 60.6 | 53.2 | 44.6 | 36.8 | 26.7 | - | 44.3 | - |
| | AFSD (CVPR'21) | - | - | 67.3 | 62.4 | 55.5 | 43.7 | 31.1 | - | 52.0 | - |
| Video-level (Weak) | CoLA (CVPR'21) | 66.2 | 59.5 | 51.5 | 41.9 | 32.2 | 22.0 | 13.1 | 50.3 | 32.1 | 40.9 |
| | TS-PCA (CVPR'21) | 67.6 | 61.1 | 53.4 | 43.4 | 34.3 | 24.7 | 13.7 | 52.0 | 33.9 | 42.6 |
| | UGCT (CVPR'21) | 69.2 | 62.9 | 55.5 | 46.5 | 35.9 | 23.8 | 11.4 | 54.0 | 34.6 | 43.6 |
| | FAC-Net (ICCV'21) | 67.6 | 62.1 | 52.6 | 44.3 | 33.4 | 22.5 | 12.7 | 52.0 | 33.1 | 42.2 |
| | ACG-Net (AAAI'22) | 68.1 | 62.6 | 53.1 | 44.6 | 34.7 | 22.6 | 12.0 | 52.6 | 33.4 | 42.5 |
| | RSKP (CVPR'22) | 71.3 | 65.3 | 55.8 | 47.5 | 38.2 | 25.4 | 12.5 | 55.6 | 35.9 | 45.1 |
| | DELU (ECCV'22) | 71.5 | 66.2 | 56.5 | 47.7 | 40.5 | 27.2 | 15.3 | 56.5 | 37.4 | 46.4 |
| | Li et al. (CVPR'23) | - | - | 56.2 | 47.8 | 39.3 | 27.5 | 15.2 | - | 37.2 | - |
| | Zhou et al. (CVPR'23) | 74.0 | 69.4 | 60.7 | 51.8 | 42.7 | 26.2 | 13.1 | 59.7 | 38.9 | 48.3 |
| Point-level (Weak) | SF-Net (ECCV'20) | 68.3 | 62.3 | 52.8 | 42.2 | 30.5 | 20.6 | 12.0 | 51.2 | 31.6 | 41.2 |
| | Ju et al. (ICCV'21) | 72.3 | 64.7 | 58.2 | 47.1 | 35.9 | 23.0 | 12.8 | 55.6 | 35.4 | 44.9 |
| | LACP (ICCV'21) | 75.7 | 71.4 | 64.6 | 56.5 | 45.3 | 34.5 | 21.8 | 62.7 | 44.5 | 52.8 |
| | CRRC-Net (TIP'22) | 77.8 | 73.5 | 67.1 | 57.9 | 46.6 | 33.7 | 19.8 | 64.6 | 45.1 | 53.8 |
| | HR-Pro (Ours) | 85.6 | 81.6 | 74.3 | 64.3 | 52.2 | 39.8 | 24.8 | 71.6↑7.0 | 51.1↑6.0 | 60.3↑6.5 |

Table 1: Comparisons of detection performance (mAP@IoU, %) on THUMOS14. We include methods under video-level and frame-level supervision for reference. We utilize the same annotations under point-level supervision as (Lee and Byun 2021). ↑ denotes the relative performance gain between our method (the best) and the second-best method under the point-level supervision setting.

| Method | GTEA 0.1 | GTEA 0.3 | GTEA 0.5 | GTEA 0.7 | GTEA AVG[0.1:0.7] | BEOID 0.1 | BEOID 0.3 | BEOID 0.5 | BEOID 0.7 | BEOID AVG[0.1:0.7] | ANet1.3 0.5 | ANet1.3 0.75 | ANet1.3 0.95 | ANet1.3 AVG[0.5:0.95] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SF-Net (ECCV'20) | 58.0 | 37.9 | 19.3 | 11.9 | 31.0 | 62.9 | 40.6 | 16.7 | 3.5 | 30.9 | - | - | - | - |
| Ju et al. (ICCV'21) | 59.7 | 38.3 | 21.9 | 18.1 | 33.7 | 63.2 | 46.8 | 20.9 | 5.8 | 34.9 | - | - | - | - |
| Li et al. (CVPR'21) | 60.2 | 44.7 | 28.8 | 12.2 | 36.4 | 71.5 | 40.3 | 20.3 | 5.5 | 34.4 | - | - | - | - |
| LACP (ICCV'21) | 63.9 | 55.7 | 33.9 | 20.8 | 43.5 | 76.9 | 61.4 | 42.7 | 25.1 | 51.8 | 40.4 | 24.6 | 5.7 | 25.1 |
| CRRC-Net (TIP'22) | - | - | - | - | - | - | - | - | - | - | 39.8 | 24.1 | 5.9 | 24.0 |
| HR-Pro (Ours) | 72.6 | 61.1 | 37.3 | 17.5 | 47.3↑3.8 | 78.5 | 72.1 | 55.3 | 26.1 | 59.4↑7.6 | 42.8 | 27.2 | 8.0 | 27.1↑2.0 |

Table 2: Comparisons of detection performance (mAP@IoU, %) on the GTEA, BEOID, and ActivityNet 1.3 datasets.

Snippet-level components: Lbase, Lcontra, RAB; instance-level components: Lreg, Lscore, RP, NP; last column: AVG mAP [0.1:0.7].
✓ | 49.4
✓ ✓ | 51.5
✓ ✓ ✓ | 54.7↑5.3
✓ ✓ ✓ ✓ | 56.1
✓ ✓ ✓ ✓ | 57.1
✓ ✓ ✓ ✓ ✓ | 57.8
✓ ✓ ✓ ✓ ✓ | 56.8
✓ ✓ ✓ ✓ ✓ | 58.1
✓ ✓ ✓ ✓ ✓ ✓ | 59.1
✓ ✓ ✓ ✓ ✓ ✓ ✓ | 60.3↑10.9
✓ ✓ ✓ ✓ ✓ | 53.9
✓ ✓ ✓ ✓ ✓ ✓ | 56.2
✓ ✓ ✓ ✓ ✓ ✓ ✓ | 60.3↑10.9
Table 3: Ablation study on THUMOS14. ↑ denotes the relative gain between each setting and the baseline (Lbase only).
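At inference, duplicates among the combined proposals are suppressed with class-wise soft-NMS (Bodla et al. 2017). A minimal Gaussian soft-NMS for the proposals of one class might look as follows; sigma, the keep threshold, and the toy proposals are illustrative, not the paper's settings:

```python
import numpy as np

def t_iou(a, b):
    """Temporal IoU of two (start, end) segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(proposals, sigma=0.5, keep_thresh=1e-3):
    """Gaussian soft-NMS over (start, end, score) proposals of one class:
    instead of discarding overlapping proposals, decay their scores."""
    props = [list(p) for p in proposals]
    kept = []
    while props:
        i = max(range(len(props)), key=lambda j: props[j][2])
        best = props.pop(i)
        kept.append(tuple(best))
        for p in props:
            p[2] *= np.exp(-t_iou(best[:2], p[:2]) ** 2 / sigma)
        props = [p for p in props if p[2] > keep_thresh]
    return kept

props = [(0.0, 10.0, 0.9), (1.0, 11.0, 0.8), (20.0, 30.0, 0.7)]
out = soft_nms(props)   # the heavily overlapping second proposal is demoted
```

Compared with hard NMS, the overlapping proposal survives with a decayed score instead of being removed, which is useful when two true actions are close together.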
Comparison with State-of-the-Art Methods
We evaluate the effectiveness of our proposed method by comparing it against the most recent fully-supervised and weakly-supervised temporal action localization methods. THUMOS14. Our proposed method, HR-Pro, achieves state-of-the-art performance on the THUMOS14 testing set for point-level weakly-supervised temporal action localization. Compared to previous state-of-the-art methods in Table 1, HR-Pro has an average mAP of 60.3% for IoU thresholds of 0.1:0.7, outperforming the prior SoTA method (Fu, Gao, and Xu 2022) by 6.5% for the same thresholds. Notably, our point-supervised method achieves comparable performance with competitive fully-supervised methods, such as AFSD (51.1% vs. 52.0% in average mAP for IoU thresholds of 0.3:0.7). Moreover, HR-Pro demonstrates superior detection performance compared to video-level weakly-supervised methods with similar labeling cost, thanks to the position information provided by point labels.
Figure 4: Qualitative results for two action categories, GolfSwing (left) and HammerThrow (right), on THUMOS14. We compare the detection results of HR-Pro and LACP. The orange and blue bars indicate the ground truth and predicted localization results, respectively; blue curves represent snippet-level predictions. Prediction errors are marked with red bounding boxes.
Figure 5: Visualization of detection results on the THUMOS14 dataset before (left) and after (right) instance-level completeness learning. The x-axis and y-axis represent time and the reliability score, respectively. We observe that the discrepancy between good and bad predictions is enlarged significantly after instance-level completeness learning.
GTEA & BEOID & ActivityNet 1.3.
We demonstrate the generality and superiority of HR-Pro on diverse benchmarks in Table 2. Our method significantly outperforms existing methods, achieving improvements of 3.8%, 7.6%, and 2.0% on GTEA, BEOID, and ActivityNet 1.3, respectively.
Ablation Study
To further analyze the contribution of the model components over the baseline setting (with a detection result of 49.4%), we perform a set of ablation studies on THUMOS14. The results are summarized in Table 3. Snippet-level Discrimination Learning. The introduction of the contrastive loss increases performance by 2.1%. Contrastive optimization not only reduces classification errors but also improves the model's ability to distinguish between action and background, thereby improving detection performance. The introduction of the Reliability-aware Attention Block (RAB) further improves detection performance by 3.2%. We speculate that the RAB increases the attention on less reliable action snippets, thus detecting more non-discriminative actions. Instance-level Completeness Learning. We see that the introduction of the regression loss and the score loss significantly increases detection performance. The introduction of reliable proposals and negative proposals generated based on point annotations further boosts the results. These results demonstrate that the components of instance-level completeness learning complement each other and make the network estimate proposal scores and boundaries more accurately.
Qualitative Results
Qualitative Comparison. In Fig. 4, we compare our HR-Pro with LACP for temporal action localization on test videos in THUMOS14. Our model shows more accurate detection of action instances.
Specifically, for the GolfSwing action, our method effectively distinguishes between action and background snippets, mitigating the false action predictions that LACP struggles with; for the HammerThrow action, our method detects more complete snippets than LACP, which has lower activation values on non-discriminative action snippets. Effect of Instance-level Completeness Learning. Fig. 5 shows that completeness learning helps our method reduce the scores of over-complete and false-positive proposals, leading to improved detection results.
Conclusion
This paper introduces a new framework named HR-Pro for point-supervised temporal action localization. HR-Pro comprises two reliability-aware stages that efficiently propagate high-confidence cues from point annotations at both the snippet and instance levels, which enables the network to learn more discriminative snippet representations and more reliable proposals. Extensive experiments on multiple benchmarks demonstrate that HR-Pro significantly outperforms existing methods and achieves state-of-the-art results, which demonstrates the effectiveness of our method and the potential of point annotations.
Acknowledgements
This work is supported by the National Natural Science Foundation of China under grant U22B2053.
References
Bodla, N.; Singh, B.; Chellappa, R.; and Davis, L. S. 2017. Soft-NMS: Improving object detection with one line of code. In ICCV, 5561–5569.
Caba Heilbron, F.; Escorcia, V.; Ghanem, B.; and Carlos Niebles, J. 2015. ActivityNet: A large-scale video benchmark for human activity understanding. In CVPR, 961–970.
Carreira, J.; and Zisserman, A. 2017. Quo vadis, action recognition? A new model and the Kinetics dataset. In CVPR, 6299–6308.
Damen, D.; Leelasawassuk, T.; Haines, O.; Calway, A.; and Mayol-Cuevas, W. W. 2014. You-Do, I-Learn: Discovering Task Relevant Objects and their Modes of Interaction from Multi-User Egocentric Video. In BMVC, 3.
Fathi, A.; Ren, X.; and Rehg, J. M. 2011. Learning to recognize objects in egocentric activities. In CVPR, 3281–3288. Fu, J.; Gao, J.; and Xu, C. 2022. Compact Representation and Reliable Classification Learning for Point-Level Weakly-Supervised Action Localization. IEEE Transactions on Image Processing, 7363–7377. He, B.; Yang, X.; Kang, L.; Cheng, Z.; Zhou, X.; and Shrivastava, A. 2022. ASM-Loc: action-aware segment modeling for weakly-supervised temporal action localization. In CVPR, 13925–13935. Idrees, H.; Zamir, A. R.; Jiang, Y.-G.; Gorban, A.; Laptev, I.; Sukthankar, R.; and Shah, M. 2017. The THUMOS challenge on action recognition for videos “in the wild”. Computer Vision and Image Understanding, 1–23. Ju, C.; Zhao, P.; Chen, S.; Zhang, Y.; Wang, Y.; and Tian, Q. 2021. Divide and conquer for single-frame temporal action localization. In ICCV, 13455–13464. Lee, P.; and Byun, H. 2021. Learning action completeness from points for weakly-supervised temporal action localization. In ICCV, 13648–13657. Lee, P.; Uh, Y.; and Byun, H. 2020. Background suppression network for weakly-supervised temporal action localization. In AAAI, 11320–11327. Lee, Y. J.; Ghosh, J.; and Grauman, K. 2012. Discovering important people and objects for egocentric video summarization. In CVPR, 1346–1353. Lin, T.; Liu, X.; Li, X.; Ding, E.; and Wen, S. 2019. Bmn: Boundary-matching network for temporal action proposal generation. In ICCV, 3889–3898. Lin, T.; Zhao, X.; Su, H.; Wang, C.; and Yang, M. 2018. Bsn: Boundary sensitive network for temporal action proposal generation. In ECCV, 3–19. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017. Focal loss for dense object detection. In ICCV, 2980–2988. Liu, Z.; Wang, L.; Zhang, Q.; Gao, Z.; Niu, Z.; Zheng, N.; and Hua, G. 2019. 
Weakly supervised temporal action localization through contrast based evaluation networks. In ICCV, 3899–3908. Ma, F.; Zhu, L.; Yang, Y.; Zha, S.; Kundu, G.; Feiszli, M.; and Shou, Z. 2020. Sf-net: Single-frame supervision for temporal action localization. In ECCV, 420–437. Nag, S.; Zhu, X.; Song, Y.-Z.; and Xiang, T. 2022. Proposal-free temporal action detection via global segmentation mask learning. In ECCV, 645–662. Qing, Z.; Su, H.; Gan, W.; Wang, D.; Wu, W.; Wang, X.; Qiao, Y.; Yan, J.; Gao, C.; and Sang, N. 2021. Temporal context aggregation network for temporal action proposal refinement. In CVPR, 485–494. Qu, S.; Chen, G.; Li, Z.; Zhang, L.; Lu, F.; and Knoll, A. 2021. Acm-net: Action context modeling network for weakly-supervised temporal action localization. arXiv preprint arXiv:2104.02967. Ren, H.; Yang, W.; Zhang, T.; and Zhang, Y. 2023. Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization. In CVPR, 2394–2404. Shi, H.; Zhang, X.-Y.; Li, C.; Gong, L.; Li, Y.; and Bao, Y. 2022. Dynamic Graph Modeling for Weakly-Supervised Temporal Action Localization. In ACMMM, 3820–3828. Shou, Z.; Gao, H.; Zhang, L.; Miyazawa, K.; and Chang, S.-F. 2018. Autoloc: Weakly-supervised temporal action localization in untrimmed videos. In ECCV, 154–171. Vishwakarma, S.; and Agrawal, A. 2013. A survey on activity recognition and behavior understanding in video surveillance. The Visual Computer, 983–1009. Wang, L.; Xiong, Y.; Lin, D.; and Van Gool, L. 2017. Untrimmednets for weakly supervised action recognition and detection. In CVPR, 4325–4334. Wang, Q.; Zhang, Y.; Zheng, Y.; and Pan, P. 2022a. Rcl: Recurrent continuous localization for temporal action detection. In CVPR, 13566–13575. Wang, X.; Qing, Z.; Huang, Z.; Feng, Y.; Zhang, S.; Jiang, J.; Tang, M.; Gao, C.; and Sang, N. 2021a. Proposal relation network for temporal action detection. 
arXiv preprint arXiv:2106.11812. Wang, X.; Qing, Z.; Huang, Z.; Feng, Y.; Zhang, S.; Jiang, J.; Tang, M.; Shao, Y.; and Sang, N. 2021b. Weakly-supervised temporal action localization through local-global background modeling. arXiv preprint arXiv:2106.11811. Wang, X.; Zhang, H.; Zhang, S.; Gao, C.; Shao, Y.; and Sang, N. 2022b. Context-aware Proposal Network for Temporal Action Detection. arXiv preprint arXiv:2206.09082. Wang, X.; Zhang, S.; Qing, Z.; Gao, C.; Zhang, Y.; Zhao, D.; and Sang, N. 2023. MoLo: Motion-augmented Long-short Contrastive Learning for Few-shot Action Recognition. In CVPR, 18011–18021. Wang, X.; Zhang, S.; Qing, Z.; Tang, M.; Zuo, Z.; Gao, C.; Jin, R.; and Sang, N. 2022c. Hybrid Relation Guided Set Matching for Few-Shot Action Recognition. In CVPR, 19948–19957. Xu, M.; Zhao, C.; Rojas, D. S.; Thabet, A.; and Ghanem, B. 2020. G-tad: Sub-graph localization for temporal action detection. In CVPR, 10156–10165. Xu, X.; Wang, J.; Li, X.; and Lu, Y. 2022. Reliable propagation-correction modulation for video object segmentation. In AAAI, 1171–1196. Yang, Z.; Qin, J.; and Huang, D. 2022. ACGNet: Action complement graph network for weakly-supervised temporal action localization. In AAAI, 3090–3098. Zhang, C.; Cao, M.; Yang, D.; Chen, J.; and Zou, Y. 2021. Cola: Weakly-supervised temporal action localization with snippet contrastive learning. In CVPR, 16010–16019. Zhang, C.-L.; Wu, J.; and Li, Y. 2022. Actionformer: Localizing moments of actions with transformers. In ECCV, 492–510.
AvatarVerse: High-Quality & Stable 3D Avatar Creation from Text and Pose Huichao Zhang1*, Bowen Chen1*, Hao Yang1, Liao Qu1, 2, Xu Wang1 Li Chen1, Chao Long1, Feida Zhu1, Daniel Du1, Min Zheng1 1ByteDance, Beijing, China. 2Carnegie Mellon University, PA, USA {zhanghuichao.hc, chenbowen.cbw, wangxu.ailab, chenli.phd, longchao, zhufeida, dukang.daniel, zhengmin.666}@bytedance.com, liaoq@andrew.cmu.edu, yanghao.alexis@foxmail.com Abstract Creating expressive, diverse, and high-quality 3D avatars from highly customized text descriptions and pose guidance is a challenging task, due to the intricacy of 3D modeling and texturing required to ensure fine details across various styles (realistic, fictional, etc.). We present AvatarVerse, a stable pipeline for generating expressive, high-quality 3D avatars from nothing but text descriptions and pose guidance. Specifically, we introduce a 2D diffusion model conditioned on the DensePose signal to establish 3D pose control of avatars through 2D images, which enhances view consistency in partially observed scenarios. It addresses the infamous Janus Problem and significantly stabilizes the generation process. Moreover, we propose a progressive high-resolution 3D synthesis strategy, which substantially improves the quality of the created 3D avatars. With these components, the proposed AvatarVerse pipeline achieves zero-shot modeling of 3D avatars that are not only more expressive but also of higher quality and fidelity than previous work. Rigorous qualitative evaluations and user studies showcase AvatarVerse’s superiority in synthesizing high-fidelity 3D avatars, leading to a new standard in high-quality and stable 3D avatar creation. Our project page is: https://avatarverse3d.github.io/ . 
1 Introduction The creation of high-quality 3D avatars has garnered significant interest due to their widespread applications in domains such as game production, social media and communication, augmented and virtual reality (AR/VR), and human-computer interaction. Traditional manual construction of these intricate 3D models is a labor-intensive and time-consuming process, requiring thousands of hours from skilled artists possessing extensive aesthetic and 3D modeling expertise. Consequently, automating the generation of high-quality 3D avatars using only natural language descriptions holds great research prospects with the potential to save resources, which is also the goal of our work. Recently, great efforts have been made in reconstructing high-fidelity 3D avatars from multi-view videos (Jiang et al. 2022; Li et al. 2023d; Zheng et al. 2023; Wang et al. 2023a; Isik et al. 2023) or reference images (Wang et al. 2021; Xiu et al. 2022).
*These authors contributed equally. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
These methods primarily rely on limited visual priors from videos or reference images, leading to a constrained ability to generate creative avatars from complex text prompts. Diffusion models have been applied to various realms (Ho et al. 2022; Li et al. 2023c; Ruan et al. 2022; Li et al. 2023b; Nie et al. 2022) in recent years. Specifically, in 2D image generation, they (Rombach et al. 2021; Zhang and Agrawala 2023; Saharia et al. 2022) demonstrate considerable creativity, primarily due to the availability of large-scale text-image pairs. Nevertheless, the limited diversity of 3D models presents challenges to effectively training a 3D diffusion model. Several studies (Poole et al. 2022; Cao et al. 2023; Huang et al. 2023; Kolotouros et al. 2023) have investigated the use of pre-trained text-image diffusion models to optimize Neural Radiance Fields (NeRF) (Mildenhall et al. 
2020) for generating high-fidelity 3D models. Yet, stable creation of high-quality 3D avatars exhibiting various poses, appearances, and shapes remains a difficult task. Employing common score distillation sampling (SDS) (Poole et al. 2022) to guide NeRF optimization without additional control tends to bring in the Janus (multi-face) problem. The avatars produced by current approaches tend to exhibit noticeable blurriness and coarseness, leading to the absence of high-resolution local texture details, accessories, and other relevant features. To cope with these weaknesses, we propose AvatarVerse, a novel framework designed for generating high-quality and stable 3D avatars from textual descriptions and pose guidance. We first train a new ControlNet conditioned on human DensePose signals (Güler, Neverova, and Kokkinos 2018) over 800K images. An SDS loss conditional on the 2D DensePose signal is then implemented on top of the ControlNet. In this way, we obtain precise view correspondence between different 2D views as well as between every 2D view and the 3D space. Our approach not only enables pose control of the generated avatars, but also eliminates the Janus Problem suffered by most existing methods. It thus ensures a more stable and view-consistent avatar creation process. Additionally, benefiting from the accurate and flexible supervision signals provided by DensePose, the generated avatars can be highly aligned with the joints of the SMPL model, enabling simple and effective skeletal binding and control. 
While relying solely on the DensePose-conditioned ControlNet may result in local artifacts and blurry details, we introduce a progressive high-resolution generation strategy to enhance the fidelity and the detail of local geometry. To alleviate the coarseness of the generated avatar, we incorporate a smoothness loss, which regularizes the synthesis procedure by encouraging a smoother gradient of the density voxel grid within our computationally efficient explicit Neural Radiance Fields (NeRF).
Figure 1: High-quality 3D avatars generated by AvatarVerse based on a simple text description (panel prompts include “Elsa in Frozen Disney”, “Woody in Toy Story”, “Captain America”, “Buzz Lightyear”, “Nick Wilde from film Zootopia”, “Simba from The Lion King”, “a Viking”, “a body builder wearing a tanktop”, “a person dressed at the Venice Carnival”, “a man wearing a white tanktop and shorts”, “Master Chief in Halo Series”, “Jake Sully in Avatar series”, “The Flash”, “Deadpool”, “Super Saiyan Goku”, “a security guard”, and “a karate master wearing a black belt”).
The overall contributions are as follows:
• We present AvatarVerse, a method that can automatically create a high-quality 3D avatar according to nothing but a text description and a reference human pose.
• We present the DensePose-Conditioned Score Distillation Sampling Loss, an approach that facilitates pose-aware 3D avatar synthesis and effectively mitigates the Janus problem, thereby enhancing system stability.
• We bolster the quality of the produced 3D avatars via a progressive high-resolution generation strategy. This method, through a meticulous coarse-to-fine refining process, synthesizes 3D avatars with superior detail, encompassing elements like hands, accessories, and beyond.
• AvatarVerse delivers exceptional performance, excelling in both quality and stability. 
Rigorous qualitative evaluations, complemented by comprehensive user studies, underscore AvatarVerse’s supremacy in crafting high-fidelity 3D avatars, thereby setting a new benchmark in stable, zero-shot 3D avatar creation of the highest quality. 2 Related Work 2.1 Text-Guided 3D Content Generation The success in text-guided 2D image generation has paved the way for the development of text-guided 3D content generation methods. DreamFields (Jain et al. 2021) and CLIPMesh (Khalid et al. 2022) utilize the CLIP model (Radford et al. 2021) to optimize underlying 3D representations such as meshes and NeRF. DreamFusion (Poole et al. 2022) first proposes the score distillation sampling (SDS) loss to obtain supervision from a pre-trained diffusion model during 3D generation. Latent-NeRF (Metzer et al. 2022) improves upon DreamFusion by optimizing a NeRF that operates the diffusion process in a latent space. ProlificDreamer (Wang et al. 2023b) proposes variational score distillation and produces high-resolution and high-fidelity results. Despite their promising performance in general 3D content generation, these methods often produce suboptimal results when generating avatars, exhibiting issues like low quality, the Janus (multi-face) problem, and incorrect body parts. In contrast, our AvatarVerse enables accurate and high-quality generation of 3D avatars from text prompts. 2.2 Text-Guided 3D Avatar Generation Avatar-CLIP (Hong et al. 2022) first initializes 3D human geometry with a shape VAE network and utilizes CLIP (Radford et al. 2021) to facilitate geometry sculpting and texture generation. DreamAvatar (Cao et al. 2023) and AvatarCraft (Jiang et al. 2023) employ the SMPL model as a shape prior and utilize pretrained text-to-image diffusion models to generate 3D avatars. DreamFace (Zhang et al. 2023) introduces a coarse-to-fine scheme to create personalized 3D facial structures. HeadSculpt (Han et al. 
2023) generates 3D head avatars by leveraging landmark-based control and a learned textual embedding representing the back-view appearance of heads. Concurrent with our work, DreamWaltz (Huang et al. 2023) presents 3D-consistent occlusion-aware score distillation sampling, which incorporates 3D-aware skeleton conditioning for view-aligned supervision. Constrained by the original training data, the skeleton-conditioned diffusion model may still exhibit view inconsistencies, such as failing to generate the backside of desired avatars or struggling to generate specific body parts when provided with partial skeleton information. Furthermore, the sparse nature of the skeleton makes it challenging for the model to determine avatar contours and edges, leading to low-quality results. In contrast, our proposed DensePose-conditioned ControlNet ensures high-quality, view-consistent image generation across various viewpoints and body parts, including full body, legs, head, and more, guaranteeing superior avatar quality. 2.3 High-Quality 3D Avatar Generation Recently, there has been a growing focus on achieving high-quality or high-fidelity 3D generation and reconstruction. Some methods attempt to generate high-fidelity 3D human avatars from multi-view RGB videos (Jiang et al. 2022; Li et al. 2023d; Zheng et al. 2023; Wang et al. 2023a; Isik et al. 2023). Other work (Lin et al. 2022) has explored a coarse-to-fine methodology, specifically by optimizing a high-resolution latent diffusion model to refine a textured 3D mesh model. In parallel to our work, DreamHuman (Kolotouros et al. 2023) zooms in and renders a 64 × 64 image for 6 important body regions during optimization. However, limited by the computation needs of MipNeRF-360, it can only produce low-resolution avatars without high-resolution details. Also, DreamHuman uses the SMPL shape for direct geometric supervision, which tends to produce skin-tight avatars. 
Our method, on the other hand, is more controllable and flexible. AvatarVerse introduces a progressive high-resolution generation strategy, which gradually decreases the camera’s radius and focuses on distinct body parts, facilitating the creation of a diverse range of accessories, clothing, and other elements. Our use of the progressive grid also ensures fine-grained generation. 3 Methodology In this section, we present AvatarVerse, a fully automatic pipeline that can make a realistic 3D avatar from nothing but a text description and a body pose. After introducing some preliminaries, we first explain the DensePose-conditioned SDS loss, which facilitates pose-aware 3D avatar synthesis and effectively mitigates the Janus problem. We then introduce novel strategies that enhance the synthesis quality: the progressive high-resolution generation strategy and the avatar surface smoothing strategy. 3.1 DensePose SDS Loss Prior research (Poole et al. 2022; Lin et al. 2022) predominantly employs supplementary text prompts, such as “front view” or “overhead view”, to enhance view consistency. However, reliance solely on text prompts proves inadequate for accurately conditioning a 2D diffusion model on arbitrary views. This inadequacy engenders instability in 3D model synthesis, giving rise to issues like the Janus problem. As a solution, we propose using DensePose (Güler, Neverova, and Kokkinos 2018) as a more robust control signal, as depicted in Figure 2. We choose DensePose as the condition because it delivers precise localization of 3D body parts in 2D images, affording intricate details and boundary conditions that may be overlooked by skeletal or other types of conditions. Notably, it exhibits resilience in challenging scenarios, facilitating accurate control even when body parts are partially concealed. 
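As an illustrative sketch of the score-distillation update formalized below in Eq. (1): the diffusion model's noise residual is weighted and backpropagated through the differentiable renderer. The names `w_t`, `eps_hat`, `eps`, and `dx_dtheta` are stand-ins for the weighting term, the conditioned noise prediction, the injected noise, and the render Jacobian; this is not the paper's implementation.

```python
import numpy as np

def sds_grad(w_t, eps_hat, eps, dx_dtheta):
    """Illustrative SDS gradient: grad_theta = w(t) * (eps_hat - eps) @ dx/dtheta.

    eps_hat   : (D,) noise predicted by the (DensePose-conditioned) diffusion model
    eps       : (D,) noise actually injected into the rendered image x = g(theta, P)
    dx_dtheta : (D, P) Jacobian of the rendered pixels w.r.t. the NeRF parameters
    """
    return w_t * (eps_hat - eps) @ dx_dtheta

# Toy check: when the model's prediction matches the injected noise,
# the distillation gradient vanishes and theta stops changing.
rng = np.random.default_rng(0)
eps = rng.standard_normal(8)
J = rng.standard_normal((8, 3))
assert np.allclose(sds_grad(1.0, eps, eps, J), 0.0)
```

The key design point, as in DreamFusion-style SDS, is that the diffusion model itself is frozen; only the rendered image (and hence the 3D parameters) receives the gradient.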
We first train a ControlNet (Zhang and Agrawala 2023) conditioned on DensePose part-labeled annotations using the DeepFashion (Liu et al. 2016) dataset. Figure 3 illustrates the capability of our ControlNet to generate high-quality, view-consistent images across various viewpoints and body parts such as full body, legs, head, and more. Given a specific camera viewpoint and pose P, we generate the DensePose condition image c by rendering the part-labeled SMPL model with the corresponding pose P. The conditioned SDS loss is:
∇_θ L_{P-SDS}(ϕ, x = g(θ, P)) = E_{t,ϵ}[ w(t) (ϵ̂ − ϵ) ∂x/∂θ ]   (1)
ϵ̂ = ϵ_ϕ(z_t; y, t, c = h(SMPL, P))   (2)
Here, g and h represent the NeRF render function and the SMPL render function, respectively. The NeRF model and the SMPL pose model share identical camera viewpoints. This alignment of viewpoints enables coherent and consistent representations between the scene captured by NeRF and the corresponding human pose modeled by SMPL, allowing for better avatar generation. Our DensePose-conditioned ControlNet can generate various non-skin-tight realistic and fictional avatars, as shown in Figure 3 (c). 3.2 Progressive High-Resolution Generation Previous studies commonly apply the SDS loss over the entire body; such global guidance often fails to produce high-quality details, especially for areas like the hands and face. These approaches lack effective guidance mechanisms to ensure the generation of high-quality, detailed geometry and textures.
Figure 2: The overview of AvatarVerse. (a) Avatar generation; (b) progressive high-resolution generation with (1) progressive grid, (2) bbox tightening, (3) progressive radius, and (4) focus mode. Our network takes a text prompt and DensePose signal as input to optimize an explicit NeRF via a DensePose-COCO pre-trained ControlNet. We use strategies including progressive grid, progressive radius, and focus mode to generate high-resolution and high-quality 3D avatars.
To address this limitation, we propose a variety of guidance strategies aimed at promoting the generation of accurate and detailed representations, including progressive grid, focus mode, and progressive radius. Progressive Grid A progressive training strategy is commonly used in 2D generation and 3D reconstruction methods (Karras et al. 2019; Liu et al. 2020; Sun, Sun, and Chen 2021), and we find it critical in our method for neat and efficient 3D avatar generation. We set a predetermined number of voxels N_v as the final model resolution and double the voxel number after a certain number of optimization steps. The voxel size s_v is updated accordingly. During the early stage of training, we only need to generate a rough avatar shape. By allocating fewer grids, we can reduce the learning space and minimize floating artifacts. This strategy enables a gradual refinement of the avatar throughout the optimization process, allowing the model to adaptively allocate computational resources. Also, the early stage of NeRF optimization is dominated by free space (i.e., space with low density). Motivated by this fact, we aim to find the areas of the coarse avatar and allocate computational and memory resources to these important regions. To delineate the targeted area, we employ a density threshold to filter the scene and use a bounding box (bbox) to tightly enclose this area. Let d_x, d_y, d_z represent the lengths of the tightened bbox; the voxel size can then be computed as s_v = ((d_x × d_y × d_z) / N_v)^(1/3). By shrinking the lengths of the bbox, the voxel size decreases, enabling higher resolution and more voxels around the avatar. 
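The bbox-tightened voxel sizing above can be sketched as follows (the voxel budget and bbox extents are illustrative values, not the paper's configuration):

```python
def voxel_size(d_x, d_y, d_z, n_voxels):
    """Cube-root rule from the text: s_v = ((d_x * d_y * d_z) / N_v) ** (1/3).

    With a fixed voxel budget N_v, shrinking the bbox extents yields smaller
    voxels, i.e. resolution is concentrated around the avatar region.
    """
    return ((d_x * d_y * d_z) / n_voxels) ** (1.0 / 3.0)

# Tightening the bbox to half of each extent halves the voxel size.
s_loose = voxel_size(2.0, 2.0, 2.0, 4096)  # 0.125
s_tight = voxel_size(1.0, 1.0, 1.0, 4096)  # 0.0625
assert abs(s_loose / s_tight - 2.0) < 1e-9
```

This makes the trade-off explicit: resolution is not increased globally, but reallocated to the occupied region as the density field sharpens.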
This would enhance the model’s ability to capture and model intricate details, such as fine-grained body contours, facial features, and clothing folds. Progressive Radius Let pg_ckpt be the set of checkpoint steps. When reaching a training step in pg_ckpt, we decrease the radius of the camera by 20%. This allows for gradual rendering of finer details stage by stage. By applying the conditioned SDS loss to smaller regions of the avatar, the model can capture and emphasize intricate features, ultimately producing more realistic and visually appealing outputs. Focus Mode Similarly, to generate better intricacy in specific body parts, we introduce a focus mode (as illustrated in Fig. 2 (b)) during both the coarse and fine stages. Thanks to the SMPL prior, we can easily compute the raw body-part positions for any given pose. By placing the camera close to important body parts, loss calculation can be performed on a very small avatar region at 512 × 512 resolution. Owing to the stable performance of our DensePose ControlNet, as shown in Fig. 2, partial bodies can be generated without additional computational resources. Focus mode thus facilitates the creation of high-quality avatar details. Mesh Refinement To render fine-grained high-resolution avatars within reasonable memory constraints and computation budgets, we further incorporate deformable tetrahedral grids (Lin et al. 2022; Shen et al. 2021) to learn textured 3D meshes of the generated avatars. Similar to (Lin et al. 2022), we use the trained explicit NeRF as the initialization for the mesh geometry, and optimize the mesh via backpropagation using the DensePose-conditioned SDS gradient (Eq. 1). Figure 3: Qualitative results of our DensePose-conditioned ControlNet. (a) 10 generated images controlled by DensePose with varying viewpoints and body parts. 
(b) 10 corresponding images with the same viewpoints controlled by human pose (OpenPose) signals; the skeleton condition often fails to generate the backside of the avatar (4th column of (b)) and struggles with part generation (the last two columns). (c) Non-skin-tight generation results for both realistic and fictional avatars.
Figure 4: Qualitative comparisons with four SOTA methods (DreamFusion, DreamAvatar, DreamWaltz, DreamHuman). We show several non-cherry-picked results generated by AvatarVerse. Our method generates higher-resolution details and maintains a fine-grained geometry compared with other methods. (a): ”Spiderman”; ”a man wearing a white tanktop and shorts”, (b): ”Joker”; ”a karate master wearing a Black belt”, (c): ”Stormtrooper”; ”a Roman soldier wearing his armor”.
3.3 Avatar Surface Smoothing Maintaining a globally coherent avatar shape for explicit grids is challenging due to the high degree of freedom and lack of spatial coherence. Individual optimization of each voxel limits information sharing across grids, resulting in a less smooth surface for the generated avatar and some local minima. To address this problem, we follow the definition of the Gaussian convolution G in (Wu et al. 2022) and include a modified smoothness regularization formulated as:
L_s(V) = ‖G(V, k_g, σ_g) − V‖²₂   (3)
L = L_{P-SDS} + λ · L_s(V)   (4)
Here, k_g and σ_g represent the kernel size and the standard deviation. We apply this smoothness term to the gradient of the density voxel grid V, yielding a gradient smoothness loss L_s(∇V(density)). This encourages a smoother surface and mitigates the presence of noisy points. The overall loss is defined in (4), where λ denotes the smoothness coefficient. 4 Experiments In this section, we illustrate the effectiveness of our proposed method. We demonstrate the efficacy of each proposed strategy and provide a detailed comparison against recent state-of-the-art methods. 
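As an illustrative sketch of the surface-smoothing regularizer in Eq. (3), shown here on a 1-D grid with a hand-rolled Gaussian kernel (the paper applies the Gaussian convolution to the gradient of the 3-D density voxel grid; the kernel size and sigma below are stand-in values):

```python
import numpy as np

def gaussian_kernel1d(k, sigma):
    """Normalized 1-D Gaussian kernel of odd size k."""
    x = np.arange(k) - (k - 1) / 2.0
    w = np.exp(-0.5 * (x / sigma) ** 2)
    return w / w.sum()

def smoothness_loss(v, k=5, sigma=1.0):
    """Sketch of Eq. (3): L_s(V) = || G(V, k, sigma) - V ||_2^2."""
    blurred = np.convolve(v, gaussian_kernel1d(k, sigma), mode="same")
    return float(np.sum((blurred - v) ** 2))

# A constant grid incurs only a small boundary-leakage penalty from the
# 'same'-mode convolution; a noisy grid is penalized much more heavily.
flat = np.ones(32)
noisy = flat + np.random.default_rng(0).normal(0.0, 0.5, 32)
assert smoothness_loss(noisy) > smoothness_loss(flat)
```

Because blurring is a contraction toward the local mean, the penalty ‖G(V) − V‖² is exactly a measure of high-frequency content, which is why adding it to the SDS objective suppresses isolated noisy voxels without flattening the overall shape.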
4.1 Implementation Details We follow (Sun, Sun, and Chen 2021) to implement the explicit NeRF in our method. For each text prompt, we train AvatarVerse for 5000 and 4000 iterations in the coarse stage and mesh refinement stage, respectively. The whole generation process takes around 2 hours on a single NVIDIA A100 GPU. We include initialization, DensePose training, and progressive high-resolution generation details in this section. For more comprehensive experiment details, we refer the reader to our Supplementary Material. Initialization To aid in the early stages of optimization, we adopt a technique inspired by (Poole et al. 2022) and introduce an ellipsoidal density “blob” around the origin. The dimensions of the “blob” along the XYZ axes are determined based on the range of coordinates in the SMPL pose model. Furthermore, we incorporate an additional SMPL-derived density bias (Cao et al. 2023) to facilitate avatar generation. DensePose Training We annotate the DeepFashion dataset (Liu et al. 2016) using a pretrained DensePose (Güler, Neverova, and Kokkinos 2018) model, resulting in over 800K image pairs. The ControlNet is trained using these image pairs with BLIP2-generated text prompts (Li et al. 2023a). The diffusion model employed is SD1.5. Progressive High-Resolution Generation For the progressive grid, we double the number of voxels at 500, 1500, and 2000 iterations in the coarse stage. After 3000 steps, we shrink the bounding box to the region where the density exceeds 0.1. Our progressive radius consists of three stages, where the camera radius ranges from 1.4 to 2.1, 1.0 to 1.5, and 0.8 to 1.2, respectively. We reduce the radius at 1000 and 2000 iterations across both stages. Our focus mode starts from the 1000th step in the coarse stage and is consistently employed throughout the mesh refinement phase. 4.2 Qualitative Results Comparison with SOTA Methods We present qualitative comparisons with DreamFusion (Poole et al. 2022), DreamAvatar (Cao et al. 
2023), DreamWaltz (Huang et al. 2023), and DreamHuman (Kolotouros et al. 2023) in Fig. 4. Our method consistently outperforms them in terms of both geometry and texture quality. The surface of the avatars generated by our method is exceptionally clear, owing to our progressive high-resolution generation strategy. In comparison to DreamHuman, the avatars produced by our method exhibit a richer array of details across all cases, encompassing skin, facial features, clothing, and more. Flexible Avatar Generation In Fig. 5, we demonstrate the capability of our method in generating 3D partial avatars, which is not achievable by other existing methods due to the absence of DensePose control. Our method enables partial generation by directly modifying the input DensePose signal, eliminating the need for extra descriptive information such as “The head of...” or “The upper body of...”. This allows us to generate partial avatars of various types thanks to the attached semantics, including full-body, half-body, head-only, hand-only, etc. Additionally, our AvatarVerse is capable of generating avatars in various poses, showcasing our stable control over view consistency.
Figure 5: Flexible Avatar Generation. (a) Partial Generation. Results are generated with the same prompts “Stormtrooper” and “Batman”. (b) Arbitrary Pose Generation.
Figure 6: Quantitative results of the user study. Preference between different methods: Ours 85.0%, DreamWaltz 13.0%, DreamAvatar 1.5%, DreamFusion 0.5%; against DreamHuman: Ours 81.0%, DreamHuman 19.0%.
4.3 User Study To further assess the quality of our generated 3D avatars, we conduct user studies comparing the performance of our results with four SOTA methods under the same text prompts. We randomly select 30 generated outcomes (presented as rendered rotating videos) and ask 16 volunteers to vote for their favorite results based on geometry and texture quality. In Fig. 6, we compare AvatarVerse with DreamFusion (Poole et al. 
2022), DreamAvatar (Cao et al. 2023), and DreamWaltz (Huang et al. 2023), demonstrating a significant preference for our method over other approaches. We also compare our method with DreamHuman (Kolotouros et al. 2023) in terms of realistic human generation. A remarkable 81% of volunteers voted in favor of our AvatarVerse. 4.4 Ablation Study Effectiveness of Progressive Strategies To evaluate the design choices of AvatarVerse, we conduct an ablation study on the effectiveness of (b) the progressive grid, (c) the progressive radius, (d) the focus mode, and (e) the mesh refinement. We sequentially add these components and report the results in Fig. 7. The initial result lacks detail (e.g., no sword on the back, no armguards) and exhibits numerous floating artifacts; the overall quality is blurry and unclear. Upon incorporating the progressive grid, more voxels are gathered around the avatar region, which introduces more details into the avatar. By progressively narrowing the camera distance, the model can leverage the detail inherent in the latent diffusion, thereby eliminating a large number of floating artifacts and enhancing local details, such as the sword on the back. The focus mode further zooms in and utilizes a resolution of 512 × 512 to target and optimize certain body parts, generating high-definition and intricate local details. The mesh refinement further optimizes the 3D mesh of the coarse avatar, resulting in finer avatar texture.
Figure 7: Impact of progressive strategies. (a) no progressive strategy; (b) add progressive grid; (c) add progressive radius upon (b); (d) add focus mode upon (c); (e) add mesh refinement, our full method.
Effectiveness of DensePose Control Figure 8 illustrates the influence of various control signals. 
When conditioned by the skeleton, the model can generate avatars that more closely resemble human figures. However, the avatar's edges appear blurry and it still faces a severe Janus problem. By incorporating DensePose control into our framework, we achieve more precise avatar boundaries, intricate details, and stable avatar control, resulting in a substantial improvement in the overall quality and appearance of the generated avatars.

Figure 8: Impact of control signal. (a) without additional control; (b) with skeleton control; (c) with our DensePose control. For each type, we show the RGB, normal, depth, and the corresponding control signal.

Effectiveness of Surface Smoothing Avatar surface smoothing plays a critical role in the AvatarVerse framework, as it guarantees that the generated avatars exhibit compact geometry and smooth surfaces. As shown in Figure 9, by finding a balance between the smooth loss and the conditioned SDS loss, the visual quality and realism of the avatars are greatly improved.

Figure 9: Impact of surface smoothing strategy. (a) without surface smoothing; (b) with surface smoothing. Results are generated with the same text prompt.

5 Conclusion In this paper, we introduce AvatarVerse, a novel framework designed to generate high-quality and stable 3D avatars from textual prompts and poses. By employing our trained DensePose-conditioned ControlNet, we facilitate stable partial or full-body control during explicit NeRF optimization. Our 3D avatar outcomes exhibit superior texture and geometry quality, thanks to our progressive high-resolution generation strategy. Furthermore, the generated avatars are easily animatable through skeletal binding, as they exhibit high alignment with the joints of the SMPL model. Through comprehensive experiments and user studies, we demonstrate that our AvatarVerse significantly outperforms previous and contemporary approaches.
We believe that our approach advances the generation of high-quality 3D avatars in the era of neural rendering and prompt-based interaction.

References

Cao, Y.; Cao, Y.-P.; Han, K.; Shan, Y.; and Wong, K.-Y. K. 2023. DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models. ArXiv, abs/2304.00916.
Güler, R. A.; Neverova, N.; and Kokkinos, I. 2018. DensePose: Dense Human Pose Estimation in the Wild. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7297–7306.
Han, X.; Cao, Y.; Han, K.; Zhu, X.; Deng, J.; Song, Y.-Z.; Xiang, T.; and Wong, K.-Y. K. 2023. HeadSculpt: Crafting 3D Head Avatars with Text. ArXiv, abs/2306.03038.
Ho, J.; Salimans, T.; Gritsenko, A.; Chan, W.; Norouzi, M.; and Fleet, D. J. 2022. Video Diffusion Models. ArXiv, abs/2204.03458.
Hong, F.; Zhang, M.; Pan, L.; Cai, Z.; Yang, L.; and Liu, Z. 2022. AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars. ACM Trans. Graph., 41: 161:1–161:19.
Huang, Y.; Wang, J.; Zeng, A.; Cao, H.; Qi, X.; Shi, Y.; Zha, Z.; and Zhang, L. 2023. DreamWaltz: Make a Scene with Complex 3D Animatable Avatars. ArXiv, abs/2305.12529.
Isik, M.; Rünz, M.; Georgopoulos, M.; Khakhulin, T.; Starck, J.; de Agapito, L.; and Nießner, M. 2023. HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion. ArXiv, abs/2305.06356.
Jain, A.; Mildenhall, B.; Barron, J. T.; Abbeel, P.; and Poole, B. 2021. Zero-Shot Text-Guided Object Generation with Dream Fields. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 857–866.
Jiang, R.; Wang, C.; Zhang, J.; Chai, M.; He, M.; Chen, D.; and Liao, J. 2023. AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control. ArXiv, abs/2303.17606.
Jiang, T.; Chen, X.; Song, J.; and Hilliges, O. 2022. InstantAvatar: Learning Avatars from Monocular Video in 60 Seconds. ArXiv, abs/2212.10550.
Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; and Aila, T. 2019. Analyzing and Improving the Image Quality of StyleGAN. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8107–8116.
Khalid, N. M.; Xie, T.; Belilovsky, E.; and Popa, T. 2022. CLIP-Mesh: Generating textured meshes from text using pretrained image-text models. SIGGRAPH Asia 2022 Conference Papers.
Kolotouros, N.; Alldieck, T.; Zanfir, A.; Bazavan, E. G.; Fieraru, M.; and Sminchisescu, C. 2023. DreamHuman: Animatable 3D Avatars from Text. ArXiv, abs/2306.09329.
Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023a. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. In ICML.
Li, X.; Chen, Y.; Lin, C.-C.; Singh, R.; Raj, B.; and Liu, Z. 2023b. Completing Visual Objects via Bridging Generation and Segmentation. ArXiv, abs/2310.00808.
Li, X.; Wen, Y.; Yang, M.; Wang, J.; Singh, R.; and Raj, B. 2023c. Rethinking Voice-Face Correlation: A Geometry View. Proceedings of the 31st ACM International Conference on Multimedia.
Li, Z.; Zheng, Z.; Liu, Y.; Zhou, B.; and Liu, Y. 2023d. PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling. ArXiv, abs/2304.13006.
Lin, C.-H.; Gao, J.; Tang, L.; Takikawa, T.; Zeng, X.; Huang, X.; Kreis, K.; Fidler, S.; Liu, M.-Y.; and Lin, T.-Y. 2022. Magic3D: High-Resolution Text-to-3D Content Creation. ArXiv, abs/2211.10440.
Liu, L.; Gu, J.; Lin, K. Z.; Chua, T.-S.; and Theobalt, C. 2020. Neural Sparse Voxel Fields. ArXiv, abs/2007.11571.
Liu, Z.; Luo, P.; Qiu, S.; Wang, X.; and Tang, X. 2016. DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1096–1104.
Metzer, G.; Richardson, E.; Patashnik, O.; Giryes, R.; and Cohen-Or, D. 2022. Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures. arXiv preprint arXiv:2211.07600.
Mildenhall, B.; Srinivasan, P.
P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ArXiv, abs/2003.08934.
Nie, W.; Guo, B.; Huang, Y.; Xiao, C.; Vahdat, A.; and Anandkumar, A. 2022. Diffusion Models for Adversarial Purification. In International Conference on Machine Learning.
Poole, B.; Jain, A.; Barron, J. T.; and Mildenhall, B. 2022. DreamFusion: Text-to-3D using 2D Diffusion. ArXiv, abs/2209.14988.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; Krueger, G.; and Sutskever, I. 2021. Learning Transferable Visual Models From Natural Language Supervision. In International Conference on Machine Learning.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2021. High-Resolution Image Synthesis with Latent Diffusion Models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10674–10685.
Ruan, L.; Ma, Y.; Yang, H.; He, H.; Liu, B.; Fu, J.; Yuan, N. J.; Jin, Q.; and Guo, B. 2022. MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10219–10228.
Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, S. K. S.; Ayan, B. K.; Mahdavi, S. S.; Lopes, R. G.; Salimans, T.; Ho, J.; Fleet, D. J.; and Norouzi, M. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. ArXiv, abs/2205.11487.
Shen, T.; Gao, J.; Yin, K.; Liu, M.-Y.; and Fidler, S. 2021. Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis. ArXiv, abs/2111.04276.
Sun, C.; Sun, M.; and Chen, H.-T. 2021. Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5449–5459.
Wang, C.; Chai, M.; He, M.; Chen, D.; and Liao, J. 2021. Cross-Domain and Disentangled Face Manipulation With 3D Guidance. IEEE Transactions on Visualization and Computer Graphics, 29: 2053–2066.
Wang, L.; Zhao, X.; Sun, J.; Zhang, Y.; Zhang, H.; Yu, T.; and Liu, Y. 2023a. StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video. ArXiv, abs/2305.00942.
Wang, Z.; Lu, C.; Wang, Y.; Bao, F.; Li, C.; Su, H.; and Zhu, J. 2023b. ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation. ArXiv, abs/2305.16213.
Wu, T.; Wang, J.; Pan, X.; Xu, X.; Theobalt, C.; Liu, Z.; and Lin, D. 2022. Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction. ArXiv, abs/2208.12697.
Xiu, Y.; Yang, J.; Cao, X.; Tzionas, D.; and Black, M. J. 2022. ECON: Explicit Clothed humans Obtained from Normals. ArXiv, abs/2212.07422.
Zhang, L.; and Agrawala, M. 2023. Adding Conditional Control to Text-to-Image Diffusion Models. ArXiv, abs/2302.05543.
Zhang, L.; Qiu, Q.; Lin, H.; Zhang, Q.; Shi, C.; Yang, W.; Shi, Y.; Yang, S.; Xu, L.; and Yu, J. 2023. DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance. ArXiv, abs/2304.03117.
Zheng, Z.; Zhao, X.; Zhang, H.; Liu, B.; and Liu, Y. 2023. AvatarReX: Real-time Expressive Full-body Avatars. ArXiv, abs/2305.04789.
Improving the Adversarial Transferability of Vision Transformers with Virtual Dense Connection

Jianping Zhang1, Yizhan Huang1, Zhuoer Xu2, Weibin Wu3*, Michael R. Lyu1
1Department of Computer Science and Engineering, The Chinese University of Hong Kong
2Tiansuan Lab, Antgroup
3School of Software Engineering, Sun Yat-sen University
{jpzhang, yzhuang22, lyu}@cse.cuhk.edu.hk, xuzhuoer.xze@antgroup.com, wuwb36@mail.sysu.edu.cn

Abstract

With the great achievement of vision transformers (ViTs), transformer-based approaches have become the new paradigm for solving various computer vision tasks. However, recent research shows that, similar to convolutional neural networks (CNNs), ViTs are still vulnerable to adversarial attacks. To explore the shared deficiency of models with different structures, researchers begin to analyze the cross-structure adversarial transferability, which is still under-explored. Therefore, in this work, we focus on ViT attacks to improve the cross-structure transferability between transformer-based and convolution-based models. Previous studies fail to thoroughly investigate the influence of the components inside the ViT models on adversarial transferability, leading to inferior performance. To overcome the drawback, we launch a motivating study by linearly down-scaling the gradients of components inside the ViT models to analyze their influence on adversarial transferability. Based on the motivating study, we find that the gradient of the skip connection influences transferability the most and believe that back-propagating gradients from deeper blocks can enhance transferability. Therefore, we propose the Virtual Dense Connection method (VDC). Specifically, without changing the forward pass, we first recompose the original network to add virtual dense connections. Then we back-propagate gradients of deeper Attention maps and Multi-layer Perceptron (MLP) blocks via virtual dense connections when generating adversarial samples.
Extensive experiments confirm the superiority of our proposed method over the state-of-the-art baselines, with an 8.2% improvement in transferability between ViT models and a 7.2% improvement in cross-structure transferability from ViTs to CNNs.

Introduction

Transformers have become the dominant solutions in the natural language processing field with state-of-the-art performance on various downstream tasks. Vision transformers (ViTs) (Dosovitskiy et al. 2020) first adapt the self-attention mechanism of the transformers (Vaswani et al. 2017) to the computer vision field for image recognition. Subsequently, diverse transformer-based approaches (Touvron et al. 2021a; Heo et al. 2021) have been proposed to better adapt the transformer structure to the computer vision field.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Nowadays, ViTs have become the new paradigm for solving various vision tasks such as object detection (Zhang et al. 2021) and semantic segmentation (Zheng et al. 2021), with competitive performance compared with convolutional neural networks (CNNs). Recent research reveals that both convolution-based and transformer-based models are vulnerable to adversarial attacks (Wu et al. 2020c; Zhang et al. 2023b). Adversarial attacks inject human-imperceptible noise into the original image to mislead deep neural network (DNN) models with high confidence. This phenomenon raises security concerns with the wide application of deep neural networks (Zhang et al. 2023c,d; Wu et al. 2019). Furthermore, the adversarial examples crafted by the attacking algorithms manifest adversarial transferability. That is, the adversarial examples generated from a local surrogate model have the ability to mislead the target victim model (Wu et al. 2020b; Zhang et al. 2022).
Therefore, adversarial transferability provides an efficient way to craft adversarial examples for testing victim models without any access to the victim model under the black-box setting. Since victim models are usually deployed in the black-box setting, it is imperative to devise transferable attacking algorithms to assess their robustness before their deployment in real-world applications. Transfer-based attacks have achieved high attack success rates against convolution-based models. Nevertheless, recent studies have observed the robustness of transformer-based models and the low cross-structure transferability when adversarial examples generated by attacking transformer-based models are transferred to mislead convolution-based models, or vice versa (Zhang et al. 2023b). Some researchers attribute the low cross-structure transferability to the model structure difference between transformer-based models and convolution-based ones (Naseer et al. 2021). Convolution-based models utilize convolutional layers to capture the local information of the input features in a small receptive field (Luo et al. 2016). Transformer-based models divide the input image into small patches and feed a sequence of small patches into the network. With the help of the self-attention mechanism, ViT models can learn global features at each stage of the network, which shows distinct properties from CNN models. Therefore, enhancing the adversarial transferability from transformer-based models to other transformer-based and convolution-based models is of great significance, as it facilitates finding the common defects inside transformer-based and convolution-based models in practice. However, the adversarial transferability of transformer-based models is still under-explored.
Although some works have been proposed to improve the adversarial transferability based on the special design of the transformer-based models, they still fail to thoroughly investigate the influence of the components inside the ViT models on adversarial transferability, leading to inferior performance. To overcome the drawback, we launch a motivating study by linearly down-scaling the gradient of selected components inside the transformer-based models to find their influences on adversarial transferability. Based on the motivating study, we find that the skip connections influence transferability the most and believe that back-propagating deeper gradients to generate adversarial samples can boost their transferability. Therefore, we propose the Virtual Dense Connection method (VDC). Specifically, without changing the forward pass, we first recompose the original model to add virtual dense connections. We then densely back-propagate gradients of Attention maps and Multi-layer Perceptron (MLP) blocks via virtual dense connections to generate adversarial samples. Extensive experiments show that our proposed approach significantly outperforms the state-of-the-art baselines by an 8.2% improvement in the transferability between transformer-based models and a 7.2% improvement in the cross-structure transferability.

In summary, the contributions of this paper are:
• We launch a motivating study to analyze the influence of each component inside the transformer-based models on adversarial transferability. To this end, we linearly down-scale the gradient of each component to observe the transferability changes. We find that the gradient of the skip connection influences the adversarial transferability the most.
• Based on the motivating study, we believe that back-propagating the gradient from deeper blocks to generate adversarial samples can improve their transferability. Therefore, we propose the Virtual Dense Connection method (VDC).
VDC recomposes the original network to add virtual dense connections and then back-propagates gradients via virtual dense connections to generate transferable adversarial samples.
• Extensive experiments confirm that our method can outperform the state-of-the-art attacking approaches by a margin of 8.2% on the transferability between ViT models, and 7.2% on the cross-structure transferability from ViT models to CNN models.

Related Work

Transfer-based Adversarial Attacks The transfer-based adversarial attack is one category of adversarial attacks under the black-box setting, which is built on the transferability of adversarial examples. Transferability is the phenomenon that adversarial examples crafted on a local surrogate model can also mislead the target victim model. Therefore, black-box attackers can generate adversarial examples on a fully accessible surrogate model with white-box attacking algorithms and directly transfer the examples to the target victim model. Representative white-box attacks include the Fast Gradient Sign Method (FGSM) (Goodfellow, Shlens, and Szegedy 2014) and Projected Gradient Descent (PGD) (Madry et al. 2017). However, those white-box approaches reveal limited transferability, because the adversarial examples are model-specific and fail to mislead other models. Therefore, researchers begin to boost the transferability of adversarial examples. The current state-of-the-art transfer-based attacks can be roughly classified into two trends: gradient-based approaches and input transformation-based approaches. The gradient-based approaches utilize advanced optimizers (Dong et al. 2018) or model structures (Wu et al. 2020a; Xu et al. 2023; Deng et al. 2023) to modify the gradient to escape from local optima and stabilize the update gradient. The Momentum Iterative Method (MIM) (Dong et al. 2018) combines the momentum optimizer with BIM to improve the adversarial transferability. Skip Gradient Method (SGM) (Wu et al.
2020a) utilizes the skip connection in the model structure to improve the transferability. SGM uses a decay factor to reduce the gradient from the residual module and focuses on the transferable low-level information to regularize the gradient. Input transformation-based approaches combine the gradients of the transformed images for generating transferable perturbations (Wu et al. 2021; Dong et al. 2019; Lin et al. 2019; Zhang et al. 2023a). Although those approaches have achieved state-of-the-art performance in boosting the transferability of convolution-based models, their performance drops dramatically when increasing the transferability of transformer-based models, because of the model structure difference between convolution-based models and transformer-based models. Another category of black-box attacks is query-based attacks (Andriushchenko et al. 2020; Bai et al. 2020; Wu et al. 2023). However, query-based attacks require additional queries to the victim model, which lacks efficiency in real-world scenarios. Therefore, we focus on transfer-based attacks in this paper.

Transformer-based Models The transformer is a neural network architecture utilizing the self-attention mechanism originating from the natural language processing field. Recently, the transformer design has been adapted to the computer vision field. Vision transformers (ViTs) (Dosovitskiy et al. 2020) divide the input image into a sequence of small image patches, similar to a sequence of tokens for a language model. ViTs capture the relationship between image patches based on the multi-head self-attention mechanism. Besides the basic version of the ViT, advanced ViTs have been proposed to enhance the performance of ViTs on computer vision tasks. The pooling-based vision transformer (PiT) (Heo et al. 2021) decreases the spatial dimension and increases the channel dimension with pooling to improve the model capability.
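As a concrete instance of the gradient-based line of attacks discussed above, MIM's momentum update can be sketched in a few lines (a stdlib-only toy with a made-up constant gradient standing in for the model's back-propagated gradient; not the original implementation):

```python
def mim_attack(x, grad_fn, eps, T, mu=1.0):
    """Sketch of the Momentum Iterative Method: accumulate the L1-normalized
    gradient into a momentum buffer g_acc, then take a sign step of size
    eps/T, so the final L-inf distortion stays within eps."""
    sign = lambda v: (v > 0) - (v < 0)
    step = eps / T
    g_acc = [0.0] * len(x)
    x_adv = list(x)
    for _ in range(T):
        g = grad_fn(x_adv)
        norm = sum(abs(v) for v in g) or 1.0            # L1 normalization
        g_acc = [mu * a + v / norm for a, v in zip(g_acc, g)]
        x_adv = [xi + step * sign(ai) for xi, ai in zip(x_adv, g_acc)]
    return x_adv

# Toy run: a fixed gradient direction [3, -1] over 5 steps with eps = 0.1.
x_adv = mim_attack([0.0, 0.0], grad_fn=lambda z: [3.0, -1.0], eps=0.1, T=5)
```

With a constant gradient the momentum direction never flips, so each coordinate moves monotonically by eps/T per step, illustrating why momentum stabilizes the update direction across iterations.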
The data-efficient Vision Transformer (DeiT) (Touvron et al. 2021a) deploys a distillation token to learn knowledge from CNNs. The vision-friendly transformer (Visformer) (Chen et al. 2021) transits a transformer-based model to a convolution-based model. With the development of transformer-based models, some researchers (Bhojanapalli et al. 2021; Shao et al. 2021) assess the robustness of ViTs based on white-box and black-box attacks. Other research (Mahmood, Mahmood, and Van Dijk 2021) also finds that the cross-structure transferability from transformer-based to convolution-based models is low. To understand the influence of the components in the network on adversarial transferability, we launch a motivating study to explore the influence of the gradient from each network component.

Figure 1: Illustration of the down-scaled gradients in the transformer block. All the dashed lines (red & black) depict the normally back-propagated gradients. The red dashed lines represent the selected gradients to analyze their influence on adversarial transferability.

Attacks on Transformer-based Models Researchers aim to improve transferability by exploring the unique structure of transformer-based models. Naseer et al. proposed Self-Ensemble (SE) to utilize the class token on each layer of ViTs with a shared classification head for the gradient ensemble, and the Token Refinement module (TR) to refine the class token with fine-tuning (Naseer et al. 2021). The Pay No Attention (PNA) (Wei et al. 2022) method explores the attention mechanism and skips the gradient of the attention during back-propagation to improve the transferability of adversarial examples.
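PNA's core idea — keep the forward pass intact but drop the attention term from the backward pass — can be mimicked on a toy scalar "attention" product (a hand-derived sketch with made-up numbers wa, wv, x; not the PNA authors' implementation):

```python
# Toy scalar "attention": out(x) = a(x) * v(x), where a(x) = wa*x plays the
# role of the attention weight and v(x) = wv*x the value. Full back-propagation
# uses the product rule; a PNA-style backward treats a(x) as a constant and
# drops its gradient term.
wa, wv, x = 0.5, 2.0, 1.5
a, v = wa * x, wv * x

grad_full = wa * v + a * wv   # d(out)/dx with both product-rule terms
grad_pna = a * wv             # attention-gradient term skipped, as in PNA

print(grad_full, grad_pna)
```

The forward output a * v is identical in both cases; only the backward signal changes, which is exactly the kind of forward-preserving backward modification the VDC method below also relies on.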
Although previous approaches utilize the special architecture of transformer-based models for transferable adversarial attacks, they fail to thoroughly explore the influence of each component on adversarial transferability, leading to limited improvement of transferability. Unlike previous methods, we analyze the influence of the gradient from each component in the transformer-based models on adversarial transferability with a motivating study, and then design an effective attacking method accordingly.

Motivating Study

In this motivating study, we analyze the influence of the gradient from each component in the transformer-based models on adversarial transferability. We select a representative transformer-based model, ViT-B/16 (Dosovitskiy et al. 2020), as the source model to craft adversarial examples. We then measure the average transferability of the generated adversarial examples to multiple transformer-based and convolution-based models. The details of the target models are in Section . In order to reflect the influence of each component's gradient on transferability, we follow the idea of attribution (Sundararajan, Taly, and Yan 2017). Therefore, we gradually down-scale the gradient and compute the average adversarial transferability during the down-scaling process. Specifically, we down-scale the gradient from each component using a linear sampling strategy, where we down-scale the gradient from 1 to 0 with a step size of 0.25. The transformer-based models consist of several transformer blocks. Each transformer block contains an Attention block and an MLP block.

Table 1: The average adversarial transferability (%) against ViTs, CNNs, and adversarially trained CNNs when scaling the gradients of different components in ViT-B/16.

Block      Component        ViT    CNN    Adv-CNN
Attention  QKV              45.4   24.1   16.6
Attention  Attention Map    64.9   35.8   24.5
Attention  Skip Connection  19.0   13.3   8.8
MLP        MLP Layer        44.6   24.5   17.3
MLP        Skip Connection  17.5   11.7   7.1
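The linear down-scaling protocol can be mimicked on a toy residual branch, whose gradient is scaled from 1 to 0 in steps of 0.25 while the skip path stays fixed (a hand-derived sketch with a made-up branch weight w = 0.8; not the study's actual instrumentation):

```python
# Toy residual block y = x + w*x: the skip path contributes 1 to dy/dx and
# the branch contributes w. Down-scaling the branch gradient by `scale`,
# as in the motivating study, yields dy/dx = 1 + scale*w.
def downscaled_grad(w, scale):
    return 1.0 + scale * w

for step in range(5):              # linear sampling: scale = 1.0, 0.75, ..., 0.0
    scale = 1.0 - 0.25 * step
    print(f"scale={scale:.2f} -> dy/dx={downscaled_grad(0.8, scale):.2f}")
```

In the real study the scaled quantity is a tensor-valued gradient of one ViT component, but the same principle applies: the forward pass is untouched while one backward term is attenuated, and transferability is averaged over the sampled scales.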
The Attention block first computes the QKV values and the attention map by the product of the query and key. Then the Attention block outputs the multiplication of the attention map and the QKV value. The MLP block passes the input through fully connected layers. Both the input and the output of the Attention block and the MLP block are connected with a skip connection. Therefore, the components we select are the QKV, the attention map, the skip connection from the Attention block, the MLP layers, and the skip connection from the MLP block, as shown in Figure 1. We gradually down-scale the gradient of a selected component, fixing the other back-propagated gradients, and compute the average transferability during the down-scaling process. As we can see from Table 1, the adversarial transferability drops dramatically with the reduction of the gradient from the skip connections in the Attention block or the MLP block. Thus, we believe that the skip connection inside the transformer-based models influences the adversarial transferability the most. This phenomenon implies that the gradient from the deeper block through the skip connection enhances the adversarial transferability, which motivates us to back-propagate more gradients from deeper blocks to improve the adversarial transferability.

Method

Preliminary We first set up some notations adopted in this paper. We regard a DNN image classifier as f(·). Given a sequence of image patches x_p = {x_p^1, x_p^2, · · · , x_p^N} divided from the original image x with a shape of H × W × C, f(x) is the output of the image classifier. H, W, and C are the original image's height, width, and channel number, respectively. x_p^i denotes the i-th patch of the original image. The patch shape is P × P × C, where P is the predefined patch size.

Figure 2: Illustration of model reparametrization by adding virtual dense connections.
The inputs of the current blocks are propagated to all later blocks via virtual dense connections. We also modify the weights of the original block to guarantee that the input to the next block remains the same, keeping the forward pass of the original model.

Moreover, the total patch number N of an image is N = (H · W)/P². We set x_adv as the adversarial example of image x with true label y. Adversarial examples satisfy two conditions: f(x_adv) ≠ f(x) and ∥x − x_adv∥_p < ϵ. The first condition implies that adversarial examples can mislead the image classifier into a wrong prediction. The second condition guarantees that the difference between the adversarial example and the original image is smaller than a budget ϵ, so it is hard for a human to detect the distortion. ∥·∥_p represents the L_p norm, and we measure the distortion by the L_∞ norm in this paper, which is widely adopted in the literature (Dong et al. 2018).

Model Recomposition In order to utilize the gradient from deeper blocks, one intuitive idea is to directly back-propagate the gradient through skip connections. However, there are no long-range connections in transformer-based models. Thus, we propose to recompose the original model to add additional virtual connections. As shown in the upper part of Figure 2, we suppose there are n blocks in the network, and the output of block_i is z_i = f_i(z_{i−1}) with input z_{i−1}. Therefore, the output of the original network is z_n = f_n(z_{n−1}) = · · · = f_n(f_{n−1}(· · · f_2(f_1(x)) · · · )). Then, without changing the forward pass of the network, we aim to recompose the original model to add virtual dense connections. As shown in the lower part of Figure 2, we add virtual dense connections so that the output of each block is densely connected to the input of all the later blocks.
Therefore, the additional input to block_{i+1} through virtual dense connections is x + z′_1 + · · · + z′_{i−1}, because we densely propagate all the outputs of the previous blocks (block_1 to block_{i−1}) and the input x to the input of block_{i+1}. Since we should keep the original forward pass of the model, we need to guarantee that the input to each block of the recomposed model remains the same. Therefore, the function of block_i is changed from f_i(z_{i−1}) to f′_i(z_{i−1}) = f_i(z_{i−1}) − x − Σ_{k=1}^{i−1} z′_k. As a result, we recompose the original model to add virtual dense connections without changing the functionality of the original model. The transformation facilitates back-propagating more gradients from deep blocks to shallow blocks.

Virtual Dense Connection Method Based on the observation in the motivating study, we think that back-propagating more gradients from deeper blocks in the network can enhance adversarial transferability. Therefore, our Virtual Dense Connection method (VDC) back-propagates more gradients through virtual dense connections after model recomposition. First, we denote the gradient of block_i as g_i = ∂f_i(z_{i−1})/∂z_{i−1}. Then the gradient of the recomposed block_i is:

g′_i = ∂f′_i(z_{i−1})/∂z_{i−1} = ∂(f_i(z_{i−1}) − x − Σ_{k=1}^{i−1} z′_k)/∂z_{i−1}. (1)

z′_{i−1} is the output of the recomposed block_{i−1}, and z_{i−1} is the input to block_i. In the recomposed model with virtual dense connections, we have:

z_{i−1} = x + Σ_{k=1}^{i−1} z′_k. (2)

Therefore, the gradient g′_i = ∂(f_i(z_{i−1}) − z_{i−1})/∂z_{i−1} = g_i − 1, where 1 is the identity matrix. To craft the adversarial perturbation, we compute the gradient of the loss with respect to the input x of the recomposed model:

∂loss/∂x = (∂loss/∂z_n)(∂z_n/∂x)
= (∂loss/∂z_n) ∂(z′_n + x + Σ_{k=1}^{n−1} z′_k)/∂x
= (∂loss/∂z_n) ∂(f′_n(z_{n−1}) + x + Σ_{k=1}^{n−1} z′_k)/∂x
= (∂loss/∂z_n) ∂(f′_n(z_{n−1}) + z_{n−1})/∂x
= (∂loss/∂z_n) ∂(f′_n(z_{n−1}) + z_{n−1})/∂z_{n−1} · ∂z_{n−1}/∂x
= (∂loss/∂z_n)(g′_n + 1) ∂z_{n−1}/∂x
= · · · = (∂loss/∂z_n) Π_{k=1}^{n} (g′_k + 1).
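Both claims above — that the recomposition leaves the forward pass unchanged, and that the end-to-end gradient factors as Π_k (g′_k + 1) with g′_k = g_k − 1 — can be checked numerically on a scalar chain (a stdlib-only sketch with arbitrary toy blocks standing in for transformer blocks; not the paper's code):

```python
import math

# Arbitrary smooth toy blocks f_i standing in for transformer blocks.
blocks = [
    lambda z: math.tanh(1.3 * z),
    lambda z: math.sin(z) + 0.2 * z,
    lambda z: 0.5 * z * z,
]

def forward_original(x):
    z = x
    for f in blocks:
        z = f(z)
    return z

def forward_recomposed(x):
    """Each block input arrives as x + sum of earlier z'_k (Eq. 2), and each
    recomposed block computes f'_i(z) = f_i(z) - x - sum_{k<i} z'_k, so the
    forward pass is unchanged and the final dense sum reproduces z_n."""
    outs = []
    for f in blocks:
        z_in = x + sum(outs)
        outs.append(f(z_in) - x - sum(outs))
    return x + sum(outs)

x = 0.7
assert abs(forward_original(x) - forward_recomposed(x)) < 1e-12

# Gradient factorization of Eq. (3): dz_n/dx = prod_k g_k = prod_k (g'_k + 1),
# checked against a central finite difference of the whole chain.
h = 1e-6
def local_grads(x):
    gs, z = [], x
    for f in blocks:
        gs.append((f(z + h) - f(z - h)) / (2 * h))   # g_k evaluated at z_{k-1}
        z = f(z)
    return gs

g_prime = [gk - 1.0 for gk in local_grads(x)]        # g'_k = g_k - 1
end_to_end = (forward_original(x + h) - forward_original(x - h)) / (2 * h)
prod = 1.0
for gp in g_prime:
    prod *= gp + 1.0
assert abs(end_to_end - prod) < 1e-4
```

In the real model the g_k are Jacobians and 1 is the identity matrix, but the scalar case already exercises the algebra of the recomposition.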
(3)

To back-propagate more gradients from deeper blocks, VDC reduces the gradient inside the recomposed blocks so that more gradients are back-propagated from deeper blocks via virtual dense connections. We utilize a factor 0 < λ < 1 to reduce the gradient of each recomposed block. Therefore, the updated gradient on the input is:

Grad = (∂loss/∂z_n) Π_{k=1}^{n} (λg′_k + 1)
= (∂loss/∂z_n) Π_{k=1}^{n} (λ(g_k − 1) + 1)
= (∂loss/∂z_n) Π_{k=1}^{n} (λg_k + (1 − λ)1). (4)

We divide Grad by λⁿ for simplicity and denote γ = (1 − λ)/λ. Then the gradient is simplified to:

Grad = (∂loss/∂z_n) Π_{k=1}^{n} (g_k + γ1). (5)

Nevertheless, it is hard to compute Grad because we cannot directly obtain the gradient g_k inside each block. The computation of g_k is expensive, requiring O(H × W × C). Instead, we can acquire the gradient of the loss with respect to the input of each block in O(1), which we denote as Grad_i = ∂loss/∂z_{i−1}. We expand the terms in Grad and denote their patterns by the expansion of g_k or 1. For example, we denote the term (∂loss/∂z_n)(g_k)(γ1) · · · (γ1) by the pattern [g_k, 1, · · · , 1]. For the purpose of computing Grad in O(1), we only consider the terms in Grad with one consecutive skip, which means that there is only one consecutive substring of 1 in the pattern; the previous example is such a one-consecutive-skip term. Under the one-consecutive-skip assumption, Grad can be approximated by fusing Grad_i with all Grad_j, where i < j ≤ n. Therefore, the combined gradient ConGrad_i of block_i can be expressed as follows:

ConGrad_i = Grad_i + s · Σ_{j=i+1}^{n} Grad_j · γ^{n−j+1}, (6)

where we set a scaling factor 0 < s < 1 to control the ratio of the back-propagated gradients from virtually connected deeper blocks. As a result, under the approximation assumption, the gradient from deeper blocks can be easily back-propagated in the backward pass, and Grad is computed in O(1).
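Eq. (6) is straightforward to implement once the per-block input gradients Grad_i are collected; a stdlib-only sketch with made-up scalar gradients (the real quantities are tensors of the input shape):

```python
def con_grad(grads, s, gamma):
    """Combine per-block input gradients as in Eq. (6):
    ConGrad_i = Grad_i + s * sum_{j>i} Grad_j * gamma^(n-j+1).
    `grads[i0]` holds Grad_{i0+1} for blocks 1..n (0-based list)."""
    n = len(grads)
    combined = []
    for i0 in range(n):
        # block j = j0+1 contributes with weight gamma^(n-j+1) = gamma^(n-j0)
        deeper = sum(grads[j0] * gamma ** (n - j0) for j0 in range(i0 + 1, n))
        combined.append(grads[i0] + s * deeper)
    return combined

# Made-up scalar gradients for a 3-block toy model.
out = con_grad([1.0, 1.0, 1.0], s=0.5, gamma=0.5)
# The deepest block receives no deeper contribution; shallower blocks
# accumulate geometrically discounted gradients from the blocks above them.
assert out == [1.375, 1.25, 1.0]
```

Each ConGrad_i costs one weighted sum over the already-available Grad_j, which is why the combination adds no extra backward passes.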
Finally, VDC updates the target image with the sign of $Grad$ by a small step size $\epsilon' = \frac{\epsilon}{T}$ in each iteration, where $T$ is the iteration number. The update rule is formulated as:
$$x^{adv}_{t+1} = x^{adv}_{t} + \epsilon' \cdot \mathrm{sgn}\{Grad\}. \tag{7}$$

Implementation
We aim to implement our proposed VDC on transformer-based models, taking the special designs of ViTs into consideration. We describe the components we select for applying VDC. The implementation of VDC on transformer-based models is illustrated in Figure 3.

Attention Map. The attention map is the core functionality of transformer-based models, computing the relationships between image patches. Although the receptive field of a transformer-based model is the whole image, the deep blocks capture more high-level semantics than the shallow blocks (Dosovitskiy et al. 2020). The gradients of the attention maps from deep blocks are meaningful because they are less prone to overfitting the surrogate model. Therefore, we deploy VDC on the Attention blocks to densely connect their attention maps.

MLP Block. The MLP block is another indispensable component of transformer-based models. Unlike the Attention block, the MLP block aggregates the channel-wise information of each patch. The skip connection of the MLP block also shows the greatest influence on adversarial transferability in the motivating study. Therefore, we also apply VDC to the MLP blocks.

Figure 3: Illustration of the Virtual Dense Connection method. The dashed lines are the backward gradients through the recomposed virtual connections on the Attention maps (red) and MLP blocks (green), which back-propagate more gradients from the deeper blocks.

Comparison with Previous Approaches
We recompose the original model without changing the forward functionality of the original transformer-based models, and only modify the backward path through virtual dense connections to the Attention maps and MLP blocks.
Previous ViT attack methods explore the structure of transformer-based models to boost adversarial transferability (Wei et al. 2022). Nevertheless, they fail to thoroughly investigate the contribution of each component in transformer-based models. We conduct a motivating study to explore the benefit of the skip connections and of the gradients from deeper blocks. SGM (Wu et al. 2020a) assigns a decay factor to the gradients of the residual modules to use more gradients from existing skip connections. In contrast, VDC utilizes model recomposition to construct virtual dense connections without changing the forward pass. Therefore, SGM can only be applied to models with skip connections, while VDC does not rely on such specific model structures. Moreover, VDC can back-propagate more gradients from deeper blocks through virtual dense connections. For efficiency, we implement VDC under the one-consecutive-skip approximation to compute the update gradient in $O(1)$.

Experiments
In this section, we first explain our experimental setup. Then we compare our approach with state-of-the-art adversarial attacks against transformer-based models and convolution-based models to demonstrate its effectiveness in improving the transferability between transformer-based models as well as the cross-structure transferability. Finally, we conduct an ablation study on the two components of VDC and on the hyper-parameters to better understand the proposed approach.

Experimental Setup
Our experiments mainly focus on the ImageNet dataset (Russakovsky et al. 2015) to attack image classification models, including transformer-based and convolution-based models. For fair comparisons, we follow the protocol in the literature (Wei et al. 2022) for the models and dataset.

Dataset. To align with previous work, we follow the baseline method (Wei et al. 2022) and randomly sample 1000 images of different classes from the ILSVRC 2012 validation set (Russakovsky et al. 2015).
We ensure that almost all of the selected images can be correctly classified by the target models.

Table 2: The attack success rates (%) against eight models by various transfer-based attacks. The best results are in bold.

Model        Attack  ViT-B/16  PiT-B  DeiT-B  Visformer-S  CaiT-S/24  TNT-S  LeViT-256  ConViT-B
ViT-B/16     MIM     100.0     34.5   64.3    36.5         64.1       50.2   33.8       66.0
             SE       99.9     31.8   68.3    40.5         67.4       59.3   43.8       63.7
             SGM     100.0     34.3   72.8    38.3         72.2       59.4   39.8       75.0
             PNA     100.0     45.2   78.6    47.7         78.6       62.8   47.1       79.5
             VDC     100.0     54.8   85.8    57.4         84.1       74.8   58.1       85.9
PiT-B        MIM      24.7    100.0   33.9    44.5         34.7       43.0   38.3       37.8
             SE       31.7     99.8   40.9    52.1         42.2       52.6   47.3       44.9
             SGM      30.3    100.0   44.3    62.3         47.7       62.6   56.4       47.1
             PNA      47.9    100.0   62.4    74.6         62.6       70.6   67.3       61.7
             VDC      57.7    100.0   74.4    83.1         72.8       83.4   79.4       75.1
DeiT-B       MIM      86.3     68.4  100.0    71.9         97.7       89.8   68.3       98.3
             SE       91.6     93.7   99.9    82.7         98.4       94.6   80.7       97.8
             SGM      88.3     65.7  100.0    73.1         97.7       92.3   74.3       97.4
             PNA      91.0     74.2  100.0    82.5         98.1       94.4   80.1       98.4
             VDC      91.8     79.9  100.0    84.9         98.6       95.5   85.5       98.8
Visformer-S  MIM      28.1     50.3   36.9    99.9         41.0       51.9   49.4       39.6
             SE       35.2     57.0   46.2    99.6         49.4       59.1   56.4       45.4
             SGM      15.5     39.6   25.9   100.0         29.5       45.4   41.3       26.3
             PNA      35.4     61.5   51.0   100.0         54.7       66.3   64.6       50.7
             VDC      43.2     72.7   63.9   100.0         65.6       76.9   77.1       58.3

Models. We evaluate the transferability of adversarial examples of ViTs from two perspectives. We first assess the transferability between transformer-based models. We select four different transformer-based models as the local surrogate models to attack eight target transformer-based models, including the four surrogate models. The four surrogate models are ViT-B/16 (Dosovitskiy et al. 2020), PiT-B (Heo et al. 2021), DeiT-B (Touvron et al. 2021a), and Visformer-S (Chen et al. 2021). The additional four target models are CaiT-S/24 (Touvron et al. 2021b), TNT-S (Han et al. 2021), LeViT-256 (Graham et al. 2021), and ConViT-B (d'Ascoli et al. 2021).
We then evaluate the cross-structure transferability between transformer-based and convolution-based models. We choose two kinds of convolution-based models as the target models: normally trained undefended models and adversarially trained defended models. We select four undefended convolution-based models, including Inception-v3 (Inc-v3) (Szegedy et al. 2016), Inception-v4 (Inc-v4) (Szegedy et al. 2017), Inception-ResNet-v2 (IncRes-v2) (Szegedy et al. 2017), and Resnet-v2-152 (Res-v2) (He et al. 2016a,b). We test three adversarially trained models (Tramèr et al. 2017), including Inc-v3ens3, Inc-v3ens4, and IncRes-v2adv. Besides, we evaluate the cross-structure transferability from convolution-based models to transformer-based models. We select Resnet-v2, Densenet121 (Dense-121) (Huang et al. 2017), and Mobilenetv3-small-075 (Mobile-v3) (Howard et al. 2019) as the surrogate models and test the attack success rates on the eight transformer-based models.

Baseline Methods. We choose MIM as our baseline approach, because all the baseline methods utilize the momentum optimizer to enhance transferability (Dong et al. 2018). To show the advantages of our proposed VDC, we select SGM (Wu et al. 2020a) as a competitive baseline, which utilizes the skip-connection structure inside the network with a decay factor to reduce the gradient from the residual modules. To demonstrate the state-of-the-art performance of our method, we compare it with two state-of-the-art attack algorithms against transformer-based models: PNA (Wei et al. 2022) and SE (Naseer et al. 2021). PNA leverages the attention structure in the transformer block to craft transferable adversarial examples, and SE deploys a self-ensemble mechanism to augment the gradient. We do not compare VDC with TR (Naseer et al. 2021), because TR requires additional training data and computation resources for fine-tuning on ImageNet, which would make the performance comparison unfair.

Evaluation Metric.
We measure the adversarial transferability based on the attack success rate, i.e., the ratio of the adversarial examples that successfully mislead the target model among all the generated adversarial examples.

Hyper-parameters. We follow the hyper-parameter settings of the baseline approaches in their implementations for a fair comparison. Following the previous setting in the literature (Wei et al. 2022), we set the budget $\epsilon = 16$, with image pixel values ranging from 0 to 255. We set the number of iterations $T = 10$, so the step length is $\alpha = \frac{\epsilon}{T} = 1.6$. Since all the baselines utilize the momentum optimizer, we set the decay factor $\mu = 1.0$. We resize all images to $224 \times 224$ as the input and set the patch size to 16 for the inputs of the transformer-based models. For our proposed VDC, we set the scaling factor and the decay factor to 0.1 and 0.5, respectively. Some transformer-based models keep the same resolution throughout the whole network, while others use different resolutions in different stages. Therefore, for the networks keeping the same resolution, we virtually connect all the blocks during back-propagation; otherwise, we only virtually connect the blocks with the same resolution.
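With these hyper-parameters, the attack loop of Equation (7) and the success-rate metric can be sketched as follows (a generic iterative sign-update loop for illustration; in VDC the gradient would come from the recomposed backward pass, and `grad_fn` here is a placeholder):

```python
import numpy as np

def sign_attack(x, grad_fn, eps=16.0, T=10, lo=0.0, hi=255.0):
    # Iterative update of Eq. (7) with step size eps' = eps / T (16/10 = 1.6).
    step = eps / T
    x_adv = x.astype(float).copy()
    for _ in range(T):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay inside the L_inf budget
        x_adv = np.clip(x_adv, lo, hi)             # stay a valid image
    return x_adv

def attack_success_rate(adv_preds, true_labels):
    # Fraction of adversarial examples that mislead the target model.
    return float(np.mean(np.asarray(adv_preds) != np.asarray(true_labels)))

x = np.full((2, 2), 100.0)
adv = sign_attack(x, grad_fn=lambda z: np.ones_like(z))
assert np.max(np.abs(adv - x)) <= 16.0
assert attack_success_rate([1, 2, 3, 4], [1, 0, 0, 0]) == 0.75
```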
Table 3: The attack success rates (%) against seven models by various transfer-based attacks. The best results are in bold.

Model        Attack  Inc-v3  Inc-v4  IncRes-v2  Res-v2  Inc-v3ens3  Inc-v3ens4  IncRes-v2adv
ViT-B/16     MIM     31.7    28.6    26.1       29.4    22.3        19.8        16.5
             SE      40.8    40.0    31.5       38.8    31.0        30.5        23.8
             SGM     29.5    25.9    21.6       26.0    17.6        17.0        13.9
             PNA     42.7    37.5    35.3       39.5    29.0        27.3        22.6
             VDC     49.3    44.4    39.3       44.8    33.8        33.8        27.8
PiT-B        MIM     36.3    34.8    27.4       29.6    19.0        18.3        14.1
             SE      46.4    41.2    35.0       39.4    25.3        22.3        19.5
             SGM     39.8    35.4    29.8       30.8    18.1        16.4        11.5
             PNA     59.3    56.3    49.8       53.0    33.3        32.0        25.5
             VDC     68.2    60.3    57.0       59.5    42.2        39.8        32.3
DeiT-B       MIM     56.1    50.9    47.9       52.9    40.8        38.7        32.6
             SE      63.2    57.6    59.7       63.1    48.5        44.3        38.6
             SGM     52.1    45.8    43.2       46.9    31.8        31.5        27.2
             PNA     66.5    60.7    60.9       64.0    49.3        46.1        40.8
             VDC     69.9    63.0    63.8       65.8    53.3        52.0        45.3
Visformer-S  MIM     44.5    42.5    36.6       39.6    24.4        20.5        16.6
             SE      55.5    55.0    44.9       48.3    30.9        26.6        24.4
             SGM     33.1    32.7    24.6       26.2    11.7         9.4         6.9
             PNA     55.9    54.6    46.0       51.7    29.3        26.2        21.1
             VDC     71.9    69.8    60.9       65.0    40.8        34.8        28.3

Experimental Results
We present the experimental results on the adversarial transferability of our approach compared with the baselines under different attack settings. We craft adversarial examples with our approach and the other baselines on the surrogate models and transfer them to the target models. We measure the transferability between transformer-based models and the cross-structure transferability from transformer-based models to convolution-based models.

We first assess the transferability between transformer-based models. We observe from Table 2 that our proposed VDC achieves a 100% white-box attack success rate and outperforms all the baselines by a large margin of 8.2% on average under the black-box setting.
Compared with SGM, which utilizes the skip-connection structure in the network, our approach deploys virtual dense connections to the deeper blocks, yielding significant improvements in transferability. This result validates our assumption in the motivating study that back-propagating more gradients from deeper blocks can boost transferability, and shows the advantage of adding virtual dense connections in the backward path. Compared with PNA and SE, which exploit different architectural components of transformer-based models to enhance transferability, our approach adds more connections virtually based on model recomposition. The remarkable performance also confirms the effectiveness of the architectural components we select.

Moreover, we evaluate the cross-structure transferability from transformer-based to convolution-based models. As shown in Table 3, the cross-structure transferability drops compared with the transferability between transformer-based models, due to the structural difference between the models. Compared with the baselines, our proposed VDC still enhances the cross-structure transferability by over 7.2% on average, validating the superiority of the proposed VDC.

Table 4: The average adversarial transferability (%) against ViTs, CNNs, and adversarially trained CNNs when using dense connections on different components of ViT-B/16.

Component              ViT   CNN   Adv-CNN
None                   56.2  29.0  19.5
MLP                    66.0  34.4  24.1
Attention              66.8  37.0  24.6
Attention + MLP (VDC)  75.1  44.5  31.8

Ablation Study
We conduct an ablation study to explore the contribution of the components in VDC by attacking the ViT-B/16 model. We generate adversarial examples with different combinations of the components in VDC and measure the transferability. The results are shown in Table 4. We can see that densely connecting both the MLP blocks and the Attention maps can enhance adversarial transferability.
The transferability improvement from densely connecting the MLP blocks is slightly inferior to that from the Attention maps, because the attention map is the core functionality of transformer-based models.

Conclusion
In this paper, we start with a motivating study and conclude that back-propagating gradients from deeper blocks can enhance transferability. We propose the Virtual Dense Connection method (VDC) to back-propagate more gradients from deeper blocks. Specifically, we recompose the original model to add virtual dense connections without changing the forward pass. Then we back-propagate the gradients of deeper Attention maps and MLP blocks via virtual dense connections when generating adversarial samples. Extensive experiments validate the superiority of our approach over the state-of-the-art approaches.

Acknowledgments
The work described in this paper was supported by the National Natural Science Foundation of China (Grant No. 62206318) and the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14206921 of the General Research Fund).

References
Andriushchenko, M.; Croce, F.; Flammarion, N.; and Hein, M. 2020. Square attack: a query-efficient black-box adversarial attack via random search. In European Conference on Computer Vision, 484–501. Springer.
Bai, Y.; Zeng, Y.; Jiang, Y.; Wang, Y.; Xia, S.-T.; and Guo, W. 2020. Improving query efficiency of black-box adversarial attack. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXV 16, 101–116. Springer.
Bhojanapalli, S.; Chakrabarti, A.; Glasner, D.; Li, D.; Unterthiner, T.; and Veit, A. 2021. Understanding robustness of transformers for image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10231–10241.
Chen, Z.; Xie, L.; Niu, J.; Liu, X.; Wei, L.; and Tian, Q. 2021. Visformer: The vision-friendly transformer.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 589–598.
Deng, Y.; Wu, W.; Zhang, J.; and Zheng, Z. 2023. Blurred-Dilated Method for Adversarial Attacks. In Thirty-seventh Conference on Neural Information Processing Systems.
Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9185–9193.
Dong, Y.; Pang, T.; Su, H.; and Zhu, J. 2019. Evading defenses to transferable adversarial examples by translation-invariant attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4312–4321.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
d'Ascoli, S.; Touvron, H.; Leavitt, M. L.; Morcos, A. S.; Biroli, G.; and Sagun, L. 2021. ConViT: Improving vision transformers with soft convolutional inductive biases. In International Conference on Machine Learning, 2286–2296. PMLR.
Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Graham, B.; El-Nouby, A.; Touvron, H.; Stock, P.; Joulin, A.; Jégou, H.; and Douze, M. 2021. LeViT: a vision transformer in ConvNet's clothing for faster inference. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 12259–12269.
Han, K.; Xiao, A.; Wu, E.; Guo, J.; Xu, C.; and Wang, Y. 2021. Transformer in transformer. Advances in Neural Information Processing Systems, 34: 15908–15919.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016a. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016b.
Identity mappings in deep residual networks. In European conference on computer vision, 630–645. Springer. Heo, B.; Yun, S.; Han, D.; Chun, S.; Choe, J.; and Oh, S. J. 2021. Rethinking spatial dimensions of vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11936–11945. Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. 2019. Searching for mobilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision, 1314–1324. Huang, G.; Liu, Z.; Van Der Maaten, L.; and Weinberger, K. Q. 2017. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4700–4708. Lin, J.; Song, C.; He, K.; Wang, L.; and Hopcroft, J. E. 2019. Nesterov accelerated gradient and scale invariance for adversarial attacks. arXiv preprint arXiv:1908.06281. Luo, W.; Li, Y.; Urtasun, R.; and Zemel, R. 2016. Understanding the effective receptive field in deep convolutional neural networks. Advances in neural information processing systems, 29. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Mahmood, K.; Mahmood, R.; and Van Dijk, M. 2021. On the robustness of vision transformers to adversarial examples. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7838–7847. Naseer, M.; Ranasinghe, K.; Khan, S.; Khan, F. S.; and Porikli, F. 2021. On improving adversarial transferability of vision transformers. arXiv preprint arXiv:2106.04169. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. 2015. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3): 211–252. Shao, R.; Shi, Z.; Yi, J.; Chen, P.-Y.; and Hsieh, C.-J. 2021. 
On the adversarial robustness of vision transformers. arXiv preprint arXiv:2103.15670.
Sundararajan, M.; Taly, A.; and Yan, Q. 2017. Axiomatic attribution for deep networks. In International Conference on Machine Learning, 3319–3328. PMLR.
Szegedy, C.; Ioffe, S.; Vanhoucke, V.; and Alemi, A. A. 2017. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Thirty-First AAAI Conference on Artificial Intelligence.
Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2818–2826.
Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; and Jégou, H. 2021a. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, 10347–10357. PMLR.
Touvron, H.; Cord, M.; Sablayrolles, A.; Synnaeve, G.; and Jégou, H. 2021b. Going deeper with image transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 32–42.
Tramèr, F.; Kurakin, A.; Papernot, N.; Goodfellow, I.; Boneh, D.; and McDaniel, P. 2017. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems.
Wei, Z.; Chen, J.; Goldblum, M.; Wu, Z.; Goldstein, T.; and Jiang, Y.-G. 2022. Towards transferable adversarial attacks on vision transformers. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2668–2676.
Wu, D.; Wang, Y.; Xia, S.-T.; Bailey, J.; and Ma, X. 2020a. Skip connections matter: On the transferability of adversarial examples generated with ResNets. arXiv preprint arXiv:2002.05990.
Wu, W.; Su, Y.; Chen, X.; Zhao, S.; King, I.; Lyu, M. R.; and Tai, Y.-W. 2020b. Boosting the transferability of adversarial samples via attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1161–1170. Wu, W.; Su, Y.; Chen, X.; Zhao, S.; King, I.; Lyu, M. R.; and Tai, Y.-W. 2020c. Towards global explanations of convolutional neural networks with concept attribution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8652–8661. Wu, W.; Su, Y.; Lyu, M. R.; and King, I. 2021. Improving the transferability of adversarial samples with adversarial transformations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9024–9033. Wu, W.; Xu, H.; Zhong, S.; Lyu, M. R.; and King, I. 2019. Deep Validation: Toward detecting real-world corner cases for deep neural networks. In IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 125– 137. IEEE. Wu, W.; Zhang, J.; Wei, V. J.; Chen, X.; Zheng, Z.; King, I.; and Lyu, M. R. 2023. Practical and Efficient Model Extraction of Sentiment Analysis APIs. In IEEE/ACM 45th International Conference on Software Engineering (ICSE), 524–536. IEEE. Xu, Z.; Gu, Z.; Zhang, J.; Cui, S.; Meng, C.; and Wang, W. 2023. Backpropagation Path Search On Adversarial Transferability. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4663–4673. Zhang, J.; Huang, J.-t.; Wang, W.; Li, Y.; Wu, W.; Wang, X.; Su, Y.; and Lyu, M. R. 2023a. Improving the Transferability of Adversarial Samples by Path-Augmented Method. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8173–8182. Zhang, J.; Huang, Y.; Wu, W.; and Lyu, M. R. 2023b. Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16415–16424. Zhang, J.; Huang, Y.-C.; Wu, W.; and Lyu, M. R. 2023c. 
Towards semantics- and domain-aware adversarial attacks. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 536–544.
Zhang, J.; Wu, W.; Huang, J.-t.; Huang, Y.; Wang, W.; Su, Y.; and Lyu, M. R. 2022. Improving Adversarial Transferability via Neuron Attribution-Based Attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14993–15002.
Zhang, J.; Xu, Z.; Cui, S.; Meng, C.; Wu, W.; and Lyu, M. R. 2023d. On the Robustness of Latent Diffusion Models. arXiv preprint arXiv:2306.08257.
Zhang, Z.; Lu, X.; Cao, G.; Yang, Y.; Jiao, L.; and Liu, F. 2021. ViT-YOLO: Transformer-based YOLO for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2799–2808.
Zheng, S.; Lu, J.; Zhao, H.; Zhu, X.; Luo, Z.; Wang, Y.; Fu, Y.; Feng, J.; Xiang, T.; Torr, P. H.; et al. 2021. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6881–6890.
Curvature-Invariant Adversarial Attacks for 3D Point Clouds

Jianping Zhang1, Wenwei Gu1, Yizhan Huang1, Zhihan Jiang1, Weibin Wu2*, Michael R. Lyu1
1Department of Computer Science and Engineering, The Chinese University of Hong Kong
2School of Software Engineering, Sun Yat-sen University
{jpzhang, wwgu21, yzhuang22, zhjiang22, lyu}@cse.cuhk.edu.hk, wuwb36@mail.sysu.edu.cn

Abstract
Imperceptibility is one of the crucial requirements for adversarial examples. Previous adversarial attacks on 3D point cloud recognition suffer from noticeable outliers, resulting in low imperceptibility. We think that these drawbacks can be alleviated by taking the local curvature of the point cloud into consideration. Existing approaches introduce the local geometry distance into the attack objective function. However, their definition of the local geometry distance neglects the different perceptibility of distortions along different directions. In this paper, we aim to enhance the imperceptibility of adversarial attacks on 3D point cloud recognition by better preserving the local curvature of the original 3D point clouds. To this end, we propose the Curvature-Invariant Method (CIM), which directly regularizes the back-propagated gradient during the generation of adversarial point clouds based on two assumptions. Specifically, we first decompose the back-propagated gradients into the tangent plane and the normal direction. Then we directly reduce the gradient along the large-curvature direction on the tangent plane and only keep the gradient along the negative normal direction. Comprehensive experimental comparisons confirm the superiority of our approach. Notably, our strategy achieves 7.2% and 14.5% improvements in the Hausdorff distance and Gaussian curvature measurements of imperceptibility, respectively.

Introduction
Deep neural networks (DNNs) dominate state-of-the-art solutions for a variety of computer vision tasks, including image classification (Russakovsky et al.
2019), object detection (Lin et al. 2014), and 3D point cloud recognition (Yi et al. 2016). 3D point cloud recognition models are widely deployed in many safety-critical real-world systems, such as autonomous driving and medical diagnosis systems (Dong, Wang, and Abbas 2021). However, recent research shows that DNNs are vulnerable to adversarial attacks (Zhang et al. 2023b,a, 2022), which inject imperceptible noise into the original input to mislead DNN models. This raises security concerns about the deployment of DNN applications (Zhang et al. 2023c,d; Wu et al. 2023). Similarly, 3D point cloud recognition models are also susceptible to adversarial attacks (Xiang, Qi, and Li 2019). Therefore, it is indispensable to design adversarial attack approaches that detect the deficiencies of 3D point cloud recognition models before their deployment in real-world applications.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Visualization of the adversarial point clouds generated by various attack algorithms on five randomly selected classes. The adversarial examples are generated on the PointNet model.

3D point cloud adversarial attacks work by adding, deleting, or shifting points in the point cloud (Xiang, Qi, and Li 2019). Among these schemes, shifting points, i.e., changing the coordinates of the points, attracts the most attention from researchers (Hamdi et al. 2020). Similar to 2D adversarial images (Wu et al. 2021, 2020b), 3D adversarial point clouds are also usually crafted with the guidance of the back-propagated gradients. For example, attackers can adapt the Fast Gradient Sign Method (FGSM) to generate 3D adversarial point clouds (Goodfellow, Shlens, and Szegedy 2014).
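As a toy illustration of such an FGSM-style adaptation (our own sketch; the budget value is made up, and the gradient would come from back-propagating a classification loss through the recognition model):

```python
import numpy as np

def fgsm_point_shift(points, grad, eps=0.05):
    # One-step FGSM adaptation for point clouds: shift every coordinate
    # by eps in the direction of the loss-gradient sign.
    # `points` and `grad` are (n, 3) arrays; eps is a hypothetical budget.
    return points + eps * np.sign(grad)

pts = np.zeros((4, 3))
adv = fgsm_point_shift(pts, grad=np.ones((4, 3)))
assert np.max(np.abs(adv - pts)) <= 0.05   # every point stays within the budget
```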
Similar to adversarial attacks on 2D images, one of the core challenges of 3D point cloud adversarial attacks is imperceptibility, which requires that the modification of the point cloud be unnoticeable to humans. Inspired by the $L_p$ constraint of 2D image adversarial attacks, where the $p$-norm of the perturbation is bounded by a budget $\epsilon$ (Deng et al. 2023; Wu et al. 2020a), 3D point cloud adversarial attacks adopt a similar idea: the modification of the points' coordinates should satisfy the $L_p$ constraint.

However, satisfying this hard constraint is not enough, since the resultant adversarial point clouds can still contain noisy outliers, which destroy the local geometry of the original point clouds and deteriorate the visual quality of the adversarial point clouds, as shown in Figure 1. To improve the imperceptibility of adversarial point clouds, some researchers combine the misclassification objective function with the Mean Square Error (MSE) to make small $L_2$ perturbations on the target point clouds. Some works (Liu and Hu 2022; Wen et al. 2020) improve on the MSE loss with other, more advanced distance measurements, such as the Chamfer distance (Fan, Su, and Guibas 2017) and the Hausdorff distance (Taha and Hanbury 2015). Others adopt local shape descriptors, such as the normal direction, to reduce noisy outliers in 3D adversarial point clouds (Wen et al. 2020).

Nevertheless, the imperceptibility of existing 3D point cloud adversarial attacks is still unsatisfactory, for the following reasons: (1) Though previous approaches take 3D distance measurements and local shape descriptors into consideration, they neglect the influence of the local curvature, which is also a vital surface property along with the normal direction. (2) Some methods (Wen et al. 2020) try to introduce the local curvature distance into the attack objective function.
However, their definition of the local curvature distance focuses on the average angle between the normal vector and the vectors from a point to each of its neighboring points, which neglects the different perceptibility of distortions along different directions. Besides, such a combination of the misclassification objective function and the distance measurements requires extra hyper-parameters to balance the contributions of the different terms during the generation of adversarial point clouds, which are time-consuming to tune.

In this paper, we propose the Curvature-Invariant Method (CIM) to overcome the above flaws of previous approaches. To improve imperceptibility, CIM utilizes the local curvature information to preserve the local surface geometry of 3D point clouds. Instead of incorporating complicated objectives into the attack objective function, we directly rectify the directions of the back-propagated gradients during the search for adversarial point clouds. Specifically, we first decompose the back-propagated gradient of each point in the point cloud into three orthogonal directions: the normal direction, the maximum principal direction, and the minimum principal direction. The maximum and minimum principal directions reside on the tangent plane of the point. Then, as shown in Figure 3, to maintain the local geometry of the point cloud, we directly modify the directions of the update gradients as follows: (1) On the tangent plane, we reduce the gradient along the large-curvature direction. To trade off imperceptibility and attack effectiveness, we achieve this goal by only keeping the gradient along a linear combination of the two principal directions, with more weight on the minimum principal direction. (2) Along the normal direction, we only keep the gradient along the negative normal direction, i.e., the component with a negative dot product with the normal. We conduct extensive experiments to validate the effectiveness of our proposed CIM.
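Our reading of steps (1) and (2) can be sketched as follows (an illustration under stated assumptions: the weight `w_min` and the exact form of the linear combination are our own choices, not values from the paper):

```python
import numpy as np

def regularize_gradient(g, normal, e_max, e_min, w_min=0.8):
    # Decompose the gradient g into the normal and the two principal
    # directions, keep the tangent components with more weight on the
    # minimum principal (small-curvature) direction, and keep only the
    # negative-normal component of the gradient.
    g_n = np.dot(g, normal)          # component along the normal
    g_max = np.dot(g, e_max)         # along the maximum principal direction
    g_min = np.dot(g, e_min)         # along the minimum principal direction
    tangent = (1.0 - w_min) * g_max * e_max + w_min * g_min * e_min
    normal_part = min(g_n, 0.0) * normal   # drop the positive-normal part
    return tangent + normal_part

normal = np.array([0.0, 0.0, 1.0])
e_max = np.array([1.0, 0.0, 0.0])
e_min = np.array([0.0, 1.0, 0.0])
out = regularize_gradient(np.array([1.0, 1.0, 1.0]), normal, e_max, e_min)
assert out[2] == 0.0   # the outward (positive) normal component is removed
```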
Remarkably, on average, our CIM not only enhances the Hausdorff distance by over 7.2%, but also boosts the adversarial imperceptibility measured by the Gaussian curvature difference by over 14.5%. Our contributions are:
• To improve the imperceptibility of the generated 3D adversarial point clouds, we propose the Curvature-Invariant Method (CIM). CIM utilizes the local curvature information to preserve the local surface geometry of 3D point clouds. To this end, we directly regularize the update gradient by reducing the gradient along the large-curvature direction and only keeping the gradient along the negative normal direction.
• We derive mathematical proofs of two assumptions and of the upper bound of the gradient variation for each point generated by our CIM.
• We conduct comprehensive experiments to validate the advantages of CIM, which improves both the attack success rate and the imperceptibility of 3D adversarial point clouds.

Related Work
3D Point Cloud Recognition
A 3D point cloud is a discrete set of data points representing the 3D shape of an object. With the development of deep learning, various deep learning-based approaches have achieved impressive performance on 3D point cloud recognition. PointNet (Qi et al. 2017a) is a representative approach that applies a multi-layer perceptron to point features and deploys a max-pooling module for aggregating point features efficiently. PointNet++ (Qi et al. 2017b) extends PointNet with single-scale and multi-scale designs to better extract local features. DGCNN (Wang et al. 2019) utilizes point neighbors to better extract local geometric features. PointConv (Wu, Qi, and Fuxin 2019) reformulates the convolution operation to efficiently compute the weight functions for scaling up the network.

3D Adversarial Attacks and Defenses
Current 3D adversarial attacks can be roughly divided into three categories based on the perturbation scheme: adding points, deleting points, and shifting points.
Some researchers utilize the saliency map to delete important points (Zheng et al. 2019) or add synthetic points to the original point cloud (Xiang, Qi, and Li 2019). More research attention is focused on perturbing the coordinates of the points in the point cloud (Tu et al. 2020; Zhou et al. 2020). Usually, the adversarial point clouds are crafted by employing the gradient of a C&W attack objective function that combines the misclassification loss with different quality measurements, including 3D distance metrics (Liu and Hu 2022) and local shape descriptors (Wen et al. 2020). Our Curvature-Invariant Method (CIM) proposes to consider the local surface property (i.e., the curvature) of each point in the point cloud. Previous approaches try to introduce the local curvature distance into the attack objective function. However, their definition of the local curvature distance neglects the different perceptibility of distortions along different directions. Besides, such a combination of the misclassification objective function and the distance measurements requires extra hyper-parameters to balance their power, which are time-consuming to tune. Unlike previous methods, we directly regularize the update gradient on each point based on the curvature property to maintain the local geometry during the search for adversarial point clouds.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)
Multiple defense approaches have been proposed to defend against 3D adversarial attacks. Mainstream schemes are based on pre-processing (Zhou et al. 2019), adversarial training (Liu, Yu, and Su 2019; Sun et al. 2021), and gather-vectors (Dong et al. 2020).
Methodology
Preliminary
A point cloud is composed of an unordered set of points $P = \{p_i\}_{i=1}^{n} \in \mathbb{R}^{n \times 3}$ sampled from the surface of an object with the ground truth label $c$. Each point $p_i \in \mathbb{R}^3$ is a vector representing the coordinates $(x, y, z)$ of point $i$. $n$ is the number of points in the point cloud.
A classifier $f(\cdot)$ takes the point cloud $P$ as input and outputs the label prediction $c' = f(P)$. In the setting of 3D adversarial point cloud attacks, we aim to craft an adversarial point cloud $P^{adv}$ by shifting the original point cloud with $\Delta \in \mathbb{R}^{n \times 3}$ (i.e., $P^{adv} = P + \Delta$) to mislead the classifier (i.e., $f(P^{adv}) \neq c$). With the aim of imperceptibility, the perturbation $\Delta$ should satisfy the $L_p$ constraint such that $\|\Delta\|_p < \epsilon$, where $\|\cdot\|_p$ is the $L_p$ norm. In this paper, we focus on the $L_\infty$ norm by following the baseline (Huang et al. 2022).
Curvature-Invariant Method
The motivation behind our Curvature-Invariant Method (CIM) is to improve the adversarial imperceptibility by taking the local curvature into consideration. Based on our two assumptions, we directly regularize the update gradient for each point in the point cloud during the generation of adversarial point clouds. To this end, we first transform the original axes to proper axes for each point in the point cloud for efficiently regularizing the update gradient. Then we directly regularize the update gradient under the transformed axes based on our two assumptions to generate adversarial point clouds.
Coordinate Transformation
For a given point cloud $P = \{p_i\}_{i=1}^{n} \in \mathbb{R}^{n \times 3}$, the local surface property of a point $p_i$ can be approximated by its $k$ nearest neighbors $N_{p_i}$ on the point cloud (Hoppe et al. 1992). Specifically, we first compute the covariance matrix $C_{p_i}$ of the differences between $p_i$ and each of its $k$ nearest neighbors $N_{p_i}$ as shown in Equation 1:
$$C_{p_i} = \sum_{q \in N_{p_i}} (q - p_i)(q - p_i)^T. \quad (1)$$
The covariance matrix $C_{p_i}$ is positive semi-definite. We then obtain its three eigenvalues $(\lambda_1, \lambda_2, \lambda_3)$ in descending order and the corresponding eigenvectors $(e_1, e_2, e_3)$.
Figure 2: The illustration of coordinate transformation. We project the origin of the original coordinate system to the tangent plane and set the projection point as the origin of the transformed coordinate system. The $x'$ and $y'$ axes are two directions on the tangent plane determined by two parameters $a$ and $b$, while $z'$ is the normal direction.
Following the basic concepts of differential geometry (Do Carmo 2016), the first two eigenvalues ($\lambda_1$ and $\lambda_2$) are the principal curvatures of the local surface of $p_i$ determined by its neighbors. Specifically, $\lambda_1$ is the maximum principal curvature, and the corresponding maximum principal direction is $e_1$. Besides, $\lambda_2$ is the minimum principal curvature, and the corresponding minimum principal direction is $e_2$. The two principal directions define the tangent plane of the point $p_i$. Furthermore, the last eigenvector $e_3$ is the normal vector of the tangent plane, and we denote it as the normal direction. To conveniently regularize the update gradient based on the local geometry, we introduce a new coordinate system, which sets the normal direction $e_3$ to be its $z'$ direction. Since the normal direction is perpendicular to the tangent plane, any pair of orthogonal vectors residing on the tangent plane can form the $x'$ and $y'$ directions of the new coordinate system, respectively. We note that the two principal directions ($e_1$ and $e_2$) form one basis of the tangent plane. Therefore, we can represent the new $x'$ and $y'$ axes as linear combinations of the two principal directions.
Theorem 1. $x' = a \cdot e_1 + b \cdot e_2$ and $y' = b \cdot e_1 - a \cdot e_2$ such that $a, b \in \mathbb{R}$ and $a^2 + b^2 = 1$ form one basis of the tangent plane.
After determining the three axis directions of the new coordinate system, we compute the new origin $O'$ of the transformed coordinate system. We take the projection of the origin $O$ of the original coordinate system onto the tangent plane, as shown in Figure 2, and assign the projected point as the new origin. We can now formulate the transformation from the original coordinate system $O\text{-}xyz$ to the new coordinate system $O'\text{-}x'y'z'$.
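The local-frame estimation above (Eq. (1) plus its eigendecomposition) can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the authors' code; the brute-force neighbor search and the default k = 10 are our own assumptions.

```python
import numpy as np

def local_frame(points, i, k=10):
    """Estimate the local frame of points[i] from its k nearest neighbors.

    Returns the eigenvalues in descending order and the eigenvectors
    e1 (max principal direction), e2 (min principal direction), and
    e3 (normal direction), following Eq. (1).
    """
    p = points[i]
    dists = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(dists)[1:k + 1]]   # skip the point itself
    diff = nbrs - p
    C = diff.T @ diff                            # 3x3 PSD covariance, Eq. (1)
    w, v = np.linalg.eigh(C)                     # ascending eigenvalues
    lam = w[::-1]                                # descending order
    e1, e2, e3 = v[:, 2], v[:, 1], v[:, 0]
    return lam, e1, e2, e3
```

For points sampled from a plane, the smallest eigenvalue is (numerically) zero and its eigenvector recovers the plane normal, matching the interpretation of $e_3$ above.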
As we can observe from Figure 2, the coordinate transformation consists of the translation from $O$ to $O'$ and the rotation of the axes. Therefore, we utilize a rotation matrix and a translation matrix to define the coordinate transformation.
Theorem 2. Denote $S_i : \mathbb{R}^3 \mapsto \mathbb{R}^3$ as the transformation converting the original coordinate system $O\text{-}xyz$ to the new coordinate system $O'\text{-}x'y'z'$. The transformation $S_i$ consists of a rotation matrix $R_i$ and a translation matrix $T_i$. The coordinate $p'_i$ under the new coordinate system and the coordinate $p_i$ under the original coordinate system can be converted into each other by the following equation:
$$p'_i = R_i(p_i + T_i), \qquad p_i = R_i^T p'_i - T_i. \quad (2)$$
The rotation matrix $R_i$ and the translation matrix $T_i$ are given as follows:
$$R_i = \begin{pmatrix} a & b & 0 \\ b & -a & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix}, \qquad T_i = -(p_i^T e_3)\, e_3. \quad (3)$$
Attacking Algorithm
For a given classifier $f$ and the input point cloud $P$, we first apply the coordinate transformation to all the points in the point cloud:
$$P' = \{R_i(p_i + T_i)\}_{i=1}^{N}, \qquad P = \{R_i^T p'_i - T_i\}_{i=1}^{N}. \quad (4)$$
We then compute the max-margin logit loss function following C&W (Carlini and Wagner 2017) as the attack objective function:
$$L(P, c) = \max\left([f(P)]_c - \max_{i \neq c} [f(P)]_i,\ 0\right), \quad (5)$$
where $c$ is the ground truth label of the input point cloud, and $[f(P)]_i$ is the model's confidence score of classifying the input point cloud $P$ into class $i$. With the objective of obtaining the gradient in the transformed coordinates for efficiently regularizing it, we treat the input point cloud $P$ as a function of the point cloud $P'$ in the tangent-normal space via the rotation and translation matrices. The coordinate transformation is differentiable, so we can directly take the gradient of the loss function with respect to the transformed point cloud.
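The transformation of Theorem 2 (Eqs. (2)–(3)) can be sketched directly. The function names here are ours, and $a^2 + b^2 = 1$ is assumed so that the rotation matrix is orthogonal and the inverse map of Eq. (2) holds.

```python
import numpy as np

def make_transform(p, e1, e2, e3, a, b):
    """Build R_i and T_i of Eq. (3); a^2 + b^2 = 1 is assumed."""
    M = np.array([[a, b, 0.0], [b, -a, 0.0], [0.0, 0.0, 1.0]])
    E = np.stack([e1, e2, e3])       # eigenvectors stacked as rows
    R = M @ E                        # orthogonal: M and E both are
    T = -float(p @ e3) * e3
    return R, T

def to_local(p, R, T):
    return R @ (p + T)               # p' = R(p + T), Eq. (2)

def to_world(q, R, T):
    return R.T @ q - T               # p = R^T p' - T, Eq. (2)
```

Setting a = 0 and b = 1 aligns the new x'-axis with $e_2$; the round trip `to_world(to_local(p, R, T), R, T)` recovers the original point.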
We denote the gradient of the transformed point cloud as $G$, where $g_i = (g_{i1}, g_{i2}, g_{i3})$ is the gradient of the attack objective function with respect to the transformed point $p'_i$:
$$G = \{g_i\}_{i=1}^{N} = \nabla_{P'} L(P, c) = \frac{\partial L\left(\{R_i^T p'_i - T_i\}_{i=1}^{N},\ c\right)}{\partial \{p'_i\}_{i=1}^{N}}. \quad (6)$$
After the coordinate transformation, the original coordinate system is transformed into the normal-vector and tangent-plane coordinate system for each point in the point cloud, which is convenient for regularizing the update gradient. In order to keep the local geometry, we consider two assumptions to regularize the update gradient. The first assumption is to constrain the update gradient along the large curvature direction, while the second assumption is to remove the update gradient along the positive normal direction.
Figure 3: Observations for assumptions: (1) Perturbations along the small curvature direction ($e_2$) on the tangent plane keep the local shape. (2) Perturbations along the negative normal direction ($-e_3$) keep the local shape.
From Figure 3 (1), we can derive our first assumption. Specifically, if we alter a point on the boundary of the cylinder along the direction $e_2$, the shape of the cylinder will never change. If we perturb the point along the direction $e_1$, the shape will change greatly. Therefore, with the aim of keeping the local geometry, we propose to regularize the update gradient on the tangent plane by reducing the update gradient along the large curvature direction.
Assumption 1. The perturbation along smaller curvature directions changes the local shape less.
From Figure 3 (2), we can derive our second assumption. Specifically, if the updating direction on the tangent plane is $x'$, shifting the point along the negative normal direction ($-e_3$) is consistent with the local shape. If we perturb the point along the positive normal direction, the local shape will change greatly.
As a result, to keep the local shape, we propose to regularize the update gradient along the normal direction by only allowing the update gradient along the negative normal direction.
Assumption 2. The perturbation along the negative normal direction changes the local shape less.
Based on these two assumptions, we regularize the update gradient to keep the local shape of the original point cloud. We detail our regularization scheme on the tangent plane and along the normal direction as follows.
Gradient Regularization on the Tangent Plane. We propose an adaptive gradient regularization scheme on the tangent plane based on the first assumption. To preserve the local shape, we should reduce the perturbation along the large curvature direction on the tangent plane. According to the properties of differential geometry, the curvature of any direction on the tangent plane is bounded by the two principal curvatures. Therefore, we should reduce the update gradient along the maximum principal direction. Besides, we note that if the difference between the two principal curvatures is large, removing the update gradient along the maximum principal direction can largely preserve the local shape. However, if the maximum principal curvature is similar to the minimum principal curvature, perturbations along the maximum principal direction change the local shape similarly to those along the minimum principal direction. Therefore, to trade off imperceptibility and attack effectiveness, we keep the update gradient along a linear combination of the two principal directions with more weight on the minimum principal direction.
We first detail how to conveniently reduce the update gradient along the large curvature direction. From the Coordinate Transformation section, we can see that the parameters $a$ and $b$ determine the transformed directions.
Therefore, we can rectify the update gradient on the tangent plane by tuning the parameters $a$ and $b$ and only keeping the update gradient along the $x'$ direction. For example, we can set $a = 0$ and $b = 1$ to align the new $x'$-axis with the minimum principal direction (i.e., $x' = e_2$) and the new $y'$-axis with the maximum principal direction (i.e., $y' = e_1$). Afterwards, we can remove the update gradient along the $y'$-axis to only allow the perturbation along the $x'$-axis, which is the minimum principal direction.
We then detail how to trade off imperceptibility and attack effectiveness. We define the curvature ratio by the following equation:
$$c_r = \frac{\lambda_2}{\sqrt{\lambda_1^2 + \lambda_2^2}}. \quad (7)$$
The curvature ratio satisfies the inequality $0 < c_r \le \frac{1}{\sqrt{2}}$, since $0 < \lambda_2 \le \lambda_1$. To identify a small difference between the two principal curvatures, we utilize a hyper-parameter $\gamma$. If the curvature ratio is smaller than $\gamma$, the curvature difference is defined to be large. In this case, we completely remove the gradient along the maximum principal direction to keep the local shape. In contrast, if the curvature ratio is larger than $\gamma$, the curvature difference is defined to be small. In this case, we do not need to completely remove the gradient along the maximum principal direction. Instead, we deploy the adaptive gradient direction by taking the curvature ratio into consideration. Specifically, we set the parameters $a = \frac{\lambda_2}{\sqrt{\lambda_1^2 + \lambda_2^2}}$ and $b = \frac{\lambda_1}{\sqrt{\lambda_1^2 + \lambda_2^2}}$. As such, we combine the two principal directions with more weight on the minimum principal direction. We denote the resulting update direction as the balanced principal direction. In summary, the gradient update direction $x'$ is:
$$x' = \begin{cases} e_2 & c_r < \gamma \\ \frac{\lambda_2}{\sqrt{\lambda_1^2 + \lambda_2^2}} \cdot e_1 + \frac{\lambda_1}{\sqrt{\lambda_1^2 + \lambda_2^2}} \cdot e_2 & c_r \ge \gamma \end{cases} \quad (8)$$
With the aim of rectifying the update gradient along the balanced principal direction, we can directly modify the gradient of point $i$ by the following update:
$$g_{i1} \leftarrow g_{i1}, \qquad g_{i2} \leftarrow 0. \quad (9)$$
Gradient Regularization along the Normal Direction.
We describe the gradient rectification along the normal direction based on the second assumption. With the objective of only allowing the gradient along the negative normal direction, we can directly rectify the update gradient by the formula:
$$g_{i3} \leftarrow \min(g_{i3},\ 0). \quad (10)$$
Algorithm 1: Curvature-Invariant Method
1: Input: input point cloud $P$ and its ground-truth label $c$
2: Input: the classifier $f$, attack budget $\epsilon$, and iteration number $T$
3: Input: hyper-parameter $\gamma$ and loss function $L$
4: Output: adversarial point cloud $P_T$
5: $\alpha = \epsilon / T$, $P_0 = P$
6: for $t = 0$ to $T - 1$ do
7:   Compute $(\lambda_1, \lambda_2)$ and $(e_1, e_2, e_3)$ ▷ Eq. (1)
8:   $c_r = \lambda_2 / \sqrt{\lambda_1^2 + \lambda_2^2}$
9:   if $c_r < \gamma$ then
10:    $a = 0$, $b = 1$
11:  else
12:    $a = \lambda_2 / \sqrt{\lambda_1^2 + \lambda_2^2}$, $b = \lambda_1 / \sqrt{\lambda_1^2 + \lambda_2^2}$
13:  end if
14:  Transform $P_t$ to $P'_t$ ▷ Eq. (2)
15:  $G = \partial L(\{R_t^T P'_t - T_t\}, c) / \partial \{P'_t\}$
16:  $G_2 = 0$ ▷ Eq. (9)
17:  $G_3 = \min(G_3, 0)$ ▷ Eq. (10)
18:  $P'_{t+1} = P'_t - \alpha \cdot G / \|G\|_1$
19:  Transform $P'_{t+1}$ to $P_{t+1}$ ▷ Eq. (2)
20:  $P_{t+1} = \mathrm{Clip}_\epsilon\{P_{t+1}\}$
21: end for
In a nutshell, we rectify the update gradient of each point in the point cloud by the constraints on the tangent plane and along the normal direction. Our overall attacking algorithm is shown in Algorithm 1. For our Curvature-Invariant Method, we can compute the upper bound of the variation of the loss function for each point in one iteration.
Theorem 3. Given the loss function $L$ and the variable point $i$ in the point cloud $(x'_i, y'_i, z'_i)$ initialized as $(p'_{i1}, p'_{i2}, p'_{i3})$, the variation of $L$ is upper bounded by $\sqrt{g_{i1}^2 + g_{i2}^2}$.
Experiments
In this section, we conduct extensive experiments to validate the effectiveness of our proposed Curvature-Invariant Method. We first clarify the setup of the experiments. After that, we demonstrate the white-box attacking performance and the imperceptibility measures of our method against competitive baseline methods. We also compare the attack effectiveness on defense models.
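Before turning to the experiments, Algorithm 1 can be sketched as a NumPy loop. This is a simplified illustration under stated assumptions, not the authors' implementation: the loss gradient is abstracted as a caller-supplied `grad_fn`, and the per-point local frames are precomputed and held fixed, whereas Algorithm 1 recomputes them at every iteration.

```python
import numpy as np

def cim_attack(P, grad_fn, frames, eps=0.16, T=5, gamma=0.3):
    """Sketch of Algorithm 1.

    P: (n, 3) point cloud; grad_fn(P) -> (n, 3) gradient of the attack loss;
    frames[i] = (lam1, lam2, e1, e2, e3) for point i (assumed fixed here).
    """
    alpha = eps / T
    P_adv = P.copy()
    for _ in range(T):
        G = grad_fn(P_adv)
        G_new = np.zeros_like(G)
        for i, (l1, l2, e1, e2, e3) in enumerate(frames):
            denom = np.sqrt(l1 ** 2 + l2 ** 2)
            cr = l2 / denom                                  # Eq. (7)
            # Eq. (8): pure e2 if the curvature gap is large,
            # else the balanced principal direction.
            x_dir = e2 if cr < gamma else (l2 * e1 + l1 * e2) / denom
            g = G[i]
            g_t = float(g @ x_dir) * x_dir                   # Eq. (9)
            g_n = min(float(g @ e3), 0.0) * e3               # Eq. (10)
            G_new[i] = g_t + g_n
        step = G_new / (np.abs(G_new).sum() + 1e-12)         # G / ||G||_1
        P_adv = P_adv - alpha * step
        P_adv = np.clip(P_adv, P - eps, P + eps)             # L_inf clip
    return P_adv
```

By construction the output stays within the $L_\infty$ budget, and no point ever moves along the positive normal direction.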
The experiment results demonstrate the effectiveness of our method, which improves both the attack success rate and the imperceptibility of adversarial examples compared with baseline methods. Furthermore, we present an ablation study on the attack budget to further demonstrate the superiority of our approach in terms of attacking performance and imperceptibility.
Experiment Setup
We follow the protocol of the baseline method (Huang et al. 2022) to set up the experiments for a fair comparison, attacking 3D point cloud classification models trained on ModelNet40 (Wu et al. 2015). ModelNet40 is also the most widely utilized benchmark task for 3D point cloud adversarial attacks (Wen et al. 2020; Xiang, Qi, and Li 2019; Huang et al. 2022). Here are the details of the experiment setup.

Table 1: The attacking performance on ModelNet40. ASR is the attack success rate (%) and MSE is the mean square error. DH measures the Hausdorff distance (10^-2) and DG shows the Gaussian curvature distance (10^-4). The best result is in bold.

Method | PointNet: ASR MSE DH DG | PointNet++: ASR MSE DH DG | PointConv: ASR MSE DH DG | DGCNN: ASR MSE DH DG
3d-ADV | 90.6 3.19 4.05 6.88 | 92.8 4.44 4.13 12.22 | 88.3 4.36 4.01 12.80 | 95.8 5.39 4.11 15.00
GeoA   | 92.5 2.16 3.47 5.35 | 94.0 2.61 3.20 7.38  | 92.5 3.07 3.59 9.90  | 95.7 3.34 3.12 8.51
SI-PC  | 94.2 1.83 3.45 4.54 | 91.9 2.83 3.49 8.57  | 93.2 3.15 3.68 9.47  | 96.3 3.44 3.12 7.97
CIM    | 96.3 1.82 3.29 4.18 | 94.9 2.55 3.13 6.68  | 94.8 2.93 3.36 8.00  | 96.5 3.25 2.96 6.96

Dataset. We follow the dataset selection of the baseline method (Huang et al. 2022) by utilizing the ModelNet40 dataset. ModelNet40 consists of 12,311 CAD models from 40 object categories, in which 9,843 models are intended for training and the other 2,468 for testing. Following the preprocessing of PointNet (Qi et al. 2017a), we uniformly sample 1,024 points from the surface of each object and rescale them into a unit cube.
Models.
We choose four representative 3D point cloud recognition models, including PointNet (Qi et al. 2017a), PointNet++ with MSG (Qi et al. 2017b), PointConv (Wu, Qi, and Fuxin 2019), and DGCNN (Wang et al. 2019), as the target models to craft adversarial point clouds and directly test the models under the white-box setting. Furthermore, we also consider defended models as targets. We select three defense methods covering the input preprocessing-based defense SRS, point cloud statistical outlier removal (SOR), and DUP-Net (Yang et al. 2019).
Baseline Methods. We compare our approach with three state-of-the-art attacking algorithms: 3d-ADV (Xiang, Qi, and Li 2019), GeoA (Wen et al. 2020), and SI-PC (Huang et al. 2022). 3d-ADV and GeoA are optimization approaches, which incorporate different quality measures like MSE (Xiang, Qi, and Li 2019), Hausdorff distance (Taha and Hanbury 2015), and local curvature (Wen et al. 2020) into the loss function to guarantee the quality of the adversarial point cloud. SI-PC, in contrast, is a gradient regularization approach, which drops the gradient along the normal direction to keep the shape of the adversarial point cloud. We compare our approach with them under various settings to validate the effectiveness of our method.
Evaluation. We first evaluate the imperceptibility of the crafted adversarial point clouds from two perspectives. We compare the l2 distance (MSE) and Hausdorff distance DH (Taha and Hanbury 2015) between the original point cloud and the adversarial point cloud to measure the perturbation generated by the attacking methods. We also evaluate the imperceptibility from the perspective of the local surface: following (Miao et al. 2022), we compute the difference of the Gaussian curvature DG (Do Carmo 2016) between the original point cloud and the adversarial one, where the Gaussian curvature is the product of the two principal curvatures.
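The distance-based measures can be sketched as follows. Note the paper may use a one-sided Hausdorff variant; this sketch shows the standard two-sided definition, and both function names are ours.

```python
import numpy as np

def mse(P, Q):
    """Mean squared error between paired point sets of shape (n, 3)."""
    return float(np.mean(np.sum((P - Q) ** 2, axis=1)))

def hausdorff(P, Q):
    """Two-sided Hausdorff distance between point sets (n, 3) and (m, 3)."""
    # Pairwise distance matrix via broadcasting: d[i, j] = ||P_i - Q_j||.
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

The Hausdorff distance penalizes the single worst-matched point, which is why it is sensitive to the outliers these attacks try to avoid.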
In addition to measuring imperceptibility, we also evaluate the attacking performance by deploying the attack success rate (ASR). The attack success rate is the ratio of the adversarial examples that successfully mislead the target model among all the generated adversarial examples. All the experiments are conducted on a server equipped with one TITAN X GPU.

Table 2: The attack success rates (%) of the adversarial point clouds on three defense mechanisms. The examples are generated on the PointNet model and the best result is in bold.

Attack | SOR  | SRS  | DUP-Net | Average
3d-ADV | 56.5 | 56.5 | 58.3    | 57.1
GeoA   | 60.6 | 63.3 | 60.8    | 61.6
SI-PC  | 77.7 | 70.0 | 79.3    | 75.7
CIM    | 78.4 | 73.7 | 80.4    | 77.5

Parameter. For a fair comparison, we set the maximum $L_\infty$ budget of all the attacking methods to $\epsilon = 0.16$. In addition, the number of iterations is set to $T = 5$, and the step length is 0.07. In the experiments, we adopt the untargeted attack under the same setting to evaluate the imperceptibility and attacking performance. For our approach, we set the hyper-parameter $\gamma$ that regularizes the gradient on the tangent plane to 0.3.
Performance Comparison
In this section, we analyze the performance of our approach against the state-of-the-art baselines from the perspectives of imperceptibility and attack success rate, respectively. As shown in Table 1, our approach achieves the highest white-box attack success rate, 95.6% on average, compared with all the baselines. In addition, our method outperforms all the other baselines on all three measures of imperceptibility, demonstrating the high quality of adversarial samples generated by our approach. In particular, we outperform the other baselines on the Hausdorff distance and the Gaussian curvature measures by a large margin of 7.2% and 14.5% improvement, respectively.
Though GeoA considers taking the curvature into the loss function, the complex compound loss terms hinder the attacking algorithm from achieving high quality, and GeoA disregards the different perceptibility of distortions along different directions. Furthermore, SI-PC regularizes the gradient by allowing the perturbation along the tangent plane to keep the local shape. In contrast, our approach takes the curvature into consideration, and we propose to constrain the gradient on the tangent plane along large curvature directions to preserve the local shape. Furthermore, we allow the negative gradient along the normal direction to further enhance the performance.
Figure 4: Ablation study on budget.
In addition, we assess the performance against models with defense mechanisms. We take PointNet as the source model and generate adversarial examples for all the baseline methods. Then we test the prediction accuracy of the adversarial examples on PointNet with defense methods, as shown in Table 2. Our proposed method achieves a 77.5% attack success rate on average and surpasses all of the baselines by a margin of 1.8%. From the above experiments, our proposed method achieves better imperceptibility compared with the baselines. We conclude that the reasons why CIM has good imperceptibility are twofold. First, CIM utilizes the information of local curvature to preserve the local geometry of 3D point clouds. Second, we regularize the update gradient by reducing the gradient along the large curvature direction and only keeping the gradient along the negative normal direction.
Qualitative Results
We further visualize the adversarial point clouds to show the qualitative results. We observe from Figure 1 that our attacking algorithm preserves the local curvature well. We can hardly find any outliers in the adversarial point clouds generated by our approach.
Notably, our approach preserves the local shape better than 3d-ADV and SI-PC. Furthermore, our method produces fewer outliers than GeoA. The qualitative results further validate the good imperceptibility of our proposed approach against all the baselines.
Ablation Study
We conduct ablation studies on the influence of two factors: 1) Inner factor: the regularization on the tangent plane and along the normal direction, where we examine the contribution of each regularization to the imperceptibility and attacking performance. 2) Outer factor: the query budget and iteration number.
Figure 5: Ablation study on iteration.

Table 3: The results of the ablation study.

Attack                 | MSE  | DH   | DG
None                   | 1.83 | 3.65 | 5.29
Normal Regularization  | 1.83 | 3.63 | 5.33
Tangent Regularization | 1.83 | 3.36 | 4.24
Tangent+Normal (ours)  | 1.82 | 3.29 | 4.18

Regularization. We conduct an ablation study on the gradient regularization of CIM and observe the imperceptibility to show the effectiveness of the gradient regularization on both the tangent plane and the normal direction. We choose PointNet as the source model with different regularization strategies. As shown in Table 3, regularizing the gradient on the tangent plane largely enhances the imperceptibility, while constraining the gradient along the normal direction alone does not boost the imperceptibility. However, combining the two regularizations together further improves the imperceptibility, which is consistent with our two assumptions.
Query Budget & Iteration Number. We measure the performance of adversarial examples generated from the PointNet model by altering these factors. We observe from Figure 4 and Figure 5 that our attacking algorithm outperforms all the baselines under all the outer factor settings.
Conclusion
In this paper, we find that current attacking methods fail to keep the local shape of the adversarial point cloud.
Therefore, we propose the Curvature-Invariant Method, which constrains the gradient on the tangent plane along a small curvature direction and keeps only the negative gradient along the normal direction. Our approach boosts both the attack imperceptibility and the attack success rate.
Acknowledgments
The work described in this paper was supported by the National Natural Science Foundation of China (Grant No. 62206318) and the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14206921 of the General Research Fund).
References
Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), 39–57. IEEE.
Deng, Y.; Wu, W.; Zhang, J.; and Zheng, Z. 2023. Blurred-Dilated Method for Adversarial Attacks. In Thirty-seventh Conference on Neural Information Processing Systems.
Do Carmo, M. P. 2016. Differential geometry of curves and surfaces: revised and updated second edition. Courier Dover Publications.
Dong, S.; Wang, P.; and Abbas, K. 2021. A survey on deep learning and its applications. Computer Science Review, 40: 100379.
Dong, X.; Chen, D.; Zhou, H.; Hua, G.; Zhang, W.; and Yu, N. 2020. Self-robust 3d point recognition via gather-vector guidance. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11513–11521. IEEE.
Fan, H.; Su, H.; and Guibas, L. J. 2017. A point set generation network for 3d object reconstruction from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 605–613.
Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Hamdi, A.; Rojas, S.; Thabet, A.; and Ghanem, B. 2020. Advpc: Transferable adversarial perturbations on 3d point clouds.
In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XII 16, 241–257. Springer.
Hoppe, H.; DeRose, T.; Duchamp, T.; McDonald, J.; and Stuetzle, W. 1992. Surface reconstruction from unorganized points. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, 71–78.
Huang, Q.; Dong, X.; Chen, D.; Zhou, H.; Zhang, W.; and Yu, N. 2022. Shape-invariant 3D Adversarial Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15335–15344.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 740–755. Springer.
Liu, D.; and Hu, W. 2022. Imperceptible transfer attack and defense on 3d point cloud classification. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Liu, D.; Yu, R.; and Su, H. 2019. Extending adversarial attacks and defenses to deep 3d point cloud classifiers. In 2019 IEEE International Conference on Image Processing (ICIP), 2279–2283. IEEE.
Miao, Y.; Dong, Y.; Zhu, J.; and Gao, X.-S. 2022. Isometric 3D Adversarial Examples in the Physical World. In Oh, A. H.; Agarwal, A.; Belgrave, D.; and Cho, K., eds., Advances in Neural Information Processing Systems.
Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 652–660.
Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30.
Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. 2015.
Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3): 211–252.
Sun, J.; Cao, Y.; Choy, C. B.; Yu, Z.; Anandkumar, A.; Mao, Z. M.; and Xiao, C. 2021. Adversarially robust 3d point cloud recognition using self-supervisions. Advances in Neural Information Processing Systems, 34: 15498–15512.
Taha, A. A.; and Hanbury, A. 2015. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Medical Imaging, 15(1): 1–28.
Tu, J.; Ren, M.; Manivasagam, S.; Liang, M.; Yang, B.; Du, R.; Cheng, F.; and Urtasun, R. 2020. Physically realizable adversarial examples for lidar object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13716–13725.
Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S. E.; Bronstein, M. M.; and Solomon, J. M. 2019. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (ToG), 38(5): 1–12.
Wen, Y.; Lin, J.; Chen, K.; Chen, C. P.; and Jia, K. 2020. Geometry-aware generation of adversarial point clouds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6): 2984–2999.
Wu, W.; Qi, Z.; and Fuxin, L. 2019. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9621–9630.
Wu, W.; Su, Y.; Chen, X.; Zhao, S.; King, I.; Lyu, M. R.; and Tai, Y.-W. 2020a. Boosting the transferability of adversarial samples via attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1161–1170.
Wu, W.; Su, Y.; Chen, X.; Zhao, S.; King, I.; Lyu, M. R.; and Tai, Y.-W. 2020b. Towards global explanations of convolutional neural networks with concept attribution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8652–8661.
Wu, W.; Su, Y.; Lyu, M. R.; and King, I. 2021. Improving the transferability of adversarial samples with adversarial transformations.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9024–9033.
Wu, W.; Xu, H.; Zhong, S.; Lyu, M. R.; and King, I. 2019. Deep Validation: Toward detecting real-world corner cases for deep neural networks. In IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 125–137. IEEE.
Wu, W.; Zhang, J.; Wei, V. J.; Chen, X.; Zheng, Z.; King, I.; and Lyu, M. R. 2023. Practical and Efficient Model Extraction of Sentiment Analysis APIs. In IEEE/ACM 45th International Conference on Software Engineering (ICSE), 524–536. IEEE.
Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; and Xiao, J. 2015. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1912–1920.
Xiang, C.; Qi, C. R.; and Li, B. 2019. Generating 3d adversarial point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9136–9144.
Yang, J.; Zhang, Q.; Fang, R.; Ni, B.; Liu, J.; and Tian, Q. 2019. Adversarial attack and defense on point sets. arXiv preprint arXiv:1902.10899.
Yi, L.; Kim, V. G.; Ceylan, D.; Shen, I.-C.; Yan, M.; Su, H.; Lu, C.; Huang, Q.; Sheffer, A.; and Guibas, L. 2016. A scalable active framework for region annotation in 3d shape collections. ACM Transactions on Graphics (ToG), 35(6): 1–12.
Zhang, J.; Huang, J.-t.; Wang, W.; Li, Y.; Wu, W.; Wang, X.; Su, Y.; and Lyu, M. R. 2023a. Improving the Transferability of Adversarial Samples by Path-Augmented Method. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8173–8182.
Zhang, J.; Huang, Y.; Wu, W.; and Lyu, M. R. 2023b. Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16415–16424.
Zhang, J.; Huang, Y.-C.; Wu, W.; and Lyu, M.
R. 2023c. Towards semantics-and domain-aware adversarial attacks. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 536–544. Zhang, J.; Wu, W.; Huang, J.-t.; Huang, Y.; Wang, W.; Su, Y.; and Lyu, M. R. 2022. Improving Adversarial Transferability via Neuron Attribution-Based Attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14993–15002. Zhang, J.; Xu, Z.; Cui, S.; Meng, C.; Wu, W.; and Lyu, M. R. 2023d. On the Robustness of Latent Diffusion Models. arXiv preprint arXiv:2306.08257. Zheng, T.; Chen, C.; Yuan, J.; Li, B.; and Ren, K. 2019. Pointcloud saliency maps. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1598–1606. Zhou, H.; Chen, D.; Liao, J.; Chen, K.; Dong, X.; Liu, K.; Zhang, W.; Hua, G.; and Yu, N. 2020. Lg-gan: Label guided adversarial network for flexible targeted attack of point cloud based deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10356–10365. Zhou, H.; Chen, K.; Zhang, W.; Fang, H.; Zhou, W.; and Yu, N. 2019. Dup-net: Denoiser and upsampler network for 3d adversarial point clouds defense. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1961–1970. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7150
Cross-Modal Feature Distribution Calibration for Few-Shot Visual Question Answering

Jing Zhang*†, Xiaoqiang Liu*, Mingzhe Chen, Zhe Wang†
Department of Computer Science and Engineering, East China University of Science and Technology, China
jingzhang@ecust.edu.cn, 2681332916lxq@gmail.com, ecustcmz@gmail.com, wangzhe@ecust.edu.cn

Abstract

Few-shot Visual Question Answering (VQA) realizes few-shot cross-modal learning, which is an emerging and challenging task in computer vision. Currently, most few-shot VQA methods simply extend few-shot classification methods to cross-modal tasks while ignoring the spatial distribution properties of multimodal features and cross-modal information interaction. To address this problem, we propose a novel Cross-modal feature Distribution Calibration Inference Network (CDCIN) in this paper, in which a new concept named visual information entropy is proposed to realize multimodal feature distribution calibration through cross-modal information interaction for more effective few-shot VQA. Visual information entropy is a statistical variable that represents the spatial distribution of visual features guided by the question; it is aligned before and after the reasoning process to mitigate redundant information and improve the multimodal features via our proposed visual information entropy calibration module. To further enhance the inference ability of cross-modal features, we additionally propose a novel pre-training method, in which the reasoning sub-network of CDCIN is pre-trained on the base classes in a VQA classification paradigm and fine-tuned on the few-shot VQA datasets. Extensive experiments demonstrate that our proposed CDCIN achieves excellent performance on few-shot VQA and outperforms state-of-the-art methods on three widely used benchmark datasets.
Introduction

With the booming development of deep learning in recent years, various Visual Language Processing (VLP) tasks have attracted widespread attention from researchers, such as image captioning (Pan et al. 2020), visual entailment (Tran et al. 2022), Visual Question Answering (VQA) (Jiang et al. 2020a; Penamakuri et al. 2023; Dancette et al. 2023) and so on. As an important topic in VLP, VQA is a typical cross-modal problem that requires analyzing visual content and question semantics simultaneously. Currently, it is typically viewed as a classification problem whose goal is to predict the correct answer given an image and a question.

*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Visual information distribution at different inference stages. The shade of the color indicates how likely the information in that region is to be caught by the model.

The early joint embedding models (Fukui et al. 2016; Kim et al. 2016; Ben-Younes et al. 2019) for VQA focused on the fusion of multimodal features and cross-modal interactions, most of which were realized through attention mechanisms (Yu et al. 2018; Kim, Jun, and Zhang 2018; Guo, Yao, and Chu 2023) and graph neural networks (Huang et al. 2020; Li et al. 2019). These classical VQA methods are generally trained on a large amount of labeled multimodal data and ignore the sparsity problem in most categories caused by the diversity of multimodal data. As a machine learning paradigm that aims to recognize new concepts from few samples, few-shot learning caters to the characteristics of the VQA task and trains an effective model with a small amount of labeled data. For this reason, some methods (Dong et al. 2018; Yin et al. 2021) attempt to apply few-shot learning to few-shot cross-modal learning.
However, these methods cannot effectively handle cross-modal information inference or constrain the multimodal feature distribution, which limits the performance of few-shot VQA. Cross-modal semantic inference facilitates joint reasoning based on correlation analysis between modalities, which is important for few-shot VQA. For a given image and question, few-shot VQA can be divided into two steps: understanding and reasoning. The performance of "inference" becomes crucial for answer prediction once the process of "understanding" can be completed well by existing encoders such as ViT (Dosovitskiy et al. 2021), SwinT (Liu et al. 2021b) and so on. Cross-modal "inference" aims to explore the relationships among multimodal data and use one modality to guide the filtering and enhancement of features in another modality. General few-shot VQA methods barely have the ability to perform cross-modal semantic inference, so the semantics of the question cannot effectively guide the visual encoding, and the visual encoding may focus on unimportant regions. For example, the information distribution in Figure 1(a) shows that without cross-modal inference, the general model cannot accurately explore the relationship between the question word "mustache" and the visual object. It captures the blue region associated with the concept "person". Obviously, the blue region contains noise, which pushes the spatial distribution of features far from the corresponding categories. To address this problem, the model needs to learn how to discriminate redundant information and adjust the feature distribution to a more reasonable state.
As shown in Figure 1(b), if the model captures the critical visual region "mustache" (highlighted in red) and removes the unrelated regions, it will narrow the distance between the multimodal features of samples of the same category in the feature space. Based on the above analysis, we propose a Cross-modal feature Distribution Calibration Inference Network (CDCIN) for few-shot VQA in this paper. A novel concept of visual information entropy is defined, which takes a statistical form similar to information entropy in information theory. It defines the entropy (average self-information) of the visual information source under the condition of a given question and reflects the spatial information distribution of visual features guided by the question. We also propose a Visual Information Entropy Calibration Module (VIECM) to realize the alignment of visual information entropy, in which the consistency of the visual information entropy between pre-inference (before multimodal relationship interaction) and post-inference (after multimodal relationship interaction) is used to filter information and calibrate the feature distribution. Additionally, we propose an inference enhancement pre-training strategy, which strengthens the representation ability of multimodal features by pre-training on a classic VQA paradigm. Extensive experiments on three benchmark datasets demonstrate that our proposed CDCIN performs excellently and outperforms state-of-the-art approaches.

Related Work

Although visual question answering is a hot topic in computer vision, few-shot cross-modal learning has only recently started to attract more attention and still has great research potential. In this section, we discuss works related to our method from the perspectives of visual question answering, few-shot learning and few-shot visual question answering.
Visual Question Answering

Visual question answering is a challenging cross-modal analysis task since it requires establishing relationships between visual and textual modalities to achieve cross-modal semantic reasoning. Most current methods (Zhou et al. 2015; Yu et al. 2017; Ding et al. 2022) focus on improving the fusion strategy of visual and textual features to achieve good performance. These methods usually ignore the significant guidance that question semantics provide for image understanding and are not good at relationship reasoning. To address these shortcomings, some methods (Xu and Saenko 2016; Anderson et al. 2018; Pan et al. 2022) attempt to utilize the attention mechanism to reinforce cross-modal interaction. Although such attention-based models can realize a certain degree of multimodal semantic inference, it is difficult for them to reach high-level reasoning with limited semantic interactions. Therefore, some VQA works (Huang et al. 2020; Jing et al. 2022; Cao et al. 2019) are devoted to enhancing the reasoning ability of attention networks to achieve deeper cross-modal information interaction and reasoning. The above methods obtain good performance on classical VQA, but they suffer severe failures on classes with small data sizes. This suggests that handling few-shot VQA with few-shot learning methods is meaningful and promising work.

Few-shot Learning

Few-shot learning is a problem-solving paradigm that teaches the model how to learn and enables it to recognize new concepts from few samples. Many few-shot learning works (Jiang et al. 2020b; Zhang et al. 2022; Li, Wang, and Hu 2021) use metric-based approaches to generate prototype representations for fast training of classifiers. Some researchers (Yang et al. 2021; Ma et al. 2020) attempt to use different calibration methods to reinforce the performance of few-shot learning.
While numerous existing few-shot learning methods are applicable to multimodal data, many of them lack a comprehensive examination of the interplay between modalities. When dealing with cross-modal tasks that require deep interaction between vision and semantics, these methods expose their insufficiencies. Therefore, there is still significant room for the development of few-shot learning on cross-modal reasoning tasks.

Few-shot Visual Question Answering

Few-shot visual question answering aims to train an excellent VQA model with limited data, which requires outstanding cross-modal reasoning and powerful feature representation ability. Dong et al. (2018) introduced few-shot learning to VQA and image captioning and proposed a fast parameter adaptation method to train the joint image-text learner. Yin et al. (2021) proposed a two-stage network, where each stage is responsible for intra-modal or inter-modal relation capture; they extract features at different levels by constructing visual feature maps and semantic relationship maps with a multi-layer attention mechanism. At present, research results on few-shot VQA are relatively few, and the topic has not yet attracted widespread attention. The two papers mentioned above are considered pioneering work on few-shot VQA.

Figure 2: The framework of the cross-modal feature distribution calibration inference network. Given the support and query sets, the CDCIN extracts visual and textual features and feeds them into CAIM to model cross-modal interaction. The distribution of visual information before and after inference is aligned in the VIECM.

Although these works solve the problems of few-shot VQA to some extent, they fail to achieve efficient cross-modal inference to adjust the integrated multimodal feature distribution.
This leads to the dispersion of the feature distribution for each class, with samples lying far from the class center, which adversely affects the overall performance. To address this issue, we propose a cross-modal feature distribution calibration inference network for few-shot VQA that aligns the visual information entropy to enhance the ability of feature distribution calibration. Extensive experiments on widely used benchmark datasets demonstrate that the performance of our method surpasses state-of-the-art few-shot VQA methods by a large margin.

Methodology

In this section, we describe the problem definition of few-shot VQA and introduce our proposed CDCIN and pre-training method in detail.

Problem Statement

For each task τ, few-shot VQA is formulated as an N-way K-shot classification problem with N classes sampled from the answer set and K examples per class. The answer set contains the labels of the corresponding samples. If the number of query examples for each class is M, we get a query set {Q} with N × M samples and a support set {S} with N × K samples. A training sample is a triplet containing an image, a question, and an answer, so in practice there are three query subsets {Q_I}, {Q_T}, {Q_A} and three support subsets {S_I}, {S_T}, {S_A}. We combine samples from the same modality into the image set {I} = {S_I, Q_I}, the question set {T} = {S_T, Q_T} and the answer set {A} = {S_A, Q_A}. In this work, the features of the N × K + N × M samples in {I} and {T} are extracted by the visual embedding network ψ(·; θ_ψ) and the text embedding network ϕ(·; θ_ϕ). During the classification phase, the fused multimodal features are divided into the corresponding support set {S_multi} of size N × K and query set {Q_multi} of size N × M; the support set is taken as input to train the CDCIN by minimizing the loss over the corresponding query set.
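As a concrete illustration of this episodic setup, the following sketch samples one N-way K-shot episode from an in-memory list of (image, question, answer) triplets. The `sample_episode` helper and the dict-based sample format are illustrative assumptions, not the paper's actual data pipeline:

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, m_query=3, rng=None):
    """Sample one N-way K-shot episode of (image, question, answer) triplets.

    `dataset` is a hypothetical list of dicts with keys "image", "question"
    and "answer"; each episode yields N x K support and N x M query samples.
    """
    rng = rng or random.Random(0)
    by_answer = defaultdict(list)
    for sample in dataset:
        by_answer[sample["answer"]].append(sample)
    # Only answer classes with at least K + M samples can form an episode.
    eligible = [a for a, group in by_answer.items() if len(group) >= k_shot + m_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for answer in classes:
        picked = rng.sample(by_answer[answer], k_shot + m_query)
        support.extend(picked[:k_shot])   # K triplets per class -> N x K total
        query.extend(picked[k_shot:])     # M triplets per class -> N x M total
    return support, query
```

During meta-training, the support set would be encoded by ψ and ϕ while the loss is computed on the query set, as described above.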
Cross-Modal Feature Distribution Calibration Inference Network

In this paper, we focus on deep cross-modal inference and on helping the model filter out redundant information to calibrate the spatial distribution of multimodal features for few-shot VQA. To this end, we propose a Cross-modal feature Distribution Calibration Inference Network (CDCIN), as illustrated in Figure 2. In the CDCIN, a new concept called visual information entropy is proposed to reflect the spatial distribution of visual information, which assists in excluding irrelevant information. Specifically, CDCIN mainly includes a Co-Attention Inference Module (CAIM) and a Visual Information Entropy Calibration Module (VIECM). In order to strengthen the representations of the two modalities, we also design an Inference Enhancement Pre-training strategy. In the CDCIN, the word tokens of the question are embedded by pre-trained GloVe (Pennington, Socher, and Manning 2014), and the visual features are extracted by the Swin Transformer (Liu et al. 2021b). The visual and question features are then simultaneously input to the CAIM to mine fine-grained cross-modal interactions. The integrated features are sent to the multimodal feature adaptive fusion module in the VIECM to generate the visual information entropy and the multimodal feature vector of the later reasoning stage. Finally, information distribution alignment is achieved through the information consistency loss.

Co-Attention Inference

Existing few-shot learning methods (Ye et al. 2020; Zhang et al. 2022) usually do not consider cross-modal inference, resulting in insufficient reasoning ability. To fully realize the reasoning process of VQA in the few-shot setting, we introduce a Co-Attention Inference Module (CAIM).
It is mainly composed of masked multi-head self-attention and cross attention and realizes fine-grained interactions between images and questions. The CAIM deeply explores the correlation between visual and textual features. Specifically, the text encoder is responsible for generating transitional features ready for cross-modal interactive operations, which is mainly achieved through multi-head self-attention. Given queries, keys and values of feature dimension d, the attended features are obtained as follows:

SA(Q, K, V) = softmax(QK^T / √d) V,    (1)

where Q ∈ R^{m×d}, K ∈ R^{n×d} and V ∈ R^{n×d} are the query, key and value matrices, respectively.

Multi-head self-attention splits the features into t parallel "heads", each of which independently performs the scaled dot-product attention:

T = MHSA(Q, K, V) = [h_1, h_2, ..., h_t] W_h,    (2)

h_i = SA(Q W_i^Q, K W_i^K, V W_i^V),    (3)

where T is the attended question feature, W_i^Q, W_i^K, W_i^V are projection matrices, W_h ∈ R^{(t·d_h)×d}, and d_h is the dimension of each head.

The essential component of the CAIM is the image encoder containing cross attention, which promotes the querying of critical visual information according to the question and benefits cross-modal information interaction.

Visual Information Entropy Calibration

In order to maintain the consistency of visual information entropy to calibrate the feature distribution, we propose a new Visual Information Entropy Calibration Module (VIECM), which contains two sub-modules: the multimodal feature adaptive fusion module and the information consistency loss module, as illustrated in Figure 3. The features generated by CAIM are fed into the multimodal feature adaptive fusion module to learn the multimodal features and the later visual information entropy. The original visual features extracted by the Swin Transformer are fed into the information consistency loss module to compute the former visual information entropy.
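A minimal numpy sketch of the scaled dot-product attention of Eq. (1) and the multi-head combination of Eqs. (2)-(3); the function names and the list-of-matrices representation of the per-head projections are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_attention(Q, K, V):
    # Eq. (1): SA(Q, K, V) = softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def multi_head_self_attention(Q, K, V, Wq, Wk, Wv, Wh):
    # Eqs. (2)-(3): project into one subspace per head, attend independently,
    # then concatenate the head outputs and mix them with Wh.
    heads = [scaled_dot_attention(Q @ wq, K @ wk, V @ wv)
             for wq, wk, wv in zip(Wq, Wk, Wv)]
    return np.concatenate(heads, axis=-1) @ Wh
```

With Q of shape (m, d), K and V of shape (n, d), t per-head projections of shape (d, d_h) and Wh of shape (t·d_h, d), the output has shape (m, d), matching the dimensions stated after Eq. (3).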
Finally, the consistency loss is calculated from the former and later visual information entropy.

Multimodal Feature Adaptive Fusion: Few-shot learning methods (Chen et al. 2021) usually perform average pooling on the corresponding features to compute the similarity between the support set and the query set, which often omits critical information. The information represented by the features F ∈ R^{n×d} differs considerably along the n token dimension. Therefore, we propose a multimodal adaptive feature fusion module ξ(·; θ_ξ), which assigns reliable weights to the features along each of the n dimensions.

Figure 3: The illustration of the Visual Information Entropy Calibration Module.

The visual features F^I ∈ R^{n×d} inferred by the CAIM interact deeply with the question features in the cross attention. After that, the adaptive fusion weight α^I is obtained through a multi-layer perceptron and the softmax function. The visual features are then summed according to their relative weights to generate a flattened visual vector f^I ∈ R^{1×d}:

F^I = Norm(MHSA(F^I, F^Q, F^Q) + F^I),    (4)

α^I = softmax(MLP(F^I)),    (5)

f^I = Σ_{j=1}^{n} α^I_j ⊙ F^I_j.    (6)

The flattened question features f^Q = ξ(F^Q; θ_ξ) and the visual features are concatenated to compute the similarity. The weights α^I ∈ R^{n×1} generated in the flattening stage represent the distribution of the visual features after reasoning. Features with high weights have strong representation capabilities, so by adjusting the weights we can obtain an accurate feature distribution. To this end, we convert them into the visual information entropy δ_l and feed it into the information consistency loss module:

δ_l = ∆(α^I) = Σ_{i=1}^{n} α^I_i log(α^I_i).    (7)

Information Consistency Loss: We have discussed the discrepancies in information distribution at different stages of the inference network. In order to close this gap, we use a loss function L_e in the information consistency loss module to constrain the information distribution of the two stages.
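The adaptive fusion of Eqs. (5)-(6) and the visual information entropy of Eq. (7) can be sketched as follows; `score_mlp` stands in for the paper's MLP and is an assumed callable mapping an (n, d) feature matrix to n scalar scores:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def adaptive_fusion(F_I, score_mlp):
    # Eqs. (5)-(6): one scalar score per token row of F_I, softmax-normalized
    # into weights alpha, then a weighted sum flattens F_I into one vector.
    alpha = softmax(score_mlp(F_I).reshape(-1))   # (n,)
    f = (alpha[:, None] * F_I).sum(axis=0)        # (d,)
    return f, alpha

def visual_information_entropy(alpha):
    # Eq. (7): delta = sum_i alpha_i * log(alpha_i); note this is the negative
    # of the Shannon entropy, following the sign convention of the paper.
    return float(np.sum(alpha * np.log(alpha)))
```

For uniform weights over n tokens this entropy equals -log(n), its minimum magnitude of concentration; sharply peaked weights drive it toward 0.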
In the multimodal feature adaptive fusion module, we calculate the later visual information entropy δ_l for information distribution alignment. In the information consistency loss module, we generate adaptive weights from the original visual features F^OI ∈ R^{n×d} according to the question features and convert them into the former visual information entropy by ∆(·):

δ_f = ∆(softmax(MLP(W_I F^OI ⊙ W_Q F^Q))),    (8)

where W_I and W_Q are projection matrices. Our method is trained to minimize the difference in visual information entropy between the two stages of inference to reinforce the convergence of the feature distribution. We define the information consistency loss as the squared difference between the later and former visual information entropy, denoted L_e:

L_e = Σ_{i=1}^{N} (δ_l − δ_f)^2,    (9)

L_t = −Σ_{i=1}^{N} Σ_{j=1}^{C} y_{ij} log(ŷ_{ij}) + λ_t L_e,    (10)

where λ_t trades off the strength of the information distribution alignment, and L_t is the joint loss function.

Inference Enhancement Pre-training

Some few-shot learning methods (Liu et al. 2021a) pre-train the image encoder on the base classes to enhance the representation ability of features. In few-shot VQA, it is difficult to achieve significant gains just by pre-training the visual encoder: what distinguishes few-shot VQA from other few-shot learning tasks is that it requires the network to interact across modalities. We design an inference enhancement pre-training strategy, as illustrated in Figure 4, to enhance the fine-grained interaction and generate better features.

Figure 4: The illustration of the Inference Enhancement Pre-training. The components of the same color share parameters.

We divide the whole CDCIN into two parts: the feature extractor ζ(·; θ_ζ) and the inference network µ(·; θ_µ). In the pre-training stage, we adopt the traditional classification setting of VQA to train the inference network.
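The information consistency loss of Eq. (9) and the joint loss of Eq. (10) reduce to a few lines; the value `lambda_t = 0.1` is an illustrative default, not the paper's tuned trade-off:

```python
import numpy as np

def information_consistency_loss(delta_later, delta_former):
    # Eq. (9): squared difference between the later and former visual
    # information entropies, summed over the N episode samples.
    dl, df = np.asarray(delta_later), np.asarray(delta_former)
    return float(np.sum((dl - df) ** 2))

def joint_loss(y_onehot, y_prob, delta_later, delta_former, lambda_t=0.1):
    # Eq. (10): cross-entropy over the query predictions plus the weighted
    # consistency term.
    ce = -float(np.sum(y_onehot * np.log(y_prob)))
    return ce + lambda_t * information_consistency_loss(delta_later, delta_former)
```

When the two entropies agree, the consistency term vanishes and the joint loss reduces to the classification cross-entropy.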
For a given training set Υ_train = {(I_i, Q_i, A_i) | 1 ≤ i ≤ n}, the target is to train the parameters θ of CDCIN to predict the answers A, taking the images I and questions Q as input:

θ = argmax_θ Σ_{i=1}^{n} log P(y_i = A_i | f(I_i, Q_i | θ)).    (11)

The parameters of the inference network µ(·; θ_µ) in the meta-learning stage are shared with those of the pre-training stage, while the parameters of the feature extractor ζ(·; θ′_ζ) are not. This avoids overfitting the feature extractor during the pre-training stage and preserves the reasoning ability of the network.

Experiments

To evaluate the effectiveness of CDCIN for few-shot VQA, we conduct a series of experiments on datasets built from the widely used Toronto COCO-QA (Ren, Kiros, and Zemel 2015), Visual Genome-QA (Krishna et al. 2017) and VQA v2 (Goyal et al. 2017), including quantitative analysis, qualitative analysis and ablation studies.

Dataset and Implementation Details

Datasets. Before training the network, we preprocess Toronto COCO-QA, Visual Genome-QA and VQA v2 for few-shot VQA. Different from previous works (Dong et al. 2018; Yin et al. 2021), we fully consider the imbalance problem of the datasets and construct three balanced few-shot datasets, named FS COCO-QA, FS VG-QA and FS VQA. We abide by the following rules to clean the data: (1) each word occurs at least 3 times; (2) the number of samples in each class is more than 30 and less than 60; (3) no image is duplicated across example pairs. Finally, we randomly select 60% of the final set as the training set, 20% as the validation set, and the rest as the test set. Details of the datasets and implementation can be found in the supplementary materials.

Comparison Experiments

To demonstrate the effectiveness of the proposed CDCIN, we compare it with several few-shot VQA methods. The results are shown in Tables 1 and 2. All experiments in Table 1 were conducted on the datasets cleaned by (Yin et al. 2021).
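The three cleaning rules and the 60/20/20 split described in the Datasets paragraph above can be sketched as follows; the tuple-based sample format and the order in which the rules are applied are assumptions for illustration:

```python
import random
from collections import Counter, defaultdict

def clean_and_split(samples, seed=0):
    """Apply the three cleaning rules and the 60/20/20 split.

    `samples` is assumed to be a list of (image_id, question_words, answer)
    triplets; the paper's preprocessing may apply the rules differently.
    """
    word_counts = Counter(w for _, words, _ in samples for w in words)
    # Rule (1): every word must occur at least 3 times.
    kept = [s for s in samples if all(word_counts[w] >= 3 for w in s[1])]
    # Rule (3): no duplicated images across example pairs.
    seen, unique = set(), []
    for s in kept:
        if s[0] not in seen:
            seen.add(s[0])
            unique.append(s)
    # Rule (2): keep classes with more than 30 and fewer than 60 samples.
    by_answer = defaultdict(list)
    for s in unique:
        by_answer[s[2]].append(s)
    final = [s for group in by_answer.values()
             if 30 < len(group) < 60 for s in group]
    rng = random.Random(seed)
    rng.shuffle(final)
    n = len(final)
    return final[:int(0.6 * n)], final[int(0.6 * n):int(0.8 * n)], final[int(0.8 * n):]
```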
We compare the performance of the proposed CDCIN with existing few-shot VQA methods, such as FPAIT (Dong et al. 2018) and HGAT (Yin et al. 2021). From the experimental results, the CDCIN outperforms all the methods in Table 1. Compared with HGAT, the state-of-the-art algorithm, the accuracy of our method on Toronto COCO-QA improves by 15.95%, 13.77%, 21.17%, and 20.99% under the 5-way 1-shot, 5-way 5-shot, 10-way 1-shot and 10-way 5-shot settings, respectively. The performance on Visual Genome-QA improves by 6.68%, 7.44%, 11.79% and 16.66%. The excellent performance on the two benchmark datasets shows that our method effectively implements multimodal information interaction and captures the commonality between modalities. The visual information entropy alignment enables the model to obtain an accurate spatial distribution of visual features, which enhances the filtering of irrelevant information, thus converging the multimodal feature distribution and improving the classification performance.

We also investigate the impact of different visual backbones on the performance of CDCIN, as shown in Table 1. Three visual feature extractors are used in the CDCIN, i.e., ResNet12, ViT-S, and Swin-T. Among them, Swin-T and ViT-S perform better than ResNet12, because the images input to Swin Transformer-Tiny and Vision Transformer-Small have resolution 224 × 224, much larger than those for ResNet, which are resized to 84 × 84.

Table 1: Comparison of accuracy on Toronto COCO-QA and Visual Genome-QA. The ⋆ indicates that we extend these few-shot learning methods to few-shot VQA, and the bolded data indicate the best results under this experimental setup.

Method                                    Toronto COCO-QA                  Visual Genome-QA
                                          5-way           10-way           5-way           10-way
                                          1-shot  5-shot  1-shot  5-shot   1-shot  5-shot  1-shot  5-shot
FPAIT-CNN (Dong et al. 2018)              59.38   71.92   45.11   60.20    75.49   79.12   61.66   67.62
FPAIT+CLT (Dong et al. 2018)              60.61   72.17   46.37   60.92    75.05   79.28   60.82   67.48
Relation Net (Sung et al. 2018)           61.75   71.89   45.60   60.13    77.21   80.72   63.14   68.10
EGNN (Kim et al. 2019)                    62.21   73.41   46.99   60.01    77.67   83.26   64.07   70.87
HGAT-Res12-Res12 (Yin et al. 2021)        63.13   75.41   48.10   61.50    79.56   86.10   66.62   72.13
GCN-Res12⋆ (Satorras and Estrach 2018)    64.20   76.85   51.36   66.12    69.14   79.49   56.91   70.12
FEAT-Res12⋆ (Ye et al. 2020)              62.67   75.18   49.53   64.56    66.95   78.12   55.65   69.36
ProtoNet-Res12⋆ (Chen et al. 2021)        63.08   78.40   50.74   67.63    70.16   82.82   59.00   75.61
CDCIN-Res12                               72.26   84.18   59.70   75.65    83.69   91.89   74.84   85.72
CDCIN-Vit                                 76.52   87.44   65.49   79.46    85.83   93.10   78.18   88.25
CDCIN-Swin                                79.08   89.48   69.27   82.49    86.24   93.54   78.41   88.79

Table 2: Comparison of accuracy on FS COCO-QA, FS VG-QA and FS VQA. The ⋆ indicates that we extend these few-shot learning methods to few-shot VQA, and the bolded data indicate the best results under this experimental setup.

Method                                    FS COCO-QA                       FS VG-QA                         FS VQA
                                          5-way           10-way           5-way           10-way           5-way           10-way
                                          1-shot  5-shot  1-shot  5-shot   1-shot  5-shot  1-shot  5-shot   1-shot  5-shot  1-shot  5-shot
FEAT-Res12⋆ (Ye et al. 2020)              59.90   72.62   46.81   61.58    69.65   80.39   57.33   70.70    60.88   74.03   50.13   64.97
GCN-Res12⋆ (Satorras and Estrach 2018)    61.84   74.72   47.69   62.26    70.71   81.63   58.47   71.24    63.20   76.25   51.81   67.05
MatchingNet-Res12⋆ (Vinyals et al. 2016)  61.50   75.36   47.32   64.05    70.45   83.87   57.96   75.14    63.07   80.74   51.01   72.68
ProtoNet-Res12⋆ (Chen et al. 2021)        61.64   75.76   47.52   64.83    71.19   83.97   60.06   74.73    65.80   80.51   54.59   73.41
CDCIN-Res12                               66.11   79.43   52.36   68.83    80.52   89.35   70.22   82.16    78.75   88.85   69.50   82.43
CDCIN-Vit                                 71.42   83.77   59.48   74.21    82.84   90.67   72.88   83.47    80.01   89.41   70.58   83.04
CDCIN-Swin                                74.30   86.02   63.12   77.94    83.43   91.59   73.79   85.38    80.23   90.14   71.88   84.67
Swin Transformer uses shifted-window self-attention to reduce computational cost and reinforce the local receptive field, making the accuracy of CDCIN-Swin 2.88%, 2.25%, 3.64%, and 3.73% higher than that of CDCIN-Vit. In order to eliminate the effect of data imbalance, we carefully cleaned the data of the three benchmark datasets and constructed three balanced datasets, as introduced in the section "Dataset and Implementation Details". We reproduced several strong few-shot learning methods and extended them to few-shot VQA. The experimental results are shown in Table 2, from which we draw two conclusions. (1) The performance of all models degrades on the cleaned datasets. The accuracy of CDCIN-Swin drops by 4.78%, 3.46%, 6.15%, and 4.55% under the four experimental settings on COCO-QA. The reason is that we balanced the data classes and constrained the sample number of each class to at most 60. In this way, examples of a certain category are no longer frequently sampled, which raises the demand on the inference capability for these kinds of questions. (2) Our proposed CDCIN still outperforms all other methods on the balanced datasets and reaches the state of the art. Compared to ProtoNet, which performs best among all reproduced methods equipped with ResNet12, the accuracy of CDCIN-Res12 increases by 4.47%, 3.67%, 4.84%, and 4.00% under the four experimental settings on FS COCO-QA. These gains demonstrate that the CDCIN can build deep interactions between modalities to improve inference performance. Moreover, our best model, CDCIN-Swin, improves the accuracy by 12.66%, 10.26%, 15.6%, and 13.11% under the four settings.

Ablation Study

We conduct a series of ablation experiments to verify the effectiveness of each proposed module of the CDCIN. Table 3 presents the results of the ablation experiments on Toronto COCO-QA, using ResNet-12 to extract visual features.
The baseline is a simple network that only consists of a visual encoder and a question encoder, i.e., Case 1. According to Table 3, all the proposed components and methods help to improve the accuracy of few-shot VQA. Case 2 is equipped with CAIM, which provides multimodal fine-grained information interaction for the baseline and enables the model to learn to reason. Therefore, compared with Case 1, the accuracy of Case 2 under the four experimental settings increases by 2.86%, 1.29%, 2.58%, and 1.41%. On the basis of Case 2, VIECM is introduced to form Case 3, achieving further improvements of 3.98%, 2.76%, 4.38%, and 4.38%. This shows that VIECM maintains the consistency between the pre- and post-inference information distributions, calibrating the feature distribution and strengthening inference. We also apply the traditional VQA paradigm to pre-train CAIM and VIECM on the base classes of Toronto COCO-QA and fine-tune the pre-trained parameters, namely Case 5. This model improves accuracy by 7.24%, 4.13%, 6.98%, and 5.42%, which confirms that our proposed pre-training strategy effectively preserves the reasoning ability learned during the pre-training stage.

Table 3: Ablation study of accuracy on Toronto COCO-QA. The bolded data indicate the best results under this experimental setup.

Case  CAIM  VIECM  Pre    5-way 1-shot  5-way 5-shot  10-way 1-shot  10-way 5-shot
1                         63.08         78.40         50.74          67.63
2     ✔                   65.94         79.69         53.32          69.04
3     ✔     ✔             69.92         82.45         57.70          73.42
4     ✔            ✔      70.32         82.35         57.72          73.05
5     ✔     ✔      ✔      72.26         84.18         59.70          75.65

Qualitative Analysis

To clearly demonstrate how the proposed CDCIN calibrates the feature distribution, we visualize the multimodal feature distribution before and after inference. Figure 5 shows an example of the feature distribution of pre- and post-inference, taken from a 5-way 5-shot VQA task on the FS VQA test set.
Figure 5(a) shows the multimodal feature distribution before passing through the inference network. Without deep interaction, the CDCIN only learns the common representations of similar samples and fails to capture the critical information accurately, making the boundaries between feature distributions blurred. Figure 5(b) shows the distribution after the model has completed the multimodal information interaction and calibrated the feature distribution. The constrained features converge toward their prototypes since the model localizes them to the important information, enhancing the classification performance.

The main target of CDCIN is to achieve feature distribution calibration by aligning the information entropy between pre- and post-inference. Figure 5 also shows the visual information entropy visualization of two test samples from the FS VQA dataset. "Base" refers to the model without the VIECM. While "Base" can capture essential information in the image (e.g., the body of the motorcycle), it still suffers from other redundant information (e.g., trails and children). Our proposed method, which calibrates the information distribution, not only retains the original key information but also searches for auxiliary information (e.g., the wheels of the motorcycle). This allows the CDCIN to dynamically adjust the feature distribution according to the information calibration, thus improving the classification performance.

Figure 5: Panel I shows the feature distribution of a few samples under the 5-way 5-shot setting; "⋆" marks the prototype of each class and "o" marks the feature of each query sample. Panel II visualizes the visual information entropy generated by CDCIN-Swin.
Conclusion
In this paper, we propose a cross-modal feature distribution calibration inference network for few-shot VQA, in which a novel visual information entropy is proposed to represent the spatial distribution of visual information and to calibrate the distribution of multimodal features. Initially, visual information entropy is obtained by jointly computing the original visual features and question queries, representing the visual distribution before any interaction with textual features. Later, visual information entropy is generated in the multimodal feature adaptive fusion module, representing the result of inference. Both kinds of visual information entropy are sent to the information consistency loss module for distribution alignment and feature distribution calibration, enabling more accurate answer prediction. We conduct extensive experiments on three benchmark datasets and achieve excellent performance, surpassing the state-of-the-art methods by a large margin.

Acknowledgments
This study was funded by the Natural Science Foundation of Shanghai, China (grant number 22ZR1418400).
Robust 3D Tracking with Quality-Aware Shape Completion
Jingwen Zhang1*, Zikun Zhou2*†, Guangming Lu1, Jiandong Tian3, Wenjie Pei1†
1Harbin Institute of Technology, Shenzhen 2Peng Cheng Laboratory 3Shenyang Institute of Automation, Chinese Academy of Sciences
{jingwenz1022, zhouzikunhit}@gmail.com, luguangm@hit.edu.cn, tianjd@sia.cn, wenjiecoder@outlook.com

Abstract
3D single object tracking remains a challenging problem due to the sparsity and incompleteness of point clouds. Existing algorithms attempt to address these challenges via two strategies. The first is to learn dense geometric features from the captured sparse point cloud. Nevertheless, this is quite a formidable task, since the learned dense geometric features carry high uncertainty in depicting the shape of the target object. The other strategy is to aggregate the sparse geometric features of multiple templates to enrich the shape information, a routine solution in 2D tracking. However, aggregating coarse shape representations can hardly yield a precise shape representation. Unlike 2D pixels, 3D points from different frames can be directly fused by coordinate transform, i.e., shape completion. Considering this, we propose to construct, by shape completion, a synthetic target representation composed of dense and complete point clouds that depict the target shape precisely, for robust 3D tracking. Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions. It enables us to effectively construct and leverage the synthetic target representation. We also develop a voxelized relation modeling module and a box refinement module to improve tracking performance. Favorable performance against state-of-the-art algorithms on three benchmarks demonstrates the effectiveness and generalization ability of our method.
*These authors contributed equally. †Zikun Zhou and Wenjie Pei are corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Different methods for addressing the challenges of sparsity and incompleteness. (a) Learning dense geometric features based on sparse points, which is a formidable task as the learned dense geometric features carry high uncertainty. (b) Aggregating the sparse geometric features of multiple templates, which is a sub-optimal solution as combining coarse shape representations can hardly obtain a precise shape. (c) Our method, which performs shape completion by adaptively fusing the real points of the target object from multiple frames to depict its shape precisely.

Introduction
3D object tracking in LiDAR point clouds aims to predict the target position and orientation in subsequent frames, given the initial state of the target object. Existing 3D trackers (Giancola, Zarzar, and Ghanem 2019; Qi et al. 2020; Hui et al. 2022; Shan et al. 2021; Zhou et al. 2022) predominantly follow the Siamese tracking paradigm (Bertinetto et al. 2016; Zhou et al. 2021), which has achieved astonishing success in 2D tracking. The pioneering study SC3D (Giancola, Zarzar, and Ghanem 2019) calculates the feature similarities between the template and randomly sampled candidates to track the target. After that, many advanced techniques are introduced to improve 3D tracking performance, including end-to-end tracking frameworks (Qi et al.
2020), transformer-based relation modeling (Shan et al. 2021; Zhou et al. 2022), box-aware feature fusion (Zheng et al. 2021), and contextual information modeling (Xu et al. 2023a; Guo et al. 2022). Despite the great progress, many existing trackers (Qi et al. 2020; Hui et al. 2022; Xu et al. 2023a; Shan et al. 2021; Zhou et al. 2022) pay less attention to the sparsity and incompleteness of the point clouds, which are usually caused by limited sensor capabilities and self-occlusion. For example, 51% of the cars in the KITTI (Geiger, Lenz, and Urtasun 2012) dataset have fewer than 100 points. A typical challenging case is that only a few points of the template and the current target overlap due to the sparsity and incompleteness, making it quite difficult to accurately match the template with the real target. As a result, these methods struggle to discriminate the target in extremely sparse and incomplete point clouds. Several methods have been proposed to address the challenges of sparsity and incompleteness. SC3D (Giancola, Zarzar, and Ghanem 2019) and V2B (Hui et al. 2021) adopt a strategy of learning dense geometric features based on sparse point clouds, as shown in Figure 1 (a). However, such a learning task is quite formidable since the learned dense geometric features carry high uncertainty, and the trackers risk being misled by inaccurate dense features. TAT (Lan, Jiang, and Xie 2022) chooses to aggregate the sparse geometric features of multiple templates to obtain richer target shape information, a routine solution in 2D tracking (Wang et al. 2021; Zhang et al. 2019), as shown in Figure 1 (b). Although this strategy allows the tracker to take more target points into account, aggregating the coarse shape representations extracted from sparse points can hardly generate a precise shape representation.
Hence, this aggregation strategy is a sub-optimal solution for addressing the challenges of sparsity and incompleteness. Unlike 2D image pixels, sparse 3D point clouds from different frames can be efficiently fused through coordinate transform to create a dense point cloud. Therefore, we propose to perform shape completion by fusing the target points from historical frames to construct a synthetic target representation for 3D tracking, as illustrated in Figure 1 (c). Herein, the synthetic target representation consists of dense and complete point clouds depicting the shape of the target object precisely, enabling us to address the challenges of sparsity and incompleteness in 3D tracking. In light of this idea, we design a robust 3D tracking framework, termed SCVTrack, that maintains a synthetic target representation by shape completion and performs 3D tracking in a voxelized manner. The tricky part of SCVTrack is that the synthetic target representation is sensitive to inaccurate historical predictions, and a noisy synthetic target representation can easily lead to tracking collapse. To alleviate the adverse effect of historical prediction errors, we propose a quality-aware shape completion module, which selectively fuses only well-aligned source points into the synthetic target representation. The shape completion naturally causes an imbalance between the point clouds of the template and the search area in terms of point density, which increases the difficulty of learning to model the relation between the two sets of point clouds. Therefore, we perform tracking based on voxelized features instead of point features to eliminate this imbalance. Besides, the voxelized tracking framework enables us to explicitly exploit the neighbor relation between voxels and is more computationally efficient. We also introduce a box refinement approach that further exploits the synthetic target representation to refine the target box, effectively improving tracking performance.
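The core idea, fusing real target points across frames by coordinate transform, can be sketched as a minimal numpy illustration. The function names and the box-frame convention below are our assumptions, not the paper's code.

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation matrix about the up (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def fuse_target_points(frames):
    """frames: list of (points, box_center, box_yaw), with `points` an
    (N x 3) array of target points in world coordinates. Each frame's
    points are expressed in that frame's box coordinates
    (x_local = R(yaw)^T (p - center)), so observations from all frames
    stack into one dense, canonical target point cloud."""
    canonical = []
    for pts, center, yaw in frames:
        R = yaw_rotation(yaw)
        # row-vector form of R^T (p - center)
        canonical.append((np.asarray(pts) - np.asarray(center)) @ R)
    return np.concatenate(canonical, axis=0)
```

Stacking all observations in a shared box frame is what makes the synthetic representation denser than any single-frame observation of the target.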
To conclude, we make the following contributions: (1) we propose a voxelized 3D tracking framework with shape completion to effectively leverage the real target points from historical frames, addressing the challenges of sparsity and incompleteness; (2) we design a quality-aware shape completion mechanism that takes the quality of the points into account to alleviate the adverse effect of historical prediction errors; (3) we achieve favorable 3D tracking performance against state-of-the-art algorithms on three datasets, demonstrating the effectiveness of our method.

Related Work
3D object tracking. Early 3D trackers (Asvadi et al. 2016; Bibi, Zhang, and Ghanem 2016; Liu et al. 2018) based on RGB-Depth image pairs are vulnerable to lighting conditions that affect RGB imaging quality. Recently, 3D tracking based on point clouds has drawn much more attention, as point clouds are robust to illumination changes. Most existing point-cloud-based 3D trackers (Giancola, Zarzar, and Ghanem 2019; Qi et al. 2020; Fang et al. 2020; Zhou et al. 2022; Hui et al. 2022; Guo et al. 2022; Xu et al. 2023b) follow the Siamese tracking pipeline, which formulates 3D tracking as a template-candidate matching problem. Beyond the Siamese pipeline, M2T (Zheng et al. 2022) recently proposed a motion-centric tracking paradigm that directly predicts the target motion between two consecutive frames and achieves promising tracking performance. Despite this astonishing progress, the sparsity and incompleteness of 3D point clouds still plague these trackers. A typical existing strategy to address the sparsity and incompleteness challenges is to learn dense geometric features from the given sparse point clouds; SC3D (Giancola, Zarzar, and Ghanem 2019) and V2B (Hui et al. 2021) follow this strategy. However, such a learning task is quite challenging since the learned dense geometric features carry high uncertainty.
As a result, these two methods achieve only limited tracking performance. A recently proposed approach, TAT (Lan, Jiang, and Xie 2022), adopts a multi-frame point feature aggregation strategy to enrich the shape information. Although it can alleviate the effect of sparsity and incompleteness, it remains a sub-optimal solution, since aggregating the coarse shape representations extracted from sparse points can hardly generate a precise shape representation. Unlike the above methods, our method directly fuses the real target points from historical frames to construct a synthetic target representation for addressing the sparsity and incompleteness challenges.

Voxel-based 3D vision. Most existing 3D trackers (Giancola, Zarzar, and Ghanem 2019; Fang et al. 2020; Shan et al. 2021) follow the point-based deep learning paradigm (Li et al. 2018; Yang et al. 2020), performing tracking with unordered point-based features. The voxel-based learning paradigm (Liu et al. 2019; Qi et al. 2016; Zhou and Tuzel 2018; Yin, Zhou, and Krahenbuhl 2021) is another popular way to process point data and has been widely applied in 3D detection (Zhou and Tuzel 2018; Yan, Mao, and Li 2018; Lang et al. 2019; Yin, Zhou, and Krahenbuhl 2021). However, it has rarely been explored in 3D tracking, except for V2B (Hui et al. 2021) and MTM (Li et al. 2023). This paradigm assigns points to different voxel bins and extracts structured voxelized features from unordered point clouds. In this work, we resort to voxelized relation modeling to deal with the point-density imbalance caused by shape completion.
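As a minimal illustration of the voxel-based paradigm just described, assigning unordered points to voxel bins and pooling their features into a structured grid, consider the following numpy sketch. The function name and the simple mean pooling are our assumptions for illustration.

```python
import numpy as np

def voxelize_mean(points, feats, voxel_size, grid_min, grid_shape):
    """Scatter per-point features (N x C) into a dense voxel grid
    (D x H x W x C), averaging the features of points that fall into
    the same bin; empty bins stay zero. Real trackers stack 3D
    convolutions and BEV pooling on top of such a grid."""
    idx = np.floor((np.asarray(points) - grid_min) / voxel_size).astype(int)
    D, H, W = grid_shape
    grid = np.zeros((D, H, W, feats.shape[1]))
    count = np.zeros((D, H, W), dtype=int)
    for (i, j, k), f in zip(idx, feats):
        if 0 <= i < D and 0 <= j < H and 0 <= k < W:
            count[i, j, k] += 1
            # incremental mean update for this bin
            grid[i, j, k] += (f - grid[i, j, k]) / count[i, j, k]
    return grid
```

Because the output is a fixed-shape grid, it is insensitive to how many points land in each bin, which is exactly why voxelization helps when one input point cloud is much denser than the other.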
Figure 2: Overall framework of our SCVTrack, which mainly consists of a quality-aware shape completion module, a voxelized relation modeling module, and a box refinement module. It maintains a synthetic target representation $\mathcal{T}$ via quality-aware shape completion and performs 3D tracking in a voxelized manner. The red points in $\hat{P}_{t-1}$ denote those coming from $\mathcal{T}$.

Method
Problem Definition
Given the initial 3D box of the target object, 3D tracking aims to estimate the target box in each subsequent frame. A 3D box is parameterized by its center position ($xyz$ coordinate), orientation (heading angle $\theta$ around the up-axis), and size (width $w$, length $l$, and height $h$). The size of the target object, even for non-rigid objects like pedestrians and cyclists, remains approximately unchanged in 3D tracking. Thus, we only predict the translation $(\Delta x, \Delta y, \Delta z)$ and the rotation angle $\Delta\theta$ of the target object between two consecutive frames, and then obtain the 3D box $B_t$ at the $t$-th frame by transforming $B_{t-1}$ with the predicted translation and rotation.

Overall Tracking Framework
Figure 2 illustrates the overall framework of our SCVTrack. It mainly consists of the quality-aware shape completion, voxelized relation modeling, and box refinement modules.
SCVTrack maintains a synthetic target representation $\mathcal{T}$ and performs tracking with it between two consecutive frames. Herein, the synthetic target representation is composed of dense and complete point clouds depicting the target shape precisely. We construct it by adaptively fusing the points belonging to the target from historical frames. Suppose that the point clouds of the template and search area from two consecutive frames are denoted as $P_{t-1} \in \mathbb{R}^{N_{t-1} \times 3}$ and $P_t \in \mathbb{R}^{N_t \times 3}$, where $N_{t-1}$ and $N_t$ are the numbers of points. To localize the target in $P_t$, our SCVTrack first completes $P_{t-1}$ with the synthetic target representation $\mathcal{T}$ via quality-aware shape completion, yielding a completed template $\hat{P}_{t-1} \in \mathbb{R}^{N'_{t-1} \times 3}$ with dense target points. Note that $N'_{t-1}$ is usually much larger than both $N_{t-1}$ and $N_t$ due to the shape completion. With $\hat{P}_{t-1}$ and $P_t$, SCVTrack adopts a shared backbone to extract their point features $\hat{F}_{t-1} \in \mathbb{R}^{N'_{t-1} \times C}$ and $F_t \in \mathbb{R}^{N_t \times C}$ without downsampling, where $C$ is the feature dimension. Then SCVTrack voxelizes these point features and performs relation modeling between them to propagate the tracked target information from the template to the search area, generating the enhanced feature $\tilde{F}_t$. An MLP regression head is constructed on top of $\tilde{F}_t$ to predict a coarse box $\hat{B}_t$. SCVTrack then performs box refinement with the guidance of $\mathcal{T}$ to obtain the refined box $B_t$. After the tracking process, we use the new target points in $B_t$ to update the synthetic target representation $\mathcal{T}$ via quality-aware shape completion. Note that we opt to complete $P_{t-1}$ with $\mathcal{T}$ to obtain a dense template instead of directly using $\mathcal{T}$ as the template. The rationale behind this design is that the target state in $P_{t-1}$ is in general most similar to that in $P_t$, and completing $P_{t-1}$ with $\mathcal{T}$ not only yields a dense template but also leverages all target points in $P_{t-1}$ for tracking.
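Since the regression head predicts only inter-frame motion, turning the previous box into the current one is a simple pose update. The tuple layout below and the angle wrapping are our own conveniences for the sketch.

```python
import numpy as np

def apply_motion(box_prev, delta):
    """box_prev = (x, y, z, theta, w, l, h); delta = (dx, dy, dz, dtheta).
    The object size (w, l, h) is kept fixed, matching the problem
    definition: only translation and rotation are predicted per frame."""
    x, y, z, th, w, l, h = box_prev
    dx, dy, dz, dth = delta
    th_new = (th + dth + np.pi) % (2.0 * np.pi) - np.pi  # wrap to (-pi, pi]
    return (x + dx, y + dy, z + dz, th_new, w, l, h)
```

Keeping the size fixed across frames is what allows the tracker to regress only four scalars per frame instead of a full seven-parameter box.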
Quality-Aware Shape Completion
The quality-aware shape completion module aims to adaptively fuse the source point cloud $P_{src}$ with the point cloud $P_{tgt}$ to be completed. For generating a dense template, $\mathcal{T}$ is treated as the source point cloud and $P_{t-1}$ as the point cloud to be completed. In turn, for updating $\mathcal{T}$, $P_t$ is treated as the source point cloud and $\mathcal{T}$ as the point cloud to be completed. Figure 3 shows the shape completion process, taking the completion for generating a dense template as an example. In both completion processes, the source point cloud $P_{src}$ is obtained from predicted target states and thus inevitably contains noisy points. Directly using all points in $P_{src}$ for shape completion would lead to error accumulation and even tracking collapse. To address this issue, we propose to evaluate the quality of the point clouds and perform selective voxel-wise shape completion conditioned on the quality score.

Quality evaluator. To evaluate the quality of a point cloud, we design a quality evaluator consisting of a PointNet (Qi et al. 2017a) backbone and a three-layer MLP, which takes a point cloud as input and outputs a quality score. We formulate quality evaluation as a classification task. Specifically, we train the evaluator to differentiate dense, well-aligned point clouds from sparse, misaligned ones. To generate the required training samples, we first crop and center the points lying inside the object box from multiple frames.
Then we concatenate these points to generate a dense and well-aligned point cloud as the positive sample. The negative sample is obtained by adding random position disturbance during concatenation or by directly selecting a sparse point cloud from a certain frame. We use binary cross-entropy to train the quality evaluator. After training, the output logit is used as the quality score.

Figure 3: Illustration of the quality-aware shape completion module. $\uplus$ denotes the concatenation operation. This module performs selective voxel-wise shape completion based on the output of the quality evaluator.

Voxel-wise shape completion. With the quality evaluator, we design a voxel-wise shape completion strategy to selectively complete different parts of $P_{tgt}$ and alleviate the adverse effect of noisy points. To this end, we voxelize the 3D space and assign the points in $P_{tgt}$ and $P_{src}$ to the corresponding voxel bins, as shown in Figure 3. The points of $P_{tgt}$ and $P_{src}$ lying inside the $i$-th voxel bin are denoted by $V^i_{tgt}$ and $V^i_{src}$. Before shape completion, we first evaluate the quality of $P_{tgt}$, obtaining its quality score $S_{tgt}$ as a reference. To complete the shape in the $i$-th voxel, we concatenate $V^i_{src}$ with $V^i_{tgt}$, yielding a dense point cloud $V^i_{tmp}$ in the $i$-th voxel. We refer to the point cloud $P_{tgt}$ whose $i$-th voxel is replaced with $V^i_{tmp}$ as $P^i_{tmp}$. Then we evaluate the quality of $P^i_{tmp}$, obtaining a quality score $S^i_{tmp}$.
After that, we compare $S^i_{tmp}$ with $S_{tgt}$ to judge whether the above completion in the $i$-th voxel improves the quality of the point cloud $P_{tgt}$. Only when the quality is improved do we update the points in the $i$-th voxel with $V^i_{tmp}$. The above completion operation can be formulated as:

$$
V^i_{tmp} = V^i_{tgt} \uplus V^i_{src}; \quad
S^i_{tmp} = \phi_{quality}(P^i_{tmp}); \quad
V^i_{cmp} =
\begin{cases}
V^i_{tmp}, & \text{if } S^i_{tmp} > S_{tgt}; \\
V^i_{tgt}, & \text{else},
\end{cases}
\tag{1}
$$

where $\uplus$ denotes the concatenation operation, $\phi_{quality}$ refers to the quality evaluator, and $V^i_{cmp}$ denotes the points in the $i$-th voxel of the final completed point cloud $P_{cmp}$. Note that the voxel-wise shape completion can be done in a single forward propagation, as the completion operations for different voxels can be performed in parallel.

Voxelized Relation Modeling
Taking as input the point features $\hat{F}_{t-1}$ and $F_t$, relation modeling aims to propagate the target information from the previous frame to the current one, generating the enhanced feature $\tilde{F}_t$ for localizing the target. As mentioned above, we opt for voxelized relation modeling to eliminate the imbalance between $\hat{F}_{t-1}$ and $F_t$ and to exploit the neighbor relation explicitly. To this end, we first voxelize the point features and then perform relation modeling between the voxelized features, as shown in Figure 4.

Point feature voxelization. We convert the point features $\hat{F}_{t-1}$ and $F_t$ into the voxelized representations $\hat{F}^{vxl}_{t-1}$ and $F^{vxl}_t$, respectively, by averaging the features of the points lying inside the same voxel bin. Then we apply shared 3D convolution layers to aggregate the shape information in adjacent feature voxels to enhance the voxelized representations. Similar to (Hui et al. 2021), we perform max-pooling on these voxelized features along the z-axis to obtain the dense bird's eye view (BEV) features $\hat{F}^{bev}_{t-1} \in \mathbb{R}^{H \times W \times C}$ and $F^{bev}_t \in \mathbb{R}^{H \times W \times C}$, which alleviates the adverse effect of the empty voxels.

Relation modeling.
Considering that $P_{t-1}$ contains both target and background points, we introduce a learnable target mask $M_{t-1} \in \mathbb{R}^{H \times W \times C}$ to embed the target state information into $\hat{F}^{bev}_{t-1}$ before relation modeling. Technically, we introduce three learnable vectors indicating the three possible positional states of a voxel: lying inside the box, outside the box, or across the box boundary. Then we generate the mask $M_{t-1}$ according to the 2D projection box (along the z-axis) of the 3D box $B_{t-1}$. Inspired by recent advances (Ye et al. 2022; Zhou et al. 2023) in 2D tracking, we adopt an attention-based method to propagate the target information from $\hat{F}^{bev}_{t-1}$ to $F^{bev}_t$. As shown in Figure 2, a shared self-attention layer is first employed to model the intra-frame voxel relation. Then a cross-attention layer is used to model the cross-frame voxel relation, where the feature of the current frame serves as the query and the feature of the previous frame serves as the key and value. This process can be formulated as:

$$
\tilde{F}_t = \psi_{ca}\big(\psi_{sa}(\hat{F}^{bev}_{t-1} \oplus M_{t-1} \oplus E),\ \psi_{sa}(F^{bev}_t \oplus E)\big),
\tag{2}
$$

where $\psi_{sa}$ and $\psi_{ca}$ denote self-attention and cross-attention, respectively, $\oplus$ means element-wise summation, and $E$ refers to the position embedding. Note that we omit the flatten and reshape operations in Eq. 2, and this attention architecture is repeated $L$ times.

Figure 4: Illustration of the voxelized relation modeling and box refinement modules. $\oplus$ denotes element-wise summation.
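The self-then-cross attention pattern of Eq. 2 can be sketched with plain scaled dot-product attention. This is a single-head numpy simplification without the learned projections or the L-fold repetition of the actual layers.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention over flattened BEV tokens (T x C)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def relation_modeling(f_prev, f_cur, mask, pos):
    """One unparameterized round of Eq. 2: self-attention within each
    frame, then cross-attention with the current frame as query and the
    previous frame as key/value. All inputs are (HW x C) arrays."""
    a = attention(f_prev + mask + pos, f_prev + mask + pos, f_prev + mask + pos)
    b = attention(f_cur + pos, f_cur + pos, f_cur + pos)
    return attention(b, a, a)
```

The target mask is simply summed into the previous-frame tokens before attention, which is how the tracker tells the cross-attention which voxels carried the target.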
Box Refinement

The box refinement module aims to refine the coarse box $\tilde{B}_t$ with the guidance of the dense geometric information in $T$. To this end, we first fuse the dense points in $T$ into the coarse box $\tilde{B}_t$ by coordinate transform, obtaining a new point cloud $\hat{P}_t$ depicting the target object with dense points. The offset between the coarse box $\tilde{B}_t$ and the real target object will affect the smoothness of $\hat{P}_t$. Based on this principle, we deploy a PointNet backbone followed by an MLP on top of $\hat{P}_t$ to regress the above-mentioned offset and refine the target box.

End-to-End Model Learning

Our framework consists of two learnable parts: the quality evaluator and the remaining tracking model, which are trained separately. The quality evaluator is trained end-to-end as mentioned above. The tracking model is trained end-to-end with pairs of consecutive frames. We impose a smooth-L1 loss (Girshick 2015) on both the coarse and refined boxes to supervise the learning of the tracking model. Note that the synthetic target representation used in tracking model learning is pre-calculated with the ground truth.

Experiments

Experimental Setup

Implementation details. We use a modified PointNet++ (Qi et al. 2017b) as our backbone, which is tailored to contain three set-abstraction (SA) layers and three feature propagation (FP) layers. In the three SA layers, the sampling radii are set to 0.3, 0.5, and 0.7, and the points are randomly sampled to 512, 256, and 128 points, respectively. Similar to (Zheng et al. 2022), we enlarge the target box predicted in the previous frame by 2 meters to obtain the search area in the current frame. We utilize the targetness prediction operation (Zheng et al. 2022) as a pre-processing step in our tracking framework. At the beginning of tracking, we use the target points lying inside the given box to initialize $T$.

Benchmarks and metrics. We evaluate our algorithm on KITTI (Geiger, Lenz, and Urtasun 2012), NuScenes (Caesar et al.
2020), and Waymo Open Dataset (WOD) (Sun et al. 2020). KITTI consists of 21 training and 29 test sequences. We split the training set into train/validation/test splits as the test labels are inaccessible, following (Giancola, Zarzar, and Ghanem 2019; Zheng et al. 2022). NuScenes comprises 1000 scenes, which are divided into train/validation/test sets. Following (Zheng et al. 2021, 2022), we use the "train track" split of the train set to train our model and test it on the validation set. WOD contains 1150 scenes, of which 798/202/150 scenes are used for training/validation/testing, respectively. We evaluate our method on WOD following two protocols: Protocol I (Xu et al. 2023a), where we directly test the KITTI pre-trained model on the validation set to evaluate generalization; Protocol II (Zheng et al. 2022), in which the model is trained on the training set and evaluated on the validation set. We use success and precision as metrics and report the Area Under Curve (AUC).

Ablation Studies

To analyze the effect of each component in SCVTrack, we conduct ablation experiments with six variants of SCVTrack: 1) the baseline (BL) model, which removes the shape completion mechanism and box refinement module from SCVTrack; 2) the variant that adds a naive shape completion mechanism, which does not consider the point cloud quality, to BL; 3) the variant performing quality-aware shape completion based on BL; 4) the variant that adopts the box refinement module based on the second variant; 5) our intact model; 6) the variant performing tracking with point features instead of voxelized features. This variant directly uses the attention-based method to process the point features and adopts an MLP head to regress the target box based on the output point features. Table 1 presents the experimental results.

Effect of the shape completion mechanism.
The comparisons between the first three variants show that both the naive shape completion and quality-aware shape completion mechanisms can boost tracking performance. This manifests that performing shape completion in the raw point cloud space is an effective way to address the challenges of sparsity and incompleteness.

Effect of the quality evaluator. The performance gaps between the second and third variants and between the fourth and fifth variants demonstrate that the quality evaluator can substantially improve the quality of the synthetic target representation and further improve tracking performance.

Variants           Car           Cyclist       Van
1) BL              63.0 | 78.6   72.5 | 93.3   51.9 | 68.1
2) BL+NSC          65.2 | 78.0   73.6 | 93.5   54.9 | 70.1
3) BL+QASC         66.7 | 79.2   75.1 | 93.8   56.1 | 71.9
4) BL+NSC+BR       67.0 | 79.6   75.3 | 93.9   57.8 | 72.1
5) BL+QASC+BR      68.7 | 81.9   77.4 | 94.4   58.6 | 72.8
6) Ours w/o Vox.   64.5 | 79.6   74.5 | 93.6   55.2 | 71.0

Table 1: Ablation study results on the car, cyclist, and van categories. BL refers to the baseline model. NSC denotes naive shape completion. QASC is quality-aware shape completion. BR refers to box refinement. Vox. means voxelization. The best and second-best scores are marked in bold and underline, respectively. Success | Precision are reported.
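The Success | Precision entries in the tables are AUC summaries of per-frame curves. A minimal sketch of this one-pass evaluation computation is given below; the exact threshold grids (IoU in [0, 1] for success, center error in [0, 2] m for precision) follow common practice in this literature and are assumptions here, not values stated in the paper.

```python
import numpy as np

def success_auc(ious, n_thresh=101):
    """AUC of the success curve: fraction of frames whose 3D IoU with the
    ground-truth box exceeds each overlap threshold in [0, 1]."""
    thresholds = np.linspace(0.0, 1.0, n_thresh)
    return float(np.mean([(ious > t).mean() for t in thresholds]))

def precision_auc(center_errors, max_dist=2.0, n_thresh=101):
    """AUC of the precision curve: fraction of frames whose box-center error
    is below each distance threshold in [0, max_dist] meters."""
    thresholds = np.linspace(0.0, max_dist, n_thresh)
    return float(np.mean([(center_errors < t).mean() for t in thresholds]))
```

For example, a tracker whose IoU is 0.5 in every frame scores a success of roughly 0.5, since it passes about half of the overlap thresholds.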
          Car           Pedestrian    Van           Cyclist
SC3D      41.3 | 57.9   18.2 | 37.8   40.4 | 47.0   41.5 | 70.4
P2B       56.2 | 72.8   28.7 | 49.6   40.8 | 48.4   32.1 | 44.7
LTTR      65.0 | 77.1   33.2 | 56.8   35.8 | 45.6   66.2 | 89.9
BAT       60.5 | 77.7   42.1 | 70.1   52.4 | 67.0   33.7 | 45.4
PTT       67.8 | 81.8   44.9 | 72.0   43.6 | 52.5   37.2 | 47.3
PTTR      65.2 | 77.4   50.9 | 81.6   52.5 | 61.8   65.1 | 90.5
V2B       70.5 | 81.3   48.3 | 73.5   50.1 | 58.0   40.8 | 49.7
CMT       70.5 | 81.9   49.1 | 75.5   54.1 | 64.1   55.1 | 82.4
STNet     72.1 | 84.0   49.9 | 77.2   58.0 | 70.6   73.5 | 93.7
M2T       65.5 | 80.8   61.5 | 88.2   53.8 | 70.7   73.2 | 93.5
TAT       72.2 | 83.3   57.4 | 84.4   58.9 | 69.2   74.2 | 93.9
CXT       69.1 | 81.6   67.0 | 91.5   60.0 | 71.8   74.2 | 94.3
MTM       73.1 | 84.5   70.4 | 95.1   60.8 | 74.2   76.7 | 94.6
MBPT      73.4 | 84.8   68.6 | 93.9   61.3 | 72.7   76.7 | 94.3
Ours      68.7 | 81.9   62.0 | 89.1   58.6 | 72.8   77.4 | 94.4

Table 2: Experimental results on KITTI.

Effect of the box refinement. Compared with the second and third variants, the fourth and fifth variants obtain large performance gains in the car and cyclist categories, respectively. This demonstrates the effectiveness of the box refinement guided by the synthetic target representation $T$.

Effect of the voxelized tracking pipeline. Compared with our intact model, performing relation modeling and tracking with the imbalanced point features instead of the voxelized features, i.e., the sixth variant, results in substantial performance drops on these three categories. This validates that our voxelized tracking pipeline can successfully deal with the aforementioned imbalance issue, whereas the point-feature-based tracking pipeline cannot.

Quantitative Results

The trackers involved in the comparison include SC3D (Giancola, Zarzar, and Ghanem 2019), P2B (Qi et al. 2020), LTTR (Cui et al. 2021), PTT (Shan et al. 2021), PTTR (Zhou et al. 2022), V2B (Hui et al. 2021), BAT (Zheng et al. 2021), STNet (Hui et al. 2022), M2T (Zheng et al. 2022), CMT (Guo et al. 2022), TAT (Lan, Jiang, and Xie 2022), CXT (Xu et al. 2023a), MTM (Li et al.
2023), and MBPT (Xu et al. 2023b). We discuss the results per dataset.

          Car ≤150      Pedestrian ≤100   Van ≤150      Cyclist ≤100
SC3D      37.9 | 53.0   20.1 | 42.0       36.2 | 48.7   50.2 | 69.2
P2B       56.0 | 70.6   33.1 | 58.2       41.1 | 46.3   24.1 | 28.3
BAT       60.7 | 75.5   48.3 | 77.1       41.5 | 47.4   25.3 | 30.5
V2B       64.7 | 77.4   50.8 | 74.2       46.8 | 55.1   30.4 | 37.2
M2T       61.7 | 75.9   58.3 | 85.4       50.2 | 68.5   68.9 | 91.2
Ours      64.8 | 77.7   60.1 | 88.6       52.8 | 70.5   70.2 | 92.8

Table 3: Experimental results on sparse scenes of KITTI.

          Vehicle       Pedestrian    Mean
BAT†      54.7 | 62.7   18.2 | 30.3   34.1 | 44.4
V2B†      57.6 | 65.9   23.7 | 37.9   38.4 | 50.1
STNet†    59.7 | 68.0   25.5 | 39.9   40.4 | 52.1
TAT†      58.9 | 66.7   26.7 | 42.2   40.7 | 52.8
CXT†      57.1 | 66.1   30.7 | 49.4   42.2 | 56.7
Ours†     61.3 | 69.8   32.2 | 50.0   44.8 | 58.6

Table 4: Experimental results of different methods on WOD following Protocol I. † denotes that the model is pre-trained on KITTI and directly evaluated on the WOD validation split. These tracking results measure the generalization ability.

KITTI. Table 2 reports the experimental results on KITTI. V2B and TAT opt for dense geometric feature learning and sparse feature aggregation to address the sparsity and incompleteness challenges, respectively. Compared with them, our algorithm achieves better tracking performance in most categories. Our SCVTrack also outperforms M2T in all categories, demonstrating its effectiveness. CXT and MBPT are two recently proposed trackers with sophisticated transformer blocks for relation modeling and target localization and perform better than our approach.

WOD. We first evaluate our SCVTrack on WOD following Protocol I to evaluate its generalization ability. Table 4 reports the experimental results. Compared with TAT and CXT, our SCVTrack achieves performance gains of 2.6% in mean success and 1.9% in mean precision. This comparison shows that our method obtains stronger generalization ability. We also evaluate SCVTrack on WOD following Protocol II.
As shown in Table 5, our SCVTrack achieves the best performance in both the vehicle and pedestrian categories.

NuScenes. Table 5 reports the experimental results on NuScenes. Our SCVTrack achieves the best success and precision scores in all five categories. Compared with M2T, our SCVTrack achieves performance gains of 2.9% in mean success and 2.0% in mean precision, demonstrating the effectiveness of our SCVTrack.

Quantitative results in sparse scenes. To investigate the effectiveness of our method in sparse scenes, we follow V2B to evaluate the performance in the sparse scenes (Car ≤150, Pedestrian ≤100, Van ≤150, and Cyclist ≤100) of KITTI, as shown in Table 3. Our SCVTrack performs favorably against the other methods in all categories.

Figure 5: Qualitative comparisons between the variants w/ and w/o shape completion. Blue and red points refer to the raw points and fused points in each frame. We can observe that the shape completion mechanism helps SCVTrack successfully track the target in the extremely sparse scene, even though the synthetic target representation is not satisfactorily dense and complete.
NuScenes
       Car           Pedestrian    Truck         Trailer       Bus           Mean
SC3D   22.3 | 21.9   11.3 | 12.7   30.7 | 27.7   35.3 | 28.1   29.4 | 24.1   20.7 | 20.2
P2B    38.8 | 43.2   28.4 | 52.2   43.0 | 41.6   49.0 | 40.1   33.0 | 27.4   36.5 | 45.1
BAT    40.7 | 43.3   28.8 | 53.3   45.3 | 42.6   52.6 | 44.9   35.4 | 28.0   38.1 | 45.7
M2T    55.9 | 65.1   32.1 | 60.9   57.4 | 59.5   57.6 | 58.3   51.4 | 51.4   49.2 | 62.7
Ours   58.9 | 67.7   34.5 | 61.5   60.6 | 61.4   59.5 | 60.1   54.3 | 53.6   52.1 | 64.7

Waymo Open Dataset
       Vehicle       Pedestrian    Mean
SC3D   –             –             –
P2B    28.3 | 35.4   15.6 | 29.6   24.2 | 33.5
BAT    35.6 | 44.2   22.1 | 36.8   31.2 | 41.8
M2T    43.6 | 61.6   42.1 | 67.3   43.1 | 63.5
Ours   46.4 | 63.0   44.1 | 68.2   45.7 | 64.7

Table 5: Experimental results of different methods on NuScenes and WOD. These methods are trained on the training split of the NuScenes or WOD benchmark and evaluated on the corresponding validation split.

Component           Time
Pre-process         1.3 ms
Shape completion    11.1 ms
PointNet++          10.6 ms
Voxelization        1.1 ms
Relation modeling   5.6 ms
Box refinement      1.7 ms

Table 6: Inference time of each component of our model.

Tracking speed. We measure the average tracking speed on the Car category of KITTI on an RTX3090 GPU, which is about 31 FPS. The average inference time per frame is 31.4 ms. Table 6 reports the detailed time consumption.

Qualitative Results

To further investigate the effectiveness of the shape completion mechanism, we visualize the tracking results of our SCVTrack and the baseline model, along with the synthetic target representation, in an extremely sparse scene, as shown in Figure 5. Although the synthetic target representation is not satisfactorily dense due to the extremely sparse point clouds, our SCVTrack keeps tracking the target successfully. By contrast, the baseline model loses the target when the point clouds become extremely sparse and incomplete. Figure 6 compares the tracking results of our method and M2T (Zheng et al. 2022) in a sparse scene. They can both track the target at the beginning.
M2T loses the target at about the 20th frame (the target point cloud becomes quite sparse), while our method keeps tracking the target accurately.

Figure 6: Results of M2T and ours on a sparse scene.

Conclusion

In this work, we have presented a robust voxelized tracking framework with shape completion, named SCVTrack. Our SCVTrack constructs a dense and complete point cloud depicting the shape of the target precisely, i.e., the synthetic target representation, through shape completion, and performs tracking with it in a voxelized manner. Specifically, we design a quality-aware shape completion mechanism, which can effectively alleviate the adverse effect of noisy historical predictions in shape completion. We also develop a voxelized relation modeling module and a box refinement module to improve tracking performance. The proposed SCVTrack achieves favorable performance against state-of-the-art algorithms on three popular 3D tracking benchmarks.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Grant NO. U2013210, 62006060, 62372133, 62176077), in part by the Guangdong Basic and Applied Basic Research Foundation (Grant NO. 2022A1515010306), in part by the Shenzhen Fundamental Research Program (Grant NO. JCYJ20220818102415032), in part by the Shenzhen Key Technical Project (NO. 2022N001, 2020N046), in part by the Guangdong International Science and Technology Cooperation Project (NO. 20220505), in part by the Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies (NO. 2022B1212010005), and in part by the Key Research Project of Peng Cheng Laboratory (NO. PCL2023A08).

References

Asvadi, A.; Girao, P.; Peixoto, P.; and Nunes, U. 2016. 3D object tracking using RGB and LIDAR data.
In 2016 IEEE 19th International Conference on Intelligent Transportation Systems, 1255–1260. IEEE. Bertinetto, L.; Valmadre, J.; Henriques, J. F.; Vedaldi, A.; and Torr, P. H. 2016. Fully-convolutional siamese networks for object tracking. In European Conference on Computer Vision Workshops, 850–865. Bibi, A.; Zhang, T.; and Ghanem, B. 2016. 3d part-based sparse tracker with automatic synchronization and registration. In IEEE Conference on Computer Vision and Pattern Recognition, 1439–1448. Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2020. nuscenes: A multimodal dataset for autonomous driving. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11621–11631. Cui, Y.; Fang, Z.; Shan, J.; Gu, Z.; and Zhou, S. 2021. 3d object tracking with transformer. arXiv preprint arXiv:2110.14921. Fang, Z.; Zhou, S.; Cui, Y.; and Scherer, S. 2020. 3dsiamrpn: An end-to-end learning method for real-time 3d single object tracking using raw point cloud. IEEE Sensors Journal, 21(4): 4995–5011. Geiger, A.; Lenz, P.; and Urtasun, R. 2012. Are we ready for autonomous driving? the kitti vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition, 3354–3361. IEEE. Giancola, S.; Zarzar, J.; and Ghanem, B. 2019. Leveraging shape completion for 3d siamese tracking. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1359–1368. Girshick, R. 2015. Fast r-cnn. In IEEE International Conference on Computer Vision, 1440–1448. Guo, Z.; Mao, Y.; Zhou, W.; Wang, M.; and Li, H. 2022. CMT: Context-Matching-Guided Transformer for 3D Tracking in Point Clouds. In European Conference on Computer Vision, 95–111. Springer. Hui, L.; Wang, L.; Cheng, M.; Xie, J.; and Yang, J. 2021. 3D Siamese voxel-to-BEV tracker for sparse point clouds. Advances in Neural Information Processing Systems, 34: 28714–28727. Hui, L.; Wang, L.; Tang, L.; Lan, K.; Xie, J.; and Yang, J. 2022. 
3D Siamese transformer network for single object tracking on point clouds. In European Conference on Computer Vision, 293–310. Springer. Lan, K.; Jiang, H.; and Xie, J. 2022. Temporal-Aware Siamese Tracker: Integrate Temporal Context for 3D Object Tracking. In Asian Conference on Computer Vision, 399–414. Lang, A. H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; and Beijbom, O. 2019. Pointpillars: Fast encoders for object detection from point clouds. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12697–12705. Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; and Chen, B. 2018. Pointcnn: Convolution on x-transformed points. Advances in Neural Information Processing Systems, 31. Li, Z.; Lin, Y.; Cui, Y.; Li, S.; and Fang, Z. 2023. Motion-to-Matching: A Mixed Paradigm for 3D Single Object Tracking. arXiv preprint arXiv:2308.11875. Liu, Y.; Jing, X.-Y.; Nie, J.; Gao, H.; Liu, J.; and Jiang, G.-P. 2018. Context-aware three-dimensional mean-shift with occlusion handling for robust object tracking in RGB-D videos. IEEE Transactions on Multimedia, 21(3): 664–677. Liu, Z.; Tang, H.; Lin, Y.; and Han, S. 2019. Point-voxel cnn for efficient 3d deep learning. Advances in Neural Information Processing Systems, 32. Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. Pointnet: Deep learning on point sets for 3d classification and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, 652–660. Qi, C. R.; Su, H.; Nießner, M.; Dai, A.; Yan, M.; and Guibas, L. J. 2016. Volumetric and multi-view cnns for object classification on 3d data. In IEEE Conference on Computer Vision and Pattern Recognition, 5648–5656. Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30. Qi, H.; Feng, C.; Cao, Z.; Zhao, F.; and Xiao, Y. 2020. P2b: Point-to-box network for 3d object tracking in point clouds.
In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6329–6338. Shan, J.; Zhou, S.; Fang, Z.; and Cui, Y. 2021. PTT: Point-track-transformer module for 3D single object tracking in point clouds. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1310–1316. IEEE. Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B.; et al. 2020. Scalability in perception for autonomous driving: Waymo open dataset. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2446–2454. Wang, N.; Zhou, W.; Wang, J.; and Li, H. 2021. Transformer meets tracker: Exploiting temporal context for robust visual tracking. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1571–1580. Xu, T.-X.; Guo, Y.-C.; Lai, Y.-K.; and Zhang, S.-H. 2023a. CXTrack: Improving 3D point cloud tracking with contextual information. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1084–1093. Xu, T.-X.; Guo, Y.-C.; Lai, Y.-K.; and Zhang, S.-H. 2023b. MBPTrack: Improving 3D Point Cloud Tracking with Memory Networks and Box Priors. In IEEE/CVF International Conference on Computer Vision, 9911–9920. Yan, Y.; Mao, Y.; and Li, B. 2018. Second: Sparsely embedded convolutional detection. Sensors, 18(10): 3337. Yang, Z.; Sun, Y.; Liu, S.; and Jia, J. 2020. 3dssd: Point-based 3d single stage object detector. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11040–11048. Ye, B.; Chang, H.; Ma, B.; Shan, S.; and Chen, X. 2022. Joint feature learning and relation modeling for tracking: A one-stream framework. In European Conference on Computer Vision, 341–357. Springer. Yin, T.; Zhou, X.; and Krahenbuhl, P. 2021. Center-based 3d object detection and tracking. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11784–11793. Zhang, L.; Gonzalez-Garcia, A.; Weijer, J. V.
D.; Danelljan, M.; and Khan, F. S. 2019. Learning the model update for siamese trackers. In IEEE/CVF International Conference on Computer Vision, 4010–4019. Zheng, C.; Yan, X.; Gao, J.; Zhao, W.; Zhang, W.; Li, Z.; and Cui, S. 2021. Box-aware feature enhancement for single object tracking on point clouds. In IEEE/CVF International Conference on Computer Vision, 13199–13208. Zheng, C.; Yan, X.; Zhang, H.; Wang, B.; Cheng, S.; Cui, S.; and Li, Z. 2022. Beyond 3d siamese tracking: A motion-centric paradigm for 3d single object tracking in point clouds. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8111–8120. Zhou, C.; Luo, Z.; Luo, Y.; Liu, T.; Pan, L.; Cai, Z.; Zhao, H.; and Lu, S. 2022. Pttr: Relational 3d point cloud object tracking with transformer. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8531–8540. Zhou, L.; Zhou, Z.; Mao, K.; and He, Z. 2023. Joint Visual Grounding and Tracking With Natural Language Specification. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 23151–23160. Zhou, Y.; and Tuzel, O. 2018. Voxelnet: End-to-end learning for point cloud based 3d object detection. In IEEE Conference on Computer Vision and Pattern Recognition, 4490–4499. Zhou, Z.; Pei, W.; Li, X.; Wang, H.; Zheng, F.; and He, Z. 2021. Saliency-Associated Object Tracking. In IEEE/CVF International Conference on Computer Vision, 9866–9875.
Neighborhood-Enhanced 3D Human Pose Estimation with Monocular LiDAR in Long-Range Outdoor Scenes

Jingyi Zhang1,2, Qihong Mao1,2, Guosheng Hu3, Siqi Shen1,2, Cheng Wang1,2*
1Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University, China
2Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, School of Informatics, Xiamen University, China
3Oosto, Belfast, UK
{zhangjingyi1, maoqihong}@stu.xmu.edu.cn; {siqishen, cwang}@xmu.edu.cn; huguosheng100@gmail.com

Abstract

3D human pose estimation (3HPE) in large-scale outdoor scenes using commercial LiDAR has attracted significant attention due to its potential for real-life applications. However, existing LiDAR-based methods for 3HPE primarily rely on recovering 3D human poses from individual point clouds, and the coherence cues present in the neighborhood are not sufficiently harnessed. In this work, we explore spatial and contextual coherence cues contained in the neighborhood that lead to great performance improvements in 3HPE. Specifically, firstly, we deeply investigate the 3D neighbor in the background (3BN), which serves as a spatial coherence cue for inferring reliable motion since it provides physical laws to limit motion targets. Secondly, we introduce a novel 3D scanning neighbor (3SN) generated during data collection; 3SN implies structural edge coherence cues. We use 3SN to overcome the degradation of performance and data quality caused by the sparsity-varying properties of LiDAR point clouds. In order to effectively model the complementation between these distinct cues and build consistent temporal relationships across human motions, we propose a new transformer-based module called the CoherenceFuse module. Extensive experiments conducted on publicly available datasets, namely LidarHuman26M, CIMI4D, SLOPER4D and Waymo Open Dataset v2.0, showcase the superiority and effectiveness of our proposed method.
In particular, when compared with LidarCap on the LidarHuman26M dataset, our method demonstrates a reduction of 7.08mm in the average MPJPE metric, along with a decrease of 16.55mm in the MPJPE metric for distances exceeding 25 meters. The code and models are available at https://github.com/jingyi-zhang/Neighborhoodenhanced-LidarCap.

Introduction

3D human pose estimation (3HPE) in unconstrained environments is a rapidly advancing field with lots of promising research work (Kim et al. 2022; Joo, Neverova, and Vedaldi 2020; Zhang et al. 2021; Jin et al. 2020; Wang et al. 2022; Zhan et al. 2022). However, accurate 3HPE in long-range outdoor scenes, which has diverse and impactful applications in action recognition, sports analysis, AR/VR, autonomous driving, etc., remains challenging.

*Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: The visualization of 3SN and 3BN (examples from the LidarHuman26M, SLOPER4D, and CIMI4D datasets). Our method treats the 3SN sequence, 3BN sequence, and human sequence as input to predict 3D human motions.

To effectively address long-range 3HPE, various modalities have distinctive challenges that require specific solutions. Some researchers (Xu et al. 2021, 2020b; Wan et al. 2021) focus on recovering 3D human motions from degraded and low-resolution images, due to factors such as poor lighting and long camera-subject distances. Other researchers (Zhang et al. 2022b; Sun et al. 2020; Li et al. 2020; Zhang et al. 2022a) concentrate on alleviating the ill-posed nature of global location prediction from RGB images, since the camera is unable to provide accurate depth information. RGBD-based methods are also unsuitable for long-range 3HPE, due to the limited effective range (less than 5 m). In contrast, body-worn sensors like Inertial Measurement Units (IMUs) have environment-independent properties; IMU-based methods (Yi et al.
2022; Yi, Zhou, and Xu 2021b; Marcard et al. 2017; Huang et al. 2018b) are dedicated to achieving convenient 3HPE by minimizing the number of wearable devices. HSC4D (Dai et al. 2022) employs LiDAR sensors to obtain depth information and correct an accumulated global drifting artifact that occurs in IMUs when working in long-range outdoor scenes. Despite the success of these sensors, benefiting from LiDAR's inherent insensitivity to lighting conditions, its capacity to acquire precise depth information, and its extended detection range, the LiDAR-based method is the preferred choice for conducting motion capture in daily scenarios characterized by extensive distances and expansive environments. Recently, researchers started exploring the LiDAR sensor in human motion capture. LidarCap (Li et al. 2022a) achieves remarkable results in 3D human motion capture within a range of 30 meters, relying solely on individual human point clouds obtained from a single monocular LiDAR sensor, demonstrating its outstanding potential. Multimodal fusion methods (Ren et al. 2022; Fürst et al. 2020; Kim et al. 2019) improve algorithm robustness and accuracy by combining LiDAR data with data from other modalities. Nevertheless, previous methods overlook the fact that LiDAR sensors capture data not only from humans but also from the environment. We hypothesize that environmental information, which correlates with human behavior, is a key element for overcoming fragile motion capture caused by sparsity. SLOPER4D (Dai et al. 2023) and CIMI4D (Yan et al. 2023) incorporate environmental information in the post-optimization stage; they utilize environmental information as a physical constraint to enhance the initial human pose collected by a motion capture system. However, the inclusion of additional physical constraint terms leads to a significant increase in computation time.
Therefore, these methods are not suitable for real-time applications. To achieve efficient and accurate LiDAR-based 3HPE, in this paper, we utilize spatial and contextual coherence cues about human behavior in the neighborhood to enhance the performance of the LiDAR-based 3HPE method. Specifically, we introduce two types of coherence cues in the neighborhood. Firstly, the 3D neighbor in the background (3BN) is extracted from surrounding scenes and preserves the accurate distribution of the real world. The spatial coherence cues implied in 3BN are important for 3HPE and are not easily accessible in images or IMU data. 3BN provides physical laws and a spatial prior for motion targets. Therefore, when human motion is unreliable, the target poses can be extrapolated from the spatial coherence cues contained in 3BN. Secondly, the 3D scanning neighbor (3SN) is defined based on the characteristics of LiDAR point clouds. As the laser beam scatters to a certain size, the portion of the beam that illuminates the edges of the human body continues to travel until it hits more distant objects (Robosense 2023). Hence, the 3SN provides contextual coherence cues related to the transition from the human structure to the background. By utilizing this additional information, our approach can address the issue of fragile motion capture caused by degraded LiDAR point clouds at larger capturing distances. The introduction of 3SN originally aimed at mitigating point cloud degradation in distant regions. We observed that projecting the 3SN onto the plane proved more advantageous when the human point cloud is densely populated in close proximity. This observation led us to speculate that 3SN encapsulates not only structural edge coherence cues but also vital human motion cues. In light of this, we devised a self-attention mechanism tailored to 3SN.
Furthermore, recognizing the potential synergy among the spatial coherence cues in 3BN, the abundant coherence cues within 3SN, and the motion cues within the human points, we introduced a cross-attention structure to harmoniously fuse these diverse cues. The whole transformer-based module is named the "CoherenceFuse" module. To this end, by combining the 3SN with the 3BN, our method offers a more comprehensive and reliable way to understand and predict human behavior in long-range outdoor scenes. Despite the simplicity of our method, it outperforms the SOTA learning-based and optimization-based methods on 4 public datasets without the need for additional modal data or physical constraint terms. Specifically, our method, compared with the baseline method LidarCap (Li et al. 2022a) on the LidarHuman26M dataset (Li et al. 2022a), reduces the mean per joint position error (MPJPE) metric by 7.08mm and improves the Percentage of Correct Keypoints (PCK30) metric by 1.94%. In the case of distant targets (>25m), our method achieves even greater improvements, with the MPJPE metric reduced by around 16.55mm and the PCK30 metric improved by 5.64%. These results demonstrate the effectiveness of our approach in utilizing the coherence cues contained in the neighborhood to accurately predict and capture human motion, even in challenging environments. To summarize, this work has the following key contributions:

• We deeply investigate the 3D neighbor in the background (3BN), which effectively leverages spatial coherence cues to predict reliable 3D human motion. In addition, we propose a 3D scanning neighbor (3SN), which creatively employs the contextual coherence cues to compensate for the data degradation caused by increasing distance, enabling accurate estimation of human motion even at long ranges.

• We introduce a CoherenceFuse module to build consistent temporal relationships across human motions and efficiently integrate the information encompassed by 3BN, 3SN, and human point clouds.
Thus, CoherenceFuse can enable these diverse cues to complement each other.

• Our method, by fully utilizing the spatial and structural coherence cues contained in the neighborhood, significantly outperforms the baseline method in both close and distant ranges. It offers a simple yet effective way to perform 3D LiDAR-based 3HPE.

Related Work

The past ten years have witnessed a rapid development of 3HPE. Inertial methods (Huang et al. 2018a; Yi, Zhou, and Xu 2021b; Yi et al. 2022) use Inertial Measurement Units (IMUs) to recover human motion with environment-independent properties. Image-based methods (Xie, Bhatnagar, and Pons-Moll 2023; Zanfir, Marinoiu, and Sminchisescu 2018; Xu et al. 2018, 2020a; Habermann et al. 2020; Kocabas, Athanasiou, and Black 2020) reconstruct 3D humans from images; these methods are more practical and attractive when applied under sufficient light and moderate distance conditions. RGBD-based methods (Su et al. 2020, 2021; Bhatnagar et al. 2022) using RGBD sensors are feasible for short-range human motion capture. Since we target 3HPE in long-range outdoor scenes with a monocular LiDAR, we review previous works that are most related to our method.

LiDAR-Based Methods for 3HPE

The growing interest in the convenient capture of human motions under long-range scenario settings has led to the growing popularity of LiDAR-based motion capture methods. LidarCap (Li et al.
2022a) leverages point clouds of the human body collected by a single static LiDAR sensor within a range of 30 meters to predict the corresponding human motions.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)

Figure 2: Overall pipeline. Given input point clouds of the 3BN, the 3SN and an individual human with XYZ coordinates, PointNet++ (Qi et al. 2017) extracts features separately, and those features are then aggregated by a fully connected layer. Multi-head self-attention layers and multi-head cross-attention layers are utilized to predict the joints' locations, where n=8. ST-GCN (Yan, Xiong, and Lin 2018) is utilized to predict the joints' rotations. At last, the SMPL (Bogo et al. 2016) template corrects the final joints' locations.

LIP (Ren et al. 2022) extracts a rough global human pose from point clouds, then utilizes IMUs to refine local dynamic motions. In addition, there are many LiDAR-camera-based sensor fusion methods for 3D HPE. FusionPose (Cong et al. 2022) exploits the inherent geometry constraints of point clouds and 2D keypoints on images for self-supervision. HPERL (Fürst et al. 2020) integrates features of images and point clouds for superior precision. Zheng et al. (Zheng et al. 2021) utilize a 2D keypoint heatmap predicted from 2D images to augment the point clouds.
Although those methods have shown excellent performance in long-range 3D human motion capture, they only utilize the collected data of the human body and ignore the coherence cues contained in the environment, which we assume is critical information for estimating human motion when the human body data is severely degraded.

Scene-Aware 3D Human Motion Capture
Many existing algorithms resort to multi-stage optimization to estimate the global human pose and human-scene interaction. SLOPER4D (Dai et al. 2023) provides reconstructed scene point clouds and uses scene geometry with several physics-based terms to perform joint optimization. CIMI4D (Yan et al. 2023) concentrates on off-ground actions and facilitates a detailed exploration of human-scene interaction by using a blending optimization process. However, those multi-stage optimization methods are inappropriate for time-critical applications. Luo et al. (Luo et al. 2022) propose a one-stage embodied scene-aware 3HPE method based on a simulated agent's proprioception and scene awareness, along with external third-person observations. PROX (Hassan et al. 2019) formulates inter-penetration and contact constraints to make use of the 3D scene information. Nonetheless, those methods require a pre-scanned environment, which is unsuitable for large-scale scenarios.

Transformer-Based 3D Human Motion Capture
There is already much research on transformer-based methods for estimating 3D human motion (Xu et al. 2022; Yi, Zhou, and Xu 2021a). LPFormer (Ye et al. 2023) employs multiple multi-head self-attention blocks to regress 3D human joint locations from LiDAR point clouds by fusing point voxel features, point features, and box features. MHFormer (Li et al. 2021) was proposed to relieve the inverse problem where multiple feasible solutions exist.
It utilizes self-attention to capture relationships across solution features, and then cross-attention is applied to aggregate the multi-hypothesis features and predict the final 3D pose.

Methodology
Our task is to estimate human motion using a monocular LiDAR. The input of our method is sequential point clouds, and the output is the 3D human motion sequence in terms of joint angles, global joint locations, and global rotation. The overall pipeline is shown in Fig. 2 and incorporates two modules. (1) The CoherenceFuse module: a transformer-based feature fusion module, which incorporates the 3D scanning neighbor (3SN) and the 3D neighbor in the background (3BN) into the 3HPE network. The 3SN and 3BN are shown in Fig. 1. (2) The kinematics module: a neural motion estimator. To provide more convincing evidence of the benefits brought by incorporating the 3SN and 3BN, the structure of the kinematics module follows LidarCap (Li et al. 2022a), which includes ST-GCN (Yan, Xiong, and Lin 2018) and an SMPL optimizer.

Figure 3: Qualitative results on 3 public datasets (LidarHuman26M, CIMI4D, SLOPER4D). The blue points are human points, the red points are the 3BN, and the green points are the 3SN.

3D Neighbor in the Background (3BN)
Given the individual human LiDAR point cloud at frame $t$, $p_h^t = \{p_1^t, p_2^t, \dots, p_n^t\}$, we calculate the global position of the human $p_c^t$ by $p_c^t = \frac{1}{n}\sum_{i=1}^{n} p_i^t$. Any point in the environment within a two-meter radius of $p_c^t$ is
defined as the 3BN ($p_b^t$). The two-meter radius parameter is chosen based on the human arm span being approximately 1 meter. Therefore, the area within a 1-meter radius of $p_c^t$ is the interaction region. The spatial coherence cues contained in this region can provide physical references for 3HPE, such as the positions of foot and palm joints. The area outside the 1-meter radius but within the 2-meter radius is where interaction with the human is about to occur. The spatial coherence cues in this region provide interaction priors for 3HPE and constrain motion targets.

Method     | LidarHuman26M                     | CIMI4D                            | SLOPER4D
           | MPJPE  PA-    PCK3  PCK5  AE     | MPJPE  PA-    PCK3  PCK5  AE    | MPJPE  PA-   PCK3  PCK5  AE
LidarCap   | 79.31  66.72  86.00 95.00 45.20  | 130.43 97.95  70.22 87.44 31.11 | 101.89 78.93 78.15 89.77 40.09
HSC2       | 79.48  68.90  85.33 94.35 29.49  | -      -      -     -     -     | -      -     -     -     -
P4T        | 127.77 98.44  72.58 85.64 151.67 | 146.93 108.94 65.26 83.96 97.13 | 103.56 79.34 77.42 90.04 66.62
CLIFF      | 80.55  60.03  86.03 94.64 92.48  | -      -      -     -     -     | -      -     -     -     -
Ours       | 72.23  61.67  87.94 95.79 38.28  | 121.77 93.15  72.81 89.03 25.81 | 96.80  76.70 79.22 90.51 38.55

Table 1: Comparison results on LidarHuman26M, CIMI4D and SLOPER4D with the learning-based methods LidarCap and P4T, the optimization-based method HSC2, and the image-based method CLIFF (HSC2 and CLIFF are evaluated on LidarHuman26M only).

3D Scanning Neighbor (3SN)
Given a set of 3D points $p_h^t$ representing a human body in Cartesian coordinates at frame $t$, we can transform each point $p_i^t = (x_i^t, y_i^t, z_i^t)$ to its corresponding spherical coordinates $(r_i^t, \theta_i^t, \delta_i^t)$ as follows: radial distance $r_i^t = \sqrt{(x_i^t)^2 + (y_i^t)^2 + (z_i^t)^2}$, polar angle $\theta_i^t = \arctan\frac{z_i^t}{\sqrt{(x_i^t)^2 + (y_i^t)^2}}$, and azimuthal angle $\delta_i^t = \arctan\frac{y_i^t}{x_i^t}$.
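As an illustration, the 3BN radius filter and the Cartesian-to-spherical conversion above can be sketched in a few lines. This is a minimal numpy sketch, not the paper's implementation: the array names and the synthetic point clouds are hypothetical, and `arctan2` is used as the quadrant-safe form of the arctan in the equations.

```python
import numpy as np

def extract_3bn(env_pts, human_pts, radius=2.0):
    """3BN: environment points within `radius` meters of the human centroid p_c^t."""
    center = human_pts.mean(axis=0)                  # global human position
    dist = np.linalg.norm(env_pts - center, axis=1)  # Euclidean distance to centroid
    return env_pts[dist < radius]

def to_spherical(pts):
    """Cartesian (x, y, z) -> spherical (r, theta, delta) per the convention above."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                  # radial distance
    theta = np.arctan2(z, np.sqrt(x**2 + y**2))      # polar angle
    delta = np.arctan2(y, x)                         # azimuthal angle
    return r, theta, delta
```

The 3SN would then be the environment points whose angles fall inside the ±1° window around the human's angular extent and whose range exceeds the human's minimum range, as defined next in Eqs. (1) and (2).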
Then, we obtain the minimum and maximum spherical coordinates of the 3D scanning neighbors as follows:

$\theta_{min} = \min_{p_i^t \in p_h^t}(\theta_i^t) - 1°$, $\theta_{max} = \max_{p_i^t \in p_h^t}(\theta_i^t) + 1°$;
$\delta_{min} = \min_{p_i^t \in p_h^t}(\delta_i^t) - 1°$, $\delta_{max} = \max_{p_i^t \in p_h^t}(\delta_i^t) + 1°$. (1)

Finally, we define the set of 3SN points $p_s^t$ as the subset of points in the environment that satisfy the following conditions:

$\min_{p_i^t \in p_h^t}(r_i^t) < r_s^t$, $\theta_{min} < \theta_s^t < \theta_{max}$, $\delta_{min} < \delta_s^t < \delta_{max}$. (2)

The 1° margin is determined based on the angular resolution (0.2°) of the LiDAR sensor. We adopt the Cartesian coordinates of the 3SN as input.

Overall Pipeline
We establish the LiDAR coordinate system as the global coordinate system, with the origin located at the position of the LiDAR device. As input, we utilize the normalized temporal sequences $p'^t_b$, $p'^t_s$ and $p'^t_h$, which are calculated as follows: $p'^t_b = p_b^t - p_c^t$, $p'^t_s = p_s^t - p_c^t$, $p'^t_h = p_h^t - p_c^t$. The predicted joint locations generated by the entire network are initially in the local coordinate system, and we subsequently transform them into the global coordinate system using $p_c^t$.

CoherenceFuse module. To encode $p'^t_b$, $p'^t_s$ and $p'^t_h$, we employ three separate PointNet++ (Qi et al. 2017) networks to obtain the 3BN features $f_b^t$, the 3SN features $f_s^t$ and the human features $f_h^t$; each of the three features has 512 dimensions. Subsequently, we generate a 1024-dimensional global frame-wise descriptor $f^t$ by leveraging a fully connected layer to aggregate $f_b^t$, $f_s^t$, and $f_h^t$. To disentangle the motion cues and the structural edge coherence cues from $f_s^t$, we employ self-attention layers, followed by a concatenation operation, yielding enriched 512-dimensional 3SN features denoted as $f_{sa}^t$. Further interactions among $f_b^t$, $f_{sa}^t$, and $f_h^t$ are modeled using cross-attention layers. After applying a concatenation operation and self-attention layers, we predict the corresponding joint locations $\hat{J}^t \in \mathbb{R}^{24 \times 3}$.

Kinematics module.
Specifically, following ST-GCN (Yan, Xiong, and Lin 2018), we treat $\hat{J}^t$ as graph nodes, and the node features $Q^t \in \mathbb{R}^{24 \times (3+1024)}$ are obtained by concatenating the frame-wise global feature $f^t$ with the joint locations $\hat{J}^t$. The output of ST-GCN is the joint rotations $R_{6D}^t \in \mathbb{R}^{24 \times 6}$. The joint rotations $R_{6D}^t$ are fed into an off-the-shelf SMPL model to obtain the 24 joints $\hat{J}^t_{SMPL} \in \mathbb{R}^{24 \times 3}$ on the SMPL mesh.

Loss function. To sum up, our pipeline can be trained end-to-end by optimizing the united loss function $L$ formulated as follows:

$L = L_J + L_\Theta + L_{J_{SMPL}}$,
$L_J = \sum_t^T \|J_{GT}^t - \hat{J}^t\|^2$,
$L_\Theta = \sum_t^T \|\theta_{GT}^t - \hat{\theta}^t\|^2$,
$L_{J_{SMPL}} = \sum_t^T \|J_{GT}^t - \hat{J}^t_{SMPL}\|^2$, (3)

where $J_{GT}^t$ is the ground-truth joint locations of the $t$-th frame and $\theta_{GT}^t$ is the ground-truth pose parameters of the $t$-th frame.

Experiments
Implementation Details
The training process takes around 100 epochs with the Adam optimizer (Kingma and Ba 2015) on one NVIDIA GeForce RTX 3090 graphics card. The batch size is set to 8 and the sequence length to 16, while the learning rate is set to 1e-4 with a decay rate of 1e-4. We set the dropout ratio to 0.1 for the CoherenceFuse module and 0.5 for the ST-GCN module.

Dataset
LidarHuman26M (Li et al. 2022a), CIMI4D (Yan et al. 2023) and SLOPER4D (Dai et al. 2023) are multi-modal datasets captured using a markerless motion capture system, a camera, and LiDAR. LidarHuman26M records 13 actors performing 20 daily motions. The location of the LiDAR is fixed. The collected human data is idealistic, without occlusion. We adopt the same dataset split as LidarCap. CIMI4D focuses on climbing, with heavy self-occlusion. The location of the LiDAR is fixed. We randomly split the train and test sets by sequence. SLOPER4D is collected in realistic environments with occlusion and multiple persons standing beside each other. The spatial position of the LiDAR changes as the gatherer moves around.
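For concreteness, the united loss of Eq. (3) amounts to summed squared errors over the sequence. The following is a minimal numpy sketch with illustrative array shapes; the actual pipeline operates on batched tensors inside the training loop.

```python
import numpy as np

def united_loss(j_gt, j_pred, theta_gt, theta_pred, j_smpl_pred):
    """Eq. (3): L = L_J + L_Theta + L_{J_SMPL}, each a squared L2
    error summed over the T frames of the sequence."""
    l_j = np.sum((j_gt - j_pred) ** 2)              # L_J: regressed joint locations
    l_theta = np.sum((theta_gt - theta_pred) ** 2)  # L_Theta: pose parameters
    l_j_smpl = np.sum((j_gt - j_smpl_pred) ** 2)    # L_{J_SMPL}: joints on the SMPL mesh
    return l_j + l_theta + l_j_smpl
```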
We divide each sequence into patches of 16 frames, randomly shuffle the patches, and divide the training and testing sets according to a 7:3 ratio. The Waymo Open Dataset v2.0 (Sun et al. 2019) annotates the locations of 14 keypoints for a single person. It provides 3D point clouds collected by LiDAR as well as RGB images.

Metrics
To evaluate the performance of pose estimation, we report: 1) MPJPE↓: mean per root-relative joint position error in millimeters. 2) PA-MPJPE (PA-)↓: MPJPE computed after aligning the predicted skeleton to the label with the transformation matrix obtained by Procrustes Analysis. 3) PCK30 (PCK3)↑: percentage of correct keypoints whose distance to GT is lower than 30 cm. 4) PCK50 (PCK5)↑: percentage of correct keypoints whose distance to GT is lower than 50 cm. 5) Accel-error (AE)↓: acceleration error between the predicted joints and the corresponding label points, in cm/s². 6) CD↓: the chamfer distance between the vertices of the predicted SMPL mesh and the raw point cloud, in millimeters.

Comparisons
Quantitative. We compare our method to the state-of-the-art learning-based methods LidarCap (Li et al. 2022a), which also targets motion capture from LiDAR point clouds, and P4T (Fan, Yang, and Kankanhalli 2021), which extracts spatiotemporal features from raw point cloud videos to capture motion information. We also compare our method to the optimization-based method HSC2 (Dai et al. 2023) and the image-based method CLIFF (Li et al. 2022b). The results on LidarHuman26M, CIMI4D, and SLOPER4D are shown in Tab. 1. For a fair comparison with HSC2, we use the human motion predicted by LidarCap as the initial human motion instead of the human motion collected by the motion capture system. As described by LidarHuman26M (Li et al. 2022a), due to poor lighting conditions and the low resolution of human bodies, the accuracy of the 2D pose obtained by OpenPose is low.
To prevent a misleading optimization direction caused by using an inaccurate 2D pose for the projection loss, we removed the projection loss during the training of CLIFF. On the unoccluded dataset LidarHuman26M, our method reduces MPJPE by 7.08 mm compared with LidarCap. On the dataset CIMI4D, with severe self-occlusion, our method reduces MPJPE by 8.66 mm compared with LidarCap. On the more complex dataset SLOPER4D, our method reduces MPJPE by 5.09 mm compared with LidarCap. Since HSC2 further optimizes the global locations in a post-optimization stage, using the distance between the predicted human motion and the raw point clouds together with physical constraints between the human and the scene, our method is inferior to HSC2 in terms of AE by 8.79 cm/s², but it is still better than LidarCap by 6.92 cm/s².

3BN    3SN    self-attention  CF  | MPJPE (mm)↓  PA- (mm)↓  PCK3 (%)↑  PCK5 (%)↑  AE (cm/s²)↓
×      ×      ×               ×   | 79.31        66.72      86.00      95.00      45.20
✓      ×      ×               ×   | 75.30        64.57      86.92      95.35      43.50
×      ✓      ×               ×   | 75.99        64.75      86.74      95.24      43.37
✓      ✓      ×               ×   | 74.67        63.33      87.20      95.55      43.84
Stack         ×               ×   | 77.87        66.60      86.18      94.98      44.62
✓      ✓      ✓               ×   | 73.68        62.90      87.40      95.67      39.21
✓      ✓      ×               ✓   | 72.23        61.67      87.94      95.79      38.28

Table 2: Ablation study of the different components of our framework. CF denotes the CoherenceFuse module; "Stack" means the three inputs are stacked and encoded by a single PointNet++. The metrics show that each component of the neighborhood-enhanced method is effective.

Radius  MPJPE↓  PA-↓   PCK3↑  PCK5↑  AE↓
2m      74.47   63.34  87.31  95.71  41.59
3m      76.31   64.45  86.68  95.39  44.04
5m      76.03   64.89  86.69  95.26  43.19
10m     75.24   63.71  87.06  95.50  42.77
20m     75.71   64.67  86.83  95.27  43.05

Table 3: Ablation study of the radius parameter in 3BN on LidarHuman26M. It shows that the two-meter radius parameter is reasonable.

Generalization. We also compare our method with LidarCap and P4T on the Waymo Open Dataset v2.0.
Because the Waymo Open Dataset v2.0 does not provide the rotation matrix of each joint as recorded by a motion capture system, we train our method, LidarCap and P4T on the LidarHuman26M train set and validate them on the Waymo validation set. We extract the human points by utilizing the 3D detection boxes provided by Waymo. In addition, because the number and categories of skeleton joints defined by Waymo are inconsistent with SMPL, CD is selected as the quantitative indicator. Meanwhile, the frame rate of the Waymo Open Dataset v2.0 is 1, while the frame rate of LidarHuman26M is 10. To eliminate the influence of the different frame rates on the generalization ability, we repeat each frame of the Waymo Open Dataset v2.0 16 times as input and select the 9th frame of each sequence as output. The results are shown in Fig. 4 (A). On the Waymo Open Dataset v2.0, our method improves the CD metric by 4.79 mm compared with LidarCap and by 7.02 mm compared with P4T, which shows that our algorithm has stronger generalization ability in real scenes.

Distance analysis. We compare LidarCap and HSC2 with our method at different distances; the results are shown in Fig. 4 (B). The network achieves significant improvements in both short- and long-range motion capture accuracy by simply incorporating the 3BN and 3SN. This further highlights the importance of leveraging neighborhood coherence cues in LiDAR-based 3HPE. In the first and last cases, our method estimates better global orientations. We select 4 examples at different distances for visualization, shown in Fig. 4 (C).

Figure 4: (A): Comparison results on the Waymo Open Dataset v2.0 with LidarCap and P4T. (B): Comparison with LidarCap, HSC2, and our method at different distances; we show the MPJPE of each method.
(C): Visualization of the methods at different distances.

In the second and third cases, the arm motion predicted by our method is more reliable. Although the estimated human motion slightly differs from the ground truth, our results still outperform the others. Based on the metrics and visualizations, we conclude that the neighborhood-enhanced method achieves better motion capture results at both close and long range.

Qualitative. In Fig. 3, we show the qualitative results on LidarHuman26M, CIMI4D, and SLOPER4D compared with LidarCap and P4T. In the first and second rows, our method estimates reliable motion compared with the other two methods. Even when the LiDAR point clouds degrade heavily due to large distance and self-occlusion and the major parts of the arm and foot point clouds are missing, our method still manages to estimate stable human motion. This is achieved by leveraging the structural edge coherence cues in the 3SN: even though insufficient information is captured on the human surface, the laser rays passing the human's edges continue to propagate forward until they collide with the distant environment. As a result, information about the missing parts is recorded in the 3SN. In all cases, our method estimates the joints that contact the environment better than LidarCap and P4T; we attribute this superiority to the spatial coherence cues contained in the 3BN.

Ablation Study
Effect of model components. In Tab. 2, we ablate the main components of our framework. R1 (Row 1) is the baseline method LidarCap (Li et al. 2022a). R2 (Row 2) uses an additional PointNet++ to extract the features of the 3BN and feeds them to LidarCap. R3 (Row 3), compared to R2, only replaces the 3BN with the 3SN. From the results of R1, R2, and R3, we conclude that the 3BN and the 3SN provide more effective cues for the 3HPE network.
R4 (Row 4) differs from LidarCap only in the input part, where two independent PointNet++ networks are added to encode the 3BN and the 3SN, respectively; the features of both are then aggregated with the original human features to obtain the global input features. R6 (Row 6) replaces the GRU in LidarCap with a multi-head self-attention layer to fuse these three diverse features; the input is the same as in R4, and the whole network is named LidarCapS. R7 (Row 7) replaces the GRU in LidarCap with the CoherenceFuse module. Comparing R2, R3, R4, and R6, we can see that utilizing a multi-head self-attention layer for information fusion of the three branches achieves the best performance. In R5 (Row 5), compared to LidarCap, there is no difference in the structure of the network; the only difference is that the inputs from the three branches are stacked together and one PointNet++ is used to extract global features. So it is necessary to extract the features of the 3BN, the 3SN, and the individual human points separately. By comparing R6 and R7, we find that the CoherenceFuse module is effective.

Impact of the radius parameter in 3BN. We removed the 3SN branch from the final architecture and experimented with different radius ranges of the 3BN on LidarHuman26M. We set five radius values, 2 m, 3 m, 5 m, 10 m, and 20 m; the results are shown in Tab. 3. When the radius is set to 1 m, few background points can be extracted, so we do not run an experiment with a radius of 1 m. From the data in the table, we conclude that the information covered in the background has a limited effect on 3HPE: a larger coverage of background points does not mean more useful information for 3HPE.

Evaluations
To further investigate that the information contained in the 3SN is not merely the outline of the human body's movements, we conducted a set of comparative experiments with shadows.
At first, we obtain the plane formula $ax + by + cz - d = 0$ by fitting a plane to the 3BN. Given a set of points $p_s = \{p_{s1}, p_{s2}, \dots, p_{sn}\}$ in the 3SN, where $p_{si} = (x_{si}, y_{si}, z_{si})$, we obtain the shadow points $p_n = \{p_{n1}, p_{n2}, \dots, p_{nm}\}$, where $p_{ni} = (k \cdot x_{si}, k \cdot y_{si}, k \cdot z_{si})$ and $k = \frac{d}{a \cdot x_{si} + b \cdot y_{si} + c \cdot z_{si}}$. To investigate the roles played by the 3BN and the 3SN in enhancing the performance of 3HPE, we analyzed the experimental results shown in Fig. 5. The curve named 3BN represents using the 3BN as an additional input to LidarCapS, which is defined in the ablation study. The curve 3BN & 3SN signifies the inclusion of both the 3SN and the 3BN as inputs. The curve 3BN & shadow refers to the inclusion of the shadow points and the 3BN as inputs.

Figure 5: The roles of the 3SN and 3BN at different distances. The reduction is calculated as $\mathrm{MPJPE}_{LidarCap} - \mathrm{MPJPE}_{each\,curve}$.

The spatial coherence cues in the 3BN contribute to improvements over the entire scene, while the structural edge coherence cues in the 3SN exhibit a more pronounced enhancement at long distances. In particular, at long distances the 3SN performs better than the shadow points, which indicates that the 3SN is not merely another representation of the motion contours. Meanwhile, we find that the incremental information introduced by the 3SN is not as effective as the shadow points in close-range scenarios. However, the curve of our method outperforms the other three curves, so the CoherenceFuse module enables the 3SN to perform well at both close and distant distances.

Conclusion
In this study, we introduced the concept of the 3D neighbor in the background (3BN), which leverages spatial coherence cues to enhance the reliability of 3D human pose estimation. Furthermore, we extract the 3D scanning neighbor (3SN) based on the inherent properties of LiDAR sensors.
These neighbors contribute structural edge coherence cues that facilitate accurate 3D human pose estimation over extended distances, where human LiDAR point clouds are too sparse to supply enough information for reliable estimation. To better integrate the inputs with diverse features, we propose a transformer-based module named the CoherenceFuse module. Through comprehensive contrast experiments with shadow points, we demonstrate that the 3SN is not just the outline of motion, but rather a unique characteristic of LiDAR. In addition, our investigations demonstrate that the CoherenceFuse module effectively disentangles the motion cues and the structural edge coherence cues within the 3SN. Quantitative and qualitative experiments show that our method outperforms the baseline methods, regardless of whether the human subject is at a close or distant distance. However, our current algorithm focuses on processing ideal foreground human point clouds as input, without engaging in comprehensive segmentation or detection tasks across the entire data frame. To address this, our future work will integrate a binary segmentation network into the LiDAR-based 3HPE method, allowing a more comprehensive and accurate analysis.

References
Bhatnagar, B. L.; Xie, X.; Petrov, I. A.; Sminchisescu, C.; Theobalt, C.; and Pons-Moll, G. 2022. BEHAVE: Dataset and Method for Tracking Human Object Interactions. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 15914–15925.
Bogo, F.; Kanazawa, A.; Lassner, C.; Gehler, P. V.; Romero, J.; and Black, M. J. 2016. Keep It SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image. In ECCV.
Cong, P.; Xu, Y.; Ren, Y.; Zhang, J.; Xu, L.; Wang, J.; Yu, J.; and Ma, Y. 2022. Weakly Supervised 3D Multi-person Pose Estimation for Large-scale Scenes based on Monocular Camera and Single LiDAR. ArXiv, abs/2211.16951.
Dai, Y.; Lin, Y.; Lin, X.; Wen, C.; Xu, L.; Yi, H.; Shen, S.; Ma, Y.; and Wang, C. 2023. SLOPER4D: A Scene-Aware Dataset for Global 4D Human Pose Estimation in Urban Environments. ArXiv, abs/2303.09095.
Dai, Y.; Lin, Y.; Wen, C.; Shen, S.; Xu, L.; Yu, J.; Ma, Y.; and Wang, C. 2022. HSC4D: Human-centered 4D Scene Capture in Large-scale Indoor-outdoor Space Using Wearable IMUs and LiDAR.
Fan, H.; Yang, Y.; and Kankanhalli, M. S. 2021. Point 4D Transformer Networks for Spatio-Temporal Modeling in Point Cloud Videos. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14199–14208.
Fürst, M.; Gupta, S. T. P.; Schuster, R.; Wasenmüller, O.; and Stricker, D. 2020. HPERL: 3D Human Pose Estimation from RGB and LiDAR. 2020 25th International Conference on Pattern Recognition (ICPR), 7321–7327.
Habermann, M.; Xu, W.; Zollhöfer, M.; Pons-Moll, G.; and Theobalt, C. 2020. DeepCap: Monocular Human Performance Capture Using Weak Supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Hassan, M.; Choutas, V.; Tzionas, D.; and Black, M. J. 2019. Resolving 3D Human Pose Ambiguities With 3D Scene Constraints. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
Huang, Y.; Kaufmann, M.; Aksan, E.; Black, M. J.; Hilliges, O.; and Pons-Moll, G. 2018a. Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time. ACM Transactions on Graphics (TOG), 37(6): 1–15.
Huang, Y.; Kaufmann, M.; Aksan, E.; Black, M. J.; and Pons-Moll, G. 2018b. Deep Inertial Poser: Learning to Reconstruct Human Pose from Sparse Inertial Measurements in Real Time.
Jin, S.; Xu, L.; Xu, J.; Wang, C.; Liu, W.; Qian, C.; Ouyang, W.; and Luo, P. 2020. ZoomNAS: Searching for Whole-Body Human Pose Estimation in the Wild.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 45: 5296–5313.
Joo, H.; Neverova, N.; and Vedaldi, A. 2020. Exemplar Fine-Tuning for 3D Human Model Fitting Towards In-the-Wild 3D Human Pose Estimation. 2021 International Conference on 3D Vision (3DV), 42–52.
Kim, H.-W.; Lee, G.-H.; Oh, M.-S.; and Lee, S. 2022. Cross-View Self-fusion for Self-supervised 3D Human Pose Estimation in the Wild. In Asian Conference on Computer Vision.
Kim, W.; Ramanagopal, M. S.; Barto, C.; Yu, M.-Y.; Rosaen, K.; Goumas, N.; Vasudevan, R.; and Johnson-Roberson, M. 2019. PedX: Benchmark Dataset for Metric 3-D Pose Estimation of Pedestrians in Complex Urban Intersections. IEEE Robotics and Automation Letters, 4: 1940–1947.
Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. CoRR, abs/1412.6980.
Kocabas, M.; Athanasiou, N.; and Black, M. J. 2020. VIBE: Video Inference for Human Body Pose and Shape Estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Li, J.; Xu, C.; Chen, Z.; Bian, S.; Yang, L.; and Lu, C. 2020. HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3382–3392.
Li, J.; Zhang, J.; Wang, Z.; Shen, S.; Wen, C.; Ma, Y.; Xu, L.; Yu, J.; and Wang, C. 2022a. LiDARCap: Long-range Markerless 3D Human Motion Capture with LiDAR Point Clouds. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 20470–20480.
Li, W.; Liu, H.; Tang, H.; Wang, P.; and Gool, L. V. 2021. MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13137–13146.
Li, Z.; Liu, J.; Zhang, Z.; Xu, S.; and Yan, Y. 2022b. CLIFF: Carrying Location Information in Full Frames into Human Pose and Shape Estimation. In European Conference on Computer Vision.
Luo, Z.; Iwase, S.; Yuan, Y.; and Kitani, K. 2022.
Embodied Scene-aware Human Pose Estimation. In Advances in Neural Information Processing Systems.
Marcard, T. V.; Rosenhahn, B.; Black, M. J.; and Pons-Moll, G. 2017. Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs. Computer Graphics Forum.
Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, 5099–5108.
Ren, Y.; Zhao, C.; He, Y.; Cong, P.; Liang, H.; Yu, J.; Xu, L.; and Ma, Y. 2022. LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors. IEEE Transactions on Visualization and Computer Graphics, 29: 2337–2347.
Robosense. 2023. RS-LiDAR-M1 Brochure EN. https://www.robosense.ai/en/resources. Accessed: 2023-7-14.
Su, Z.; Xu, L.; Zheng, Z.; Yu, T.; Liu, Y.; and Fang, L. 2020. RobustFusion: Human Volumetric Capture with Data-Driven Visual Cues Using a RGBD Camera. In ECCV.
Su, Z.; Xu, L.; Zhong, D.; Li, Z.; Deng, F.; Quan, S.; and Fang, L. 2021. RobustFusion: Robust Volumetric Performance Reconstruction Under Human-Object Interactions From Monocular RGBD Stream. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45: 6196–6213.
Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B.; Vasudevan, V.; Han, W.; Ngiam, J.; Zhao, H.; Timofeev, A.; Ettinger, S. M.; Krivokon, M.; Gao, A.; Joshi, A.; Zhang, Y.; Shlens, J.; Chen, Z.; and Anguelov, D. 2019. Scalability in Perception for Autonomous Driving: Waymo Open Dataset. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2443–2451.
Sun, Y.; Bao, Q.; Liu, W.; Fu, Y.; Black, M. J.; and Mei, T. 2020. Monocular, One-stage, Regression of Multiple 3D People. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 11159–11168.
Wan, Z.; Li, Z.; Tian, M.; Liu, J.; Yi, S.; and Li, H. 2021. Encoder-decoder with Multi-level Attention for 3D Human Shape and Pose Estimation. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 13013–13022.
Wang, J.; Liu, L.; Xu, W.; Sarkar, K.; Luvizon, D. C.; and Theobalt, C. 2022. Estimating Egocentric 3D Human Pose in the Wild with External Weak Supervision. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13147–13156.
Xie, X.; Bhatnagar, B. L.; and Pons-Moll, G. 2023. Visibility Aware Human-Object Interaction Tracking from Single RGB Camera. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Xu, L.; Xu, W.; Golyanik, V.; Habermann, M.; Fang, L.; and Theobalt, C. 2020a. EventCap: Monocular 3D Capture of High-Speed Human Motions Using an Event Camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Xu, W.; Chatterjee, A.; Zollhöfer, M.; Rhodin, H.; Mehta, D.; Seidel, H.-P.; and Theobalt, C. 2018. MonoPerfCap: Human Performance Capture From Monocular Video. ACM Transactions on Graphics (TOG), 37(2): 27:1–27:15.
Xu, X.; Chen, H.; Moreno-Noguer, F.; Jeni, L. A.; and De la Torre, F. 2020b. 3D Human Shape and Pose from a Single Low-Resolution Image with Self-Supervised Learning. In ECCV.
Xu, X.; Chen, H.; Moreno-Noguer, F.; Jeni, L. A.; and De la Torre, F. 2021. 3D Human Pose, Shape and Texture from Low-Resolution Images and Videos. TPAMI.
Xu, Y.; Zhang, J.; Zhang, Q.; and Tao, D. 2022. ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation. ArXiv, abs/2204.12484.
Yan, M.; Wang, X.; Dai, Y.; Shen, S.; Wen, C.; Xu, L.; Ma, Y.; and Wang, C. 2023. CIMI4D: A Large Multimodal Climbing Motion Dataset under Human-scene Interactions. ArXiv, abs/2303.17948.
Yan, S.; Xiong, Y.; and Lin, D. 2018. Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition. In AAAI.
Ye, D.; Xie, Y.; Chen, W.; Zhou, Z.; and Foroosh, H. 2023.
LPFormer: LiDAR Pose Estimation Transformer with MultiTask Network. ArXiv, abs/2306.12525. Yi, X.; Zhou, Y.; Habermann, M.; Shimada, S.; Golyanik, V.; Theobalt, C.; and Xu, F. 2022. Physical Inertial Poser (PIP): Physics-aware Real-time Human Motion Tracking from Sparse Inertial Sensors. Yi, X.; Zhou, Y.; and Xu, F. 2021a. TransPose. ACM Transactions on Graphics (TOG), 40: 1 – 13. Yi, X.; Zhou, Y.; and Xu, F. 2021b. TransPose: Real-time 3D Human Translation and Pose Estimation with Six Inertial Sensors. ACM Trans. Graph., 40: 86:1–86:13. Zanfir, A.; Marinoiu, E.; and Sminchisescu, C. 2018. Monocular 3D Pose and Shape Estimation of Multiple People in Natural Scenes: The Importance of Multiple Scene Constraints. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2148–2157. Zhan, Y.-W.; Li, F.; Weng, R.; and Choi, W. 2022. Ray3D: ray-based 3D human pose estimation for monocular absolute 3D localization. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13106–13115. Zhang, J.; Tu, Z.; Yang, J.; Chen, Y.; and Yuan, J. 2022a. MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13222–13232. Zhang, J.; Wang, J.; Shi, Y.; Gao, F.; Xu, L.; and Yu, J. 2022b. Mutual Adaptive Reasoning for Monocular 3D Multi-Person Pose Estimation. Proceedings of the 30th ACM International Conference on Multimedia. Zhang, Y.; Wang, C.; Wang, X.; Liu, W.; and Zeng, W. 2021. VoxelTrack: Multi-Person 3D Human Pose Estimation and Tracking in the Wild. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45: 2613–2626. Zheng, J.; Shi, X. Y.; Gorban, A. N.; Mao, J.; Song, Y.; Qi, C.; Liu, T.; Chari, V.; Cornman, A.; Zhou, Y.; Li, C.; and Anguelov, D. 2021. Multi-modal 3D Human Pose Estimation with 2D Weak Supervision in Autonomous Driving. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 4477–4486. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7177
2024
797
18,625
NeRF-LiDAR: Generating Realistic LiDAR Point Clouds with Neural Radiance Fields

Junge Zhang1, Feihu Zhang2, Shaochen Kuang3, Li Zhang1*
1Fudan University 2University of Oxford 3South China University of Technology
jgzhang17@fudan.edu.cn, lizhangfd@fudan.edu.cn

Abstract

Labelling LiDAR point clouds for training autonomous driving systems is extremely expensive and difficult. LiDAR simulation aims at generating realistic LiDAR data with labels so that self-driving algorithms can be trained and verified more efficiently. Recently, Neural Radiance Fields (NeRF) have been proposed for novel view synthesis using implicit reconstruction of 3D scenes. Inspired by this, we present NeRF-LiDAR, a novel LiDAR simulation method that leverages real-world information to generate realistic LiDAR point clouds. Different from existing LiDAR simulators, we use real images and point-cloud data collected by self-driving cars to learn the 3D scene representation, point-cloud generation, and label rendering. We verify the effectiveness of our NeRF-LiDAR by training different 3D segmentation models on the generated LiDAR point clouds: the trained models achieve accuracy similar to that of the same models trained on real LiDAR data. Moreover, the generated data can boost accuracy through pre-training, which reduces the amount of real labelled data required. Code is available at https://github.com/fudanzvg/NeRF-LiDAR.

Introduction

The LiDAR sensor plays a crucial role in autonomous driving for 3D perception and planning. However, labelling 3D point clouds for training 3D perception models is extremely expensive and difficult. In view of this, LiDAR simulation, which aims at generating realistic LiDAR point clouds for different types of LiDAR sensors, becomes increasingly important for autonomous driving: it can generate useful LiDAR data with labels for developing and verifying the self-driving system.
*Li Zhang is the corresponding author, with the School of Data Science, Fudan University.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7178

Figure 1: Comparisons between our NeRF-LiDAR and existing LiDAR simulation methods. (a) CARLA (Dosovitskiy et al. 2017), a method that creates a virtual world for LiDAR simulation. (b) LiDARGen, a diffusion model used for LiDAR generation (Zyrianov, Zhu, and Wang 2022). (c) Our NeRF-LiDAR generates realistic point clouds that are nearly the same as those of the real LiDAR sensor (d).

Many previous works have studied LiDAR simulation; they can be mainly categorized into two types: virtual-environment creation and reconstruction-based methods. The former creates a 3D virtual world by graphics-based 3D modeling and then generates 3D LiDAR point clouds by physics-based simulation (ray tracing). These works (Dosovitskiy et al. 2017; Koenig and Howard 2004) have natural limitations, as it is impossible for 3D modelers to create a virtual world identical to the complex real world; the simulated LiDAR points have significant domain differences from real LiDAR points and cannot be used to train robust deep neural network models. The latter (Manivasagam et al. 2020; Fang et al. 2020) relies on multiple LiDAR scans to densely reconstruct the street background and then places foreground objects into it. However, it is expensive to collect dense LiDAR scans, which may require special devices (Fang et al. 2020). Moreover, it is expensive to generate point-wise semantic labels for simulated LiDAR data; this still requires human annotations on the 3D scans. Recently, Neural Radiance Fields (NeRF) (Mildenhall et al. 2020; Barron et al.
2021) have been proposed for implicit reconstruction of 3D objects and scenes from multiple input images. NeRF can render photo-realistic novel views along with dense depth maps. Inspired by this, we propose to learn a NeRF representation of real-world scenes and render LiDAR point clouds along with accurate semantic labels. Different from existing reconstruction-based LiDAR simulation methods (Manivasagam et al. 2020; Fang et al. 2020) or virtual-world creation (Dosovitskiy et al. 2017) (Fig. 1a), our method makes full use of multi-view images to implicitly reconstruct the labels and the 3D real-world space. The multi-view images help the simulation system learn more accurate 3D geometry and real-world details and generate more accurate point labels. The proposed NeRF-LiDAR model consists of two important modules: 1) a reconstruction module that uses NeRF to reconstruct the real world along with labels; and 2) a generation module that learns to generate realistic point clouds through a point-wise alignment and a feature-level alignment. Since our NeRF-LiDAR can generate realistic LiDAR point clouds along with accurate semantic labels, we verify its effectiveness by training different 3D segmentation models on the generated data. The trained 3D segmentation models achieve competitive performance compared with the same models trained on real LiDAR data, which implies that the generated data can directly replace real labelled LiDAR data. Besides, by using the generated LiDAR data for pre-training and a small amount of real data (e.g., 1/10) for fine-tuning, accuracy improves by a large margin, even surpassing the model trained on a 10-times-larger real LiDAR dataset.

Related Work

LiDAR point-cloud simulation has been studied for many years, from early engine-based rendering to state-of-the-art real-world reconstruction-based LiDAR rendering.
LiDAR Simulation

The first type of LiDAR simulation method (Dosovitskiy et al. 2017; Koenig and Howard 2004; Yue et al. 2018; Gschwandtner et al. 2011) relies on creating a 3D virtual world and rendering point clouds with physics-based simulation. However, the generated virtual data have large domain gaps with real data when used for training deep neural networks, because the 3D virtual world cannot capture the complexity and details of the real world. Another line of point-cloud simulation (Achlioptas et al. 2018; Luo and Hu 2021; Yang et al. 2019; Zyrianov, Zhu, and Wang 2022) generates 3D point clouds with generative models. However, such generated data also have significant domain differences from real point clouds and cannot be used to train models; moreover, it is difficult to generate labels for them. State-of-the-art LiDAR simulation methods (Fang et al. 2020; Manivasagam et al. 2020) first reconstruct real-world driving scenes into 3D meshes and then run physics-based simulation. To achieve dense, accurate reconstruction, these methods need to scan the street many times using expensive LiDAR devices (Fang et al. 2020). More importantly, it is still expensive to generate point-wise semantic labels for simulated LiDAR data, as this requires human annotations on the reconstructed 3D scenes. Instead of simulating whole LiDAR scenes, others, e.g., (Fang et al. 2021), use real-world 3D scenes and propose a rendering-based LiDAR augmentation framework to enrich training data and boost the performance of LiDAR-based 3D object detection. Our method also leverages real-world information for learning LiDAR simulation: NeRF-LiDAR creates an implicit neural-radiance-field representation of the real world for both point-cloud and label rendering.

Neural Radiance Fields

Recently, Neural Radiance Fields (NeRF) (Mildenhall et al.
2020) have been proposed as an implicit neural representation of the 3D real world for novel view synthesis. NeRFs take multiple 2D images and their camera-view directions to represent the whole 3D space. However, early NeRFs are only applicable to small object-centric scenes; many recent NeRFs address the challenges of large-scale outdoor scenes (Barron et al. 2021, 2022; Tancik et al. 2022; Zhang et al. 2020). Some methods (Rematas et al. 2022; Deng et al. 2022) leverage depth supervision to help create more accurate 3D scene geometry. Panoptic or semantic label synthesis for novel views is also explored in (Zhi et al. 2021; Kundu et al. 2022; Fu et al. 2022), which utilize the volume density to render image labels along with the novel view synthesis. Inspired by these works, our method reconstructs accurate 3D geometry of driving scenes using the NeRF methodology and generates 3D point clouds along with accurate semantic labels for LiDAR simulation.

NeRF-LiDAR

In this section, we present our NeRF-based LiDAR simulation framework (as shown in Fig. 2). The method consists of three key components: 1) NeRF reconstruction of the driving scenes, 2) realistic LiDAR point-cloud generation, and 3) point-wise semantic label generation. We formulate the three components into end-to-end deep neural network models for learning LiDAR simulation.

Neural Radiance Fields

A NeRF learns an implicit representation of a scene and renders novel views through volume rendering. It learns a function f : (x, v) → (c, σ) mapping coordinates x and viewing directions v to color c and density σ.
Volume rendering casts discrete rays r = o + td through the scene and applies numerical integration along each ray to query color:

\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) c_i, \qquad T_i = \exp\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right), \quad (1)

where o is the origin of the ray, d is its direction, T_i is the accumulated transmittance along the ray, and c_i and \sigma_i are the color and density at the sampled point t_i. \delta_j = t_{j+1} - t_j is the distance between adjacent sampled points.

Figure 2: Schematic illustration of NeRF-LiDAR. Image sequences along with the predicted weak semantic labels are used as inputs to reconstruct the implicit NeRF model. LiDAR signals are also used to help create more accurate 3D geometry. Initial coarse point clouds are generated by the NeRF reconstruction through Eqs. (5)∼(7). The initial point clouds are projected into 2D equirectangular images. We then utilize a U-Net to learn raydrop and the alignment (detailed in Fig. 3) to make the generated point clouds more realistic.

NeRF Reconstruction

State-of-the-art LiDAR simulation methods (Manivasagam et al. 2020) rely on dense LiDAR scans for scene reconstruction. To achieve a dense reconstruction of the street, (Fang et al. 2021) uses a special (expensive) LiDAR device to collect dense depth maps, while (Manivasagam et al. 2020) scans the street many times to accumulate much denser point clouds. These dense depth maps or point clouds are then used to extract meshes of the street. Finally, the meshes are used to generate point clouds for different types of LiDAR sensors.
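Stepping back to the rendering itself, a minimal numpy sketch of the discrete volume rendering in Eq. (1) (toy values; not the paper's implementation) looks like this:

```python
import numpy as np

def render_ray_color(sigmas, deltas, colors):
    """Eq. (1): C_hat(r) = sum_i T_i (1 - exp(-sigma_i * delta_i)) c_i,
    with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j)."""
    # Prefix sums over the samples *before* each i (empty sum for i = 1).
    acc = np.concatenate([[0.0], np.cumsum(sigmas * deltas)[:-1]])
    T = np.exp(-acc)                          # accumulated transmittance
    alpha = 1.0 - np.exp(-sigmas * deltas)    # per-sample opacity
    weights = T * alpha
    return (weights[:, None] * colors).sum(axis=0), weights

# A nearly opaque first sample dominates the composited color.
sigmas = np.array([50.0, 1.0, 1.0])
deltas = np.array([0.5, 0.5, 0.5])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
c_hat, w = render_ray_color(sigmas, deltas, colors)
```

The same compositing weights render depth (Eq. (4)) and semantic logits (Eq. (8)) by replacing the per-sample colors with z-values or logits.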
In this paper, we present a new method that takes multi-view images and sparse LiDAR signals to reconstruct street scenes and represent them as an implicit NeRF model, and we propose to use driving-scene data to learn the NeRF reconstruction. NeRF reconstruction of unbounded, large-scale driving scenes is challenging: most NeRFs (Mildenhall et al. 2020) are designed for small scene reconstruction with object-centric camera views, whereas driving data are often collected in unbounded outdoor scenes without object-centric camera settings (e.g., nuScenes (Caesar et al. 2020)). Moreover, since the ego car moves fast during data collection, the overlaps between adjacent camera views are too small to be effective for building multi-view geometry. We reconstruct the NeRF representation from the multi-view images and leverage the LiDAR points to provide extra depth supervision for more accurate 3D geometry. Besides, the real LiDAR point clouds are used as supervision to learn more realistic simulated LiDAR data. To reconstruct the driving scenes, we use the unbounded NeRF (Barron et al. 2022) with a modified supervision:

\mathcal{L}_{rgb} = \|\hat{C}(\mathbf{r}) - C(\mathbf{r})\|_2, \quad (2)

\mathcal{L}_{depth} = \|\hat{D}(\mathbf{r}) - D(\mathbf{r})\|_1. \quad (3)

Here, \hat{D} is the depth rendered by the volume rendering in Eq. (1):

\hat{D}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) z_i, \quad (4)

where z_i is the depth value at the sampled point t_i on the ray r. Since the original unbounded NeRF (Barron et al. 2022) is extremely slow, taking about one day to train each NeRF block, we adopt the hash-encoding NeRF of (Barron et al. 2023) to speed up the simulation process.

Point-cloud Generation

After learning the implicit NeRF representation of the driving scenes, we set up a virtual LiDAR to simulate the real LiDAR sensor. The virtual LiDAR shares the same parameter settings as the real LiDAR sensor. For example, nuScenes (Caesar et al.
2020) uses the Velodyne HDL32E LiDAR sensor, whose spinning speed is 20 Hz and whose field of view ranges from -30.67 to 10.67 degrees (32 channels). We can therefore simulate NeRF-LiDAR rays with the LiDAR center o and direction d:

\mathbf{d} = (\cos\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\phi)^{T}, \quad (5)

where \theta and \phi represent the azimuth and vertical angle of the ray, determined by the time interval of the lasers and the settings of the LiDAR sensor. The origin of the rays o changes according to the defined motion of the ego car:

\mathbf{o} = \mathbf{o}_0 + \Delta t \cdot \mathbf{v}. \quad (6)

Here, v is the velocity of the ego vehicle and \Delta t is the time interval from the previous state, decided by the frame rate of the LiDAR sensor (e.g., 20 Hz for the nuScenes LiDAR). Each simulated point p = {x, y, z} can then be calculated from the pre-defined ray direction d and the rendered distance \hat{D} from the LiDAR sensor to the real-world surface:

\mathbf{p} = \mathbf{o} + \hat{D}\mathbf{d}. \quad (7)

There are about 20∼40k points in one frame of a standard 32-channel LiDAR point cloud. To simulate the whole point cloud, we render one ray per point to compute its exact 3D location.

Label Generation

To achieve point-wise semantic labels for the simulated LiDAR points, we use the 2D semantic labels of the images to learn 3D label generation. Semantic NeRF (Zhi et al. 2021) proposes to render semantic logits through volume rendering (Eq. (1)), like RGB color:

\hat{S}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) s_i, \quad (8)

where T_i, \sigma_i, and \delta_i follow the definitions of Eq. (1), and s_i is the semantic logit of the sampled point. Here, we first consider the most difficult case where there is no annotated label in the collected driving data (images and LiDAR points). Given unlabeled real images collected from multiple cameras of the self-driving cars, we train a SegFormer (Xie et al. 2021) model on a mixture of other datasets, including Cityscapes (Cordts et al.
2016), Mapillary (Neuhold et al. 2017), BDD (Yu et al. 2020), and IDD (Varma et al. 2019), to compute weak labels that serve as inputs to the NeRF reconstruction model. To achieve better cross-dataset generalization of the SegFormer and avoid conflicts in label definitions, we adopt the learning settings and label-merging strategy of (Lambert et al. 2020). Since the generated weak labels may contain many outliers, we make full use of multi-view geometric consistency and spatio-temporal video consistency in our NeRF reconstruction to reduce their influence and generate more accurate 3D point labels. In the NeRF training, we combine the image-label supervision into the reconstruction learning by constructing semantic radiance fields:

\hat{\mathcal{L}}_l = CE(\hat{S}(\mathbf{r}), S(\mathbf{r})), \quad (9)

where CE is the cross-entropy loss, r represents the pixel-wise camera rays corresponding to each image pixel, \hat{S}(\mathbf{r}) is the label rendered by the NeRF model (Eq. (8)), and S(\mathbf{r}) is the label predicted by the image segmentation model. In other cases, when a small number of labeled images or LiDAR frames is available, we can also leverage the existing ground-truth labels for more robust label generation. For example, in the nuScenes dataset, a small part of the LiDAR frames (about 1/10) is labeled with semantic annotations. We take the sparse 3D point labels along with the weak 2D image labels to learn more accurate semantic radiance fields:

\mathcal{L}_l = CE(\hat{S}(\mathbf{r}_{LiDAR}), S(\mathbf{r}_{LiDAR})). \quad (10)

Here r_{LiDAR} represents the point-wise rays emitted by the LiDAR sensor. The total loss for learning our NeRF reconstruction is:

\mathcal{L}_{rec} = \mathcal{L}_{depth} + w_{rgb}\mathcal{L}_{rgb} + \hat{w}_l\hat{\mathcal{L}}_l + \mathcal{L}_l, \quad (11)

where w_{rgb} and \hat{w}_l balance the RGB reconstruction, the LiDAR rendering, and the semantic label rendering.
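To make the simulation geometry of Eqs. (5)-(7) concrete, here is a small numpy sketch that builds ray directions for one spin of a 32-channel LiDAR and places points at their rendered depths. The sensor parameters follow the nuScenes HDL32E description above; the spherical convention d = (cos φ cos θ, cos φ sin θ, sin φ), with azimuth θ and vertical angle φ, is our assumption for the illustration:

```python
import numpy as np

def lidar_directions(n_channels=32, n_azimuths=1024,
                     fov_down_deg=-30.67, fov_up_deg=10.67):
    """Unit ray directions for one spin of a multi-beam LiDAR (cf. Eq. (5))."""
    phi = np.deg2rad(np.linspace(fov_down_deg, fov_up_deg, n_channels))
    theta = np.linspace(0.0, 2.0 * np.pi, n_azimuths, endpoint=False)
    th, ph = np.meshgrid(theta, phi)            # shape: (channels, azimuths)
    d = np.stack([np.cos(ph) * np.cos(th),      # standard spherical convention
                  np.cos(ph) * np.sin(th),
                  np.sin(ph)], axis=-1)
    return d.reshape(-1, 3)

def simulate_frame(o0, v, dt, dirs, depths):
    """Eq. (6): move the origin with the ego vehicle; Eq. (7): p = o + D_hat * d."""
    o = o0 + dt * v
    return o + depths[:, None] * dirs

dirs = lidar_directions()                        # 32 * 1024 = 32768 rays
depths = np.full(len(dirs), 10.0)                # toy rendered depths D_hat
pts = simulate_frame(np.zeros(3), np.array([10.0, 0.0, 0.0]), 0.05, dirs, depths)
```

In practice the depths would come from the NeRF depth rendering of Eq. (4) rather than a constant.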
Learning Raydrop & Alignment

In the real world, the LiDAR sensor cannot receive all of the beams it emits, owing to the reflectance of different materials (e.g., glass), the incidence angle, and many other factors (Manivasagam et al. 2020; Fang et al. 2020). Points are usually dropped when the reflected intensity is below the perception threshold. To bring the generated LiDAR points closer to real ones, we learn a raydrop process on the generated dense points. NeRF-LiDAR allows us to render depths (3D points) at arbitrary positions and directions. We use the ground-truth LiDAR frames as supervision to learn the raydrop. Given a ground-truth LiDAR frame P, we render the simulated LiDAR frame \hat{P} at the same location. The ground-truth P and the simulated \hat{P} should have strong point-wise correspondence:

P \simeq \hat{P}. \quad (12)

We adopt this point-to-point correspondence as the learning target.

Equirectangular Image Projection It is difficult to create a point-to-point correspondence between two irregular 3D point clouds. To better leverage the point-wise correspondence, we first render all generated points into a 2D equirectangular image (a panoramic sparse depth image, as illustrated in Fig. 2). For example, for the nuScenes dataset, the resolution of the 32-channel LiDAR equirectangular image is set to 32 × 1024. To project the irregular LiDAR points, we adopt the spherical projection (Geiger, Lenz, and Urtasun 2012) to map the points onto the equirectangular image grid (as illustrated in Fig. 3). Similarly, the real LiDAR frame is transferred into a 2D 32 × 1024 equirectangular image. In this way, we can easily create the correspondence on the 2D grid.

Point-to-point Alignment To learn the drop probability for each 2D grid location in the equirectangular image, we employ a standard U-Net that encodes the depth ranges, semantic labels, RGB textures, and depth variances between neighborhoods into a feature representation.
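The spherical projection described above can be sketched as follows (a simplified numpy version; the exact azimuth/row conventions are our assumption, and collisions where two points map to the same cell are resolved by last-write rather than nearest-range):

```python
import numpy as np

def spherical_project(points, H=32, W=1024, fov_up=10.67, fov_down=-30.67):
    """Scatter 3D LiDAR points into an H x W equirectangular range image."""
    r = np.linalg.norm(points, axis=1)
    theta = np.arctan2(points[:, 1], points[:, 0])         # azimuth
    phi = np.arcsin(np.clip(points[:, 2] / r, -1.0, 1.0))  # vertical angle
    u = (((theta + np.pi) / (2.0 * np.pi)) * W).astype(int) % W
    fov = np.deg2rad(fov_up - fov_down)
    v = np.round((np.deg2rad(fov_up) - phi) / fov * (H - 1)).astype(int)
    v = np.clip(v, 0, H - 1)
    img = np.zeros((H, W), dtype=np.float32)               # 0 marks empty cells
    img[v, u] = r                                          # store range per cell
    return img

img = spherical_project(np.array([[10.0, 0.0, 0.0]]))
```

Extra channels (RGB, semantics, depth variance) can be scattered into the same grid to build the multi-channel U-Net input.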
The U-Net outputs a 2D probability map (a binary mask) representing the raydrop result. We take the corresponding real LiDAR equirectangular image as the learning target:

\mathcal{L}_{mask} = CE(\hat{M}, M_{gt}), \quad (13)

where \hat{M} and M_{gt} are the predicted and ground-truth masks, respectively. Extra points/grids are dropped through the learned drop mask, and the expected 3D point cloud is obtained by back-projecting the equirectangular image.

Figure 3: Illustration of learning raydrop and alignment. The initial coarse point clouds are projected into 2D equirectangular images. We use the projected depth, RGB texture, and depth variances as input to a standard U-Net. The U-Net learns the raydrop mask to improve the initial coarse point clouds through the point-wise alignment (Eq. (13)) and the feature-level alignment (Eq. (14)). Finally, the refined equirectangular images are back-projected to 3D space to obtain the expected LiDAR point clouds.

Feature-level Alignment The point-wise alignment above makes the generated points more realistic, i.e., spatially closer to the real point locations. However, the generated LiDAR data will ultimately be used to train deep neural network models for 3D perception. To make the generated data more effective for training such models, we propose a feature-level alignment that further regularizes the distribution of the generated point clouds:

\mathcal{L}_{feat} = \sum_{k=1}^{n} w_k \|F(\hat{I})_k - F(I_{gt})_k\|_1, \quad (14)

where F is a pre-trained, fixed feature extractor (e.g., VGG (Simonyan and Zisserman 2015) or a point-cloud segmentation network (Milioto et al. 2019)).
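A toy numpy sketch of the feature-level alignment in Eq. (14), where a simple average-pooling pyramid stands in for the fixed extractor F (an assumption for illustration; the paper uses a pre-trained VGG or segmentation network) and the level weights follow w_k = 2^(k-n) with n = 4 levels:

```python
import numpy as np

def toy_pyramid(img, n=4):
    """Stand-in for F: n levels of 2x2 average pooling (not a real VGG/RangeNet)."""
    feats, cur = [], img
    for _ in range(n):
        h, w = (cur.shape[0] // 2) * 2, (cur.shape[1] // 2) * 2
        cur = cur[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        feats.append(cur)
    return feats

def feature_alignment_loss(sim_img, real_img, n=4):
    """Eq. (14): sum_k w_k * ||F(I_sim)_k - F(I_gt)_k||_1, with w_k = 2^(k-n)."""
    fs, fr = toy_pyramid(sim_img, n), toy_pyramid(real_img, n)
    return float(sum((2.0 ** (k - n)) * np.abs(a - b).sum()
                     for k, (a, b) in enumerate(zip(fs, fr), start=1)))

real = np.arange(32.0 * 64.0).reshape(32, 64)
sim = real.copy()
sim[0, 0] += 8.0                 # a single perturbed range value
loss = feature_alignment_loss(sim, real)
```

Deeper levels get larger weights, so the loss emphasizes coarse, distribution-level agreement over pixel-exact matches.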
We use n-level pyramidal features to compute the feature distance, where n = 4 is the number of feature levels and w_k = 2^{k-n} is the weight for the kth-level features. The loss \mathcal{L}_{feat} measures the feature-level similarity between the real and simulated point clouds. To enable back-propagation from the feature loss to the preceding raydrop module, we apply the Gumbel-softmax (Jang, Gu, and Poole 2016) to the raydrop processing. The whole generation objective is:

\mathcal{L}_{gen} = \mathcal{L}_{mask} + w_{feat}\mathcal{L}_{feat}. \quad (15)

The feature-level distribution alignment makes the generated data more effective, achieving better accuracy when training segmentation networks. Besides, we find that it also helps to remove outliers in the generated point clouds (examples are shown in Fig. 4 and the supplementary material).

Experiment

Experimental Settings

Dataset We use the standard nuScenes self-driving dataset for training and evaluation. NuScenes contains about 1000 scenes collected from different cities. Each scene consists of about 1000 images in six views covering the 360° field of view (captured by the front, right-front, right-back, back, left-back and left-front cameras). 32-channel LiDAR data are also collected at 20 Hz. Human annotations are given in key LiDAR frames (one frame is labeled in every 10 frames). We use both unlabeled images and LiDAR data for training our NeRF-LiDAR model; the labeled key LiDAR frames are used for evaluation. Limited by computing resources, we take 30 nuScenes sequences from the whole dataset. Each sequence covers a street scene 100∼200 m in length, about 4 km in total; that is, we reconstruct about 4 km of driving scenes for training and evaluation.

Evaluation Settings To avoid conflicts in label definitions, we remap image segmentation labels into five categories (road, vehicle, terrain, vegetation, and man-made) in accordance with the nuScenes LiDAR segmentation labels.
In the training set, we use a total of 7000 unlabeled LiDAR frames and 30000 images for training our NeRF-LiDAR model. An extra 1000 labeled LiDAR frames are provided in these nuScenes scenes; we mainly use these labeled data for testing and fine-tuning in the experiments. To evaluate the quality of the generated point clouds and point-wise labels, we train different LiDAR segmentation models (Cylinder3D (Zhou et al. 2021) and RangeNet++ (Milioto et al. 2019)) on the generated data and compare them with the same models trained on the real nuScenes LiDAR data (25k iterations). We use two evaluation sets. The first, Test Set 1, consists of 400 labeled real point clouds extracted from the 30 reconstructed scenes and not used for training; this validation set comes from the same scenes as the simulation data. The second, Test Set 2, is the whole nuScenes validation set, consisting of ∼5700 LiDAR point clouds from other nuScenes scenes (not including the 30 selected scenes); it is used to test the quality and generalization/transfer ability of the simulation data. We also test the trained model in other unseen scenes (results are available in the supplementary material).

Training Set       road   vege.  terrain  vehicle  manmade  mIoU
CARLA              76.4   47.3   #        33.7     54.4     52.9*
Real 1k            96.2   83.6   83.1     83.0     86.4     86.5
Real 10k           97.0   83.6   84.5     89.3     87.8     88.4
Sim20k             93.5   70.4   77.6     79.1     80.7     80.3
Sim20k + real1k    97.1   84.1   85.3     92.2     86.9     89.1

Table 1: Evaluation and comparisons with the real LiDAR data and CARLA on Test Set 1. *mean IoU of four classes.

Raydrop      Feature loss   mIoU
no raydrop   -              65.7
random       -              63.5
learning     -              66.3
learning     VGG            66.5
learning     RangeNet       69.9

Table 2: Ablations on different settings of raydrop and feature loss. Models are evaluated on validation set Test Set 2.
Ablation Study

To demonstrate the effectiveness of our method's components, we conduct experiments on the different components of our NeRF-LiDAR (Table 2). The ablation studies use 25 sequences of all the data.

Effects of Raydrop We compare three raydrop settings in Table 2 and Fig. 4. Without any raydrop, the generated LiDAR data yield an mIoU of 65.7 after training the 3D segmentation model (Zhou et al. 2021). With a random drop strategy, the mIoU drops to 63.5; with our learning-based raydrop, it improves to 66.3.

Effects of Feature Loss We also evaluate the effects of the feature loss in Table 2, using two different feature extractors (VGG (Simonyan and Zisserman 2015) and RangeNet++ (Milioto et al. 2019)) to implement the feature-level alignment. Without the feature loss, our method reports an mIoU of 66.3. Using a pre-trained VGG net (Simonyan and Zisserman 2015) for feature alignment improves the result to 66.5, and using a pre-trained 3D segmentation network as the feature extractor improves it significantly, to 69.9.

Label Quality In Fig. 5, we visualize the generated LiDAR labels and compare them with the ground-truth labels of the real data. The point clouds generated by our NeRF-LiDAR have strong point-to-point correspondence with the real data and labels, and the labels are accurate and close to manual annotations. In the supplementary material, we also evaluate the quality of the labels by computing the mIoU against the manual labels: NeRF-LiDAR generates accurate labels with a high mIoU of 80∼95 under different settings.

Training Set         road   vege.  terrain  vehicle  manmade  mIoU
LiDAR + pseudo Seg   70.6   39.4   30.7     20.9     52.0     42.7
NeRF-LiDAR 10k       91.6   61.1   59.2     69.7     68.0     69.9

Table 3: Comparisons between NeRF-LiDAR data and unlabeled real LiDAR data with pseudo segmentation labels.

Pseudo Segmentation of Real LiDAR Scenes
Compared with the pseudo segmentation labels on the real data, our NeRF-LiDAR can generate harder cases that have not been collected in the dataset. These data significantly improve the diversity of the training set and boost accuracy when training 3D segmentation models (Table 3).

Comparisons with State of the Art

CARLA and Real LiDAR In Table 1, we evaluate the quality of the generated data by training a 3D segmentation model (Zhou et al. 2021), with mean IoU as the evaluation metric. Here, we use the predicted weak image segmentation labels and 1000 labeled LiDAR frames to train our NeRF-LiDAR, and we use Test Set 1, extracted from the same scenes as the simulation data, to evaluate the accuracy of the 3D segmentation models. The CARLA simulator (Dosovitskiy et al. 2017) and different real LiDAR sets are taken as baselines. When we train the point-cloud segmentation network on the 20k simulated frames, it achieves an mIoU of 80.3, close to the real 1k data and far better than the model trained on the 20k CARLA data. If we additionally use 1k real data for fine-tuning, the mIoU further improves to 89.1, exceeding the real 10k data.

Reconstruction-based Simulators Our NeRF-LiDAR possesses apparent advantages over other reconstruction-based LiDAR simulators (Manivasagam et al. 2020; Fang et al. 2020). First, RGB images are used to assist the reconstruction in our NeRF-LiDAR, providing useful multi-view geometry information for reconstruction and label generation. Second, no manual annotations on the point clouds are required: we use the NeRF representation to learn label generation, and the multi-view images provide useful label information and geometric consistency to reduce outlier labels. Finally, NeRF-LiDAR does not require dense point clouds for reconstructing the real world. LiDARSim (Manivasagam et al. 2020) uses a 64-channel LiDAR and scans the street many times to obtain dense point clouds for simulation, while the Augmented LiDAR simulator (Fang et al.
2020) utilizes a special and expensive LiDAR device to obtain dense point clouds. In comparison, NeRF-LiDAR relies more on 2D images to achieve 3D geometric accuracy. We compare our method with LiDARSim (Manivasagam et al. 2020) in Table 4 and Figure 6. Since the official code of LiDARSim is not published, we tried our best to reproduce its procedures, i.e., accumulating the LiDAR points, calculating normals, building meshes, and performing ray-casting and ray-dropping. However, the aggregated points are not dense enough to build high-quality meshes and thus produce poor results.

Figure 4: Comparisons of different settings for LiDAR rendering. (a) Point clouds without raydrop. (b) Point clouds after random raydrop. (c) Point clouds after our learning-based raydrop but without the feature-level alignment. (d) The final generated point clouds with both learning-based raydrop and the feature-level alignment. (e) The real LiDAR point clouds.

Figure 5: Comparisons between the data and labels generated by NeRF-LiDAR and the real LiDAR data with human annotations. For better visualization, we project the 3D point cloud as a 2D equirectangular image with colorized labels. Our NeRF-LiDAR (a) generates accurate labels and realistic point clouds that are almost the same as the real data (b).

Figure 6: Comparison between our NeRF-LiDAR (b) and LiDARSim (Manivasagam et al. 2020) (a), against the real LiDAR sensor (c). The sparse aggregated point cloud leads to a poor-quality mesh when reconstructing scenes, so the simulated LiDAR points lose realism compared to NeRF-LiDAR.

Method       road   vege.  terrain  vehicle  manmade  mIoU
LiDARSim     83.1   55.1   39.1     36.7     75.2     57.8
NeRF-LiDAR   92.5   69.9   70.1     74.7     84.8     78.4

Table 4: Comparison with LiDARSim. We reconstruct 25 sequences and generate 10k LiDAR frames to train 3D segmentation models.

Combining Real and Simulated Data We combine real data with NeRF-LiDAR data generated from ground-truth scenes to see whether simulated data can further boost training performance. As shown in Table 1, simulated data along with 10% real data (real 1k) perform better (Sim20k + real 1k: 89.1) than 100% real data (real 10k: 88.4).

Conclusion

NeRF-LiDAR is proposed to generate realistic LiDAR point clouds via a neural implicit representation. Images are utilized to achieve accurate 3D geometry and label rendering, and real data are also adopted as supervision. The effectiveness of NeRF-LiDAR is verified by training 3D segmentation networks: models trained on the generated LiDAR data achieve mIoU similar to that of models trained on real LiDAR data.

Acknowledgments

This work was supported in part by STI2030-Major Projects (Grant No. 2021ZD0200204), National Natural Science Foundation of China (Grant No. 62106050 and 62376060), Natural Science Foundation of Shanghai (Grant No. 22ZR1407500) and the USyd-Fudan BISA Flagship Research Program.

References

Achlioptas, P.; Diamanti, O.; Mitliagkas, I.; and Guibas, L. 2018. Learning representations and generative models for 3D point clouds. In ICML.
Barron, J. T.; Mildenhall, B.; Tancik, M.; Hedman, P.; Martin-Brualla, R.; and Srinivasan, P. P. 2021. Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields. In ICCV.
Barron, J. T.; Mildenhall, B.; Verbin, D.; Srinivasan, P. P.; and Hedman, P. 2022. Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields. In CVPR.
Barron, J. T.; Mildenhall, B.; Verbin, D.; Srinivasan, P. P.; and Hedman, P. 2023.
Zip-NeRF: Anti-aliased grid-based neural radiance fields. In ICCV.
Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2020. nuScenes: A multimodal dataset for autonomous driving. In CVPR.
Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The Cityscapes dataset for semantic urban scene understanding. In CVPR.
Deng, K.; Liu, A.; Zhu, J.-Y.; and Ramanan, D. 2022. Depth-supervised NeRF: Fewer Views and Faster Training for Free. In CVPR.
Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; and Koltun, V. 2017. CARLA: An Open Urban Driving Simulator. In Proceedings of the 1st Annual Conference on Robot Learning.
Fang, J.; Zhou, D.; Yan, F.; Zhao, T.; Zhang, F.; Ma, Y.; Wang, L.; and Yang, R. 2020. Augmented lidar simulator for autonomous driving. IEEE Robotics and Automation Letters.
Fang, J.; Zuo, X.; Zhou, D.; Jin, S.; Wang, S.; and Zhang, L. 2021. LiDAR-Aug: A General Rendering-based Augmentation Framework for 3D Object Detection. In CVPR.
Fu, X.; Zhang, S.; Chen, T.; Lu, Y.; Zhu, L.; Zhou, X.; Geiger, A.; and Liao, Y. 2022. Panoptic NeRF: 3D-to-2D Label Transfer for Panoptic Urban Scene Segmentation. In International Conference on 3D Vision (3DV).
Geiger, A.; Lenz, P.; and Urtasun, R. 2012. Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In CVPR.
Gschwandtner, M.; Kwitt, R.; Uhl, A.; and Pree, W. 2011. BlenSor: Blender sensor simulation toolbox. In International Symposium on Visual Computing.
Jang, E.; Gu, S.; and Poole, B. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144.
Koenig, N.; and Howard, A. 2004. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In IROS.
Kundu, A.; Genova, K.; Yin, X.; Fathi, A.; Pantofaru, C.; Guibas, L. J.; Tagliasacchi, A.; Dellaert, F.; and Funkhouser, T. 2022.
Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation. In CVPR.
Lambert, J.; Liu, Z.; Sener, O.; Hays, J.; and Koltun, V. 2020. MSeg: A composite dataset for multi-domain semantic segmentation. In CVPR.
Luo, S.; and Hu, W. 2021. Diffusion probabilistic models for 3d point cloud generation. In CVPR.
Manivasagam, S.; Wang, S.; Wong, K.; Zeng, W.; Sazanovich, M.; Tan, S.; Yang, B.; Ma, W.-C.; and Urtasun, R. 2020. LiDARsim: Realistic lidar simulation by leveraging the real world. In CVPR.
Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In ECCV.
Milioto, A.; Vizzo, I.; Behley, J.; and Stachniss, C. 2019. RangeNet++: Fast and accurate lidar semantic segmentation. In IROS.
Neuhold, G.; Ollmann, T.; Rota Bulo, S.; and Kontschieder, P. 2017. The Mapillary Vistas dataset for semantic understanding of street scenes. In ICCV.
Rematas, K.; Liu, A.; Srinivasan, P. P.; Barron, J. T.; Tagliasacchi, A.; Funkhouser, T.; and Ferrari, V. 2022. Urban Radiance Fields. In CVPR.
Simonyan, K.; and Zisserman, A. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR.
Tancik, M.; Casser, V.; Yan, X.; Pradhan, S.; Mildenhall, B.; Srinivasan, P. P.; Barron, J. T.; and Kretzschmar, H. 2022. Block-NeRF: Scalable large scene neural view synthesis. In CVPR.
Varma, G.; Subramanian, A.; Namboodiri, A.; Chandraker, M.; and Jawahar, C. 2019. IDD: A dataset for exploring problems of autonomous navigation in unconstrained environments. In WACV.
Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and efficient design for semantic segmentation with transformers. In NeurIPS.
Yang, G.; Huang, X.; Hao, Z.; Liu, M.-Y.; Belongie, S.; and Hariharan, B. 2019. PointFlow: 3d point cloud generation with continuous normalizing flows. In ICCV.
Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; and Darrell, T. 2020. BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning. In CVPR.
Yue, X.; Wu, B.; Seshia, S. A.; Keutzer, K.; and Sangiovanni-Vincentelli, A. L. 2018. A LiDAR Point Cloud Generator: From a Virtual World to Autonomous Driving. In ICMR.
Zhang, K.; Riegler, G.; Snavely, N.; and Koltun, V. 2020. NeRF++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492.
Zhi, S.; Laidlow, T.; Leutenegger, S.; and Davison, A. J. 2021. In-place scene labelling and understanding with implicit scene representation. In ICCV.
Zhou, H.; Zhu, X.; Song, X.; Ma, Y.; Wang, Z.; Li, H.; and Lin, D. 2021. Cylinder3D: An effective 3d framework for driving-scene lidar semantic segmentation. In CVPR.
Zyrianov, V.; Zhu, X.; and Wang, S. 2022. Learning to Generate Realistic LiDAR Point Clouds. In ECCV.
Point Cloud Part Editing: Segmentation, Generation, Assembly, and Selection

Kaiyi Zhang1, Yang Chen1, Ximing Yang1, Weizhong Zhang1,2, Cheng Jin1,2
1School of Computer Science, Fudan University, Shanghai, China
2Innovation Center of Calligraphy and Painting Creation Technology, MCT, China
{zhangky20, chen yang19, xmyang19, weizhongzhang, jc}@fudan.edu.cn

Abstract
Ideal part editing should guarantee the diversity of edited parts, the fidelity to the remaining parts, and the quality of the results. However, previous methods do not disentangle each part completely, which means the edited parts will affect the others, resulting in poor diversity and fidelity. In addition, some methods lack constraints between parts, so manual selection of edited results is needed to ensure quality. Therefore, we propose a four-stage process for point cloud part editing: Segmentation, Generation, Assembly, and Selection. Based on this process, we introduce SGAS, a model for part editing that employs two strategies: feature disentanglement and constraint. By independently fitting part-level feature distributions, we realize feature disentanglement. By explicitly modeling the transformation from the object-level distribution to part-level distributions, we realize the feature constraint. Extensive experiments on different datasets demonstrate the efficiency and effectiveness of SGAS on point cloud part editing. In addition, SGAS can be pruned to realize unsupervised part-aware point cloud generation and achieves state-of-the-art results.

Introduction
In the context of 3D object modeling, parts are considered the fundamental units. Recently, part-based methods (Wang et al. 2018; Mo et al. 2019a; Li, Niu, and Xu 2020; Jones et al. 2020; Gal et al. 2021; Li, Liu, and Walder 2022) have become more and more prevailing. These methods typically involve obtaining different parts first and then assembling them. Although many works have explored procedural content generation (Liu et al.
2021), which is often used to make material maps and game maps, the rapid development of game scenes still relies heavily on the generation of 3D objects. Part-based methods enable part editing, which involves replacing some parts of an object to create a new one, thereby enhancing the diversity of 3D modeling. Ideal part editing should make edited parts diverse while keeping unedited parts unchanged to form a reasonable object. These correspond to three important properties of the edited results: diversity, fidelity, and quality. However, when previous part-based methods are applied to part editing, they have two problems. As shown in Figure 1(a), on the one hand, methods such as MRGAN (Gal et al. 2021) and SP-GAN (Li et al. 2021) do not realize radical disentanglement between parts. Therefore, when some parts are modified, other parts will also change, which means they do not guarantee fidelity to the remaining parts. Similarly, multimodal shape completion methods such as MSC-cGAN (Wu et al. 2020a), which can be regarded as a subset of part editing, also do not disentangle parts. This not only changes the input parts but also makes the completion results less diverse. One may argue that adjacent parts should change to accommodate the edited parts, but this does not affect the requirement of radical disentanglement, since all changed parts can be regarded as edited parts. On the other hand, some methods (Schor et al. 2019; Li, Niu, and Xu 2020; Li, Liu, and Walder 2022) do not implement constraints between parts effectively. This may result in poor part assembly and mismatched parts being used in the formation of an object. Although these methods attempt to achieve assembly by moving parts, the changed parts still require manual selection to ensure high-quality edited results that lead to a reasonable object.
To address these issues, as shown in Figure 1(b), we first propose a four-stage process for point cloud part editing: segmentation, generation, assembly, and selection. For the three properties of edited results, segmentation can guarantee the fidelity to the remaining parts by isolating the parts; generation can guarantee the diversity of edited parts by exploring different variations of the parts; assembly and selection can guarantee the quality of the results by choosing the most appropriate edited parts and assembling them in a coherent manner. Based on this process, we introduce a model SGAS for part editing that employs two strategies: feature disentanglement and constraint. We first use unsupervised shape co-segmentation methods (Chen et al. 2019; Zhu et al. 2020; Zhang et al. 2022) or manual segmentation to obtain Ground Truth parts. Then we pre-train several autoencoders at the part level. Finally, by adversarially supervising part-level feature transformations, we realize feature disentanglement during generation. Since each part is generated separately, this not only ensures the diversity of edited parts but also the fidelity to the remaining parts. In addition, we make the distribution of each part transformed from the same Gaussian distribution and adversarially supervise the generations of all parts simultaneously to ensure that edited parts can assemble well and form a reasonable object. This strategy is called feature constraint. It guarantees the quality of the edited results. By adding a part selection module to the final output part features, which allows SGAS to autonomously choose which parts do not need to be output, the quality of the edited results can be further improved.

Figure 1: (a) Two problems with existing part editing methods. Top: w/o disentanglement between parts. When changing the chair base, the other parts such as the chair back, arm, and seat will also change. Bottom: w/o constraints between parts. The changed chair base is not only poorly assembled but also does not match other parts, resulting in the generation of an unreasonable object. (b) The four-stage point cloud part editing process: segmentation, generation, assembly, and selection, which guarantee fidelity, diversity, and quality of the edited results respectively.

Our main contributions are the following:
• We propose a novel point cloud part editing process. It includes four stages: segmentation, generation, assembly, and selection.
• Based on the proposed process, we introduce SGAS, a model for part editing that employs the two strategies of feature disentanglement and constraint. Experiments show that SGAS achieves excellent quantitative and qualitative part editing results.
• A new diversity metric for edited results: Total Mutual Difference Surface (TMDS).
• SGAS can be pruned to realize unsupervised part-aware point cloud generation and achieves state-of-the-art results on the ShapeNet-Partseg dataset.

Related Work

Part-based Shape Generation

Unsupervised Part-aware Point Cloud Generation
The fine-grained improvement of generative results can be achieved through local generation. Therefore, many methods attempt to explore the generation of multiple parts and combine them into a final shape. Since part-level ground truth data is often unavailable, these methods typically involve unsupervised segmentation of parts. For example, TreeGAN (Shu, Park, and Kwon 2019) first designs the generation of the point cloud as a tree growth process and then combines the various parts at the leaf nodes. To achieve controllable point cloud generation, SP-GAN (Li et al. 2021) is proposed. Similar to FoldingNet (Yang et al.
2018), SP-GAN transforms a sphere in 3D space into a target point cloud, where different parts of the 3D sphere correspond to different parts of the target point cloud. MRGAN (Gal et al. 2021) explicitly realizes part disentanglement by using multiple branches of tree-structured graph convolution layers. Instead of supervising each part respectively, it conducts overall supervision after assembling all the parts. Considering that the parts of MRGAN lack semantic meaning, Li, Liu, and Walder (2022) propose EditVAE, which can achieve semantics-aware point cloud generation. Each branch of EditVAE generates not only parts but also additional part offsets and primitives for auxiliary supervision. In addition, (Öngün and Temizel 2020; Postels et al. 2021; Li and Baciu 2022; Cheng et al. 2022) also play important roles in promoting part-aware point cloud generation.

Assembly-based Shape Generation
Many datasets, such as ShapeNet-Partseg (Yi et al. 2016) and PartNet (Mo et al. 2019b), provide part-level semantics. Therefore, many works explore shape generation by assembling parts. Specifically, these works can be roughly categorized into three groups: (1) assemble without generation (Schor et al. 2019; Dubrovina et al. 2019; Yin et al. 2020; Hui et al. 2022; Wu et al. 2023). These methods only reconstruct the parts and achieve the diversity of results by assembling different parts. For example, CompoNet (Schor et al. 2019) synthesizes "unseen" but reasonable point clouds by varying both the parts and their compositions. Dubrovina et al. (2019) propose a semantic-part-aware embedding space to realize shape composition and decomposition. PartAttention (Wu et al. 2023) uses a part-wise attention framework to achieve affine transformation of the decoded parts. (2) assemble after generation (Li et al. 2017; Wang et al. 2018; Wu et al. 2019; Li, Niu, and Xu 2020). In contrast to the first category, the parts used for assembling are generated from a Gaussian distribution.
These methods mainly focus on designing part offset networks to efficiently assemble parts. For example, G2LGAN (Wang et al. 2018) uses global and local GANs to supervise the correlation between the parts and the quality of each part respectively. It also adds a Part Refiner to optimize the generated results, such as removing outliers and completing missing regions. PAGENet (Li, Niu, and Xu 2020) generates parts using part-level VAEs and designs a Part Assembler to translate parts based on some anchor parts. (3) assemble while generating (Zou et al. 2017; Mo et al. 2019a, 2020; Wu et al. 2020b; Jones et al. 2020; Wang et al. 2022; Zhuang 2022). This type of method does not assemble the parts after generating all of them, but rather assembles them progressively during generation. For example, 3D-PRNN (Zou et al. 2017) proposes a generative recurrent neural network that synthesizes multiple plausible shapes step-by-step based on primitives. This progressive process preserves long-range structural coherence. PQ-Net (Wu et al. 2020b) adopts an RNN structure and learns 3D shape representations as a sequential part assembly. ShapeAssembly (Jones et al. 2020) achieves 3D shape structure synthesis by generating domain-specific language programs. The transformation of the statements in these programs enables ShapeAssembly to control the generated results.

Multimodal Shape Completion
Shapes with missing semantics can lead to a variety of completion results. For example, Wu et al. (2020a) propose MSC-cGAN. Based on pcl2pcl (Chen, Chen, and Mitra 2020), it adds an additional Gaussian distribution during the transformation from partial to complete point cloud features, and an encoder to guarantee completion fidelity to the input partial shape. Different samples on the Gaussian distribution correspond to different completion results.
Zhou, Du, and Wu (2021) introduce PVD, a unified probabilistic formulation, to achieve multimodal shape completion by progressively removing noise from the samples. AutoSDF (Mittal et al. 2022) utilizes a transformer-based autoregressive model to generate, step by step, patch embeddings extracted independently by VQVAE (van den Oord, Vinyals, and Kavukcuoglu 2017). The idea of ShapeFormer (Yan et al. 2022) is similar to AutoSDF. The difference is that AutoSDF embeds the whole 3D space while ShapeFormer introduces a compact 3D representation, VQDIF, that embeds only the space occupied by 3D shapes, making it more efficient. There are also many other works (Arora et al. 2022; Zhang et al. 2021; Zhao et al. 2021; Jiang and Daniilidis 2022; Cheng et al. 2023) exploring multimodal shape completion.

Method
In this section, we first describe the architecture of SGAS according to the proposed four-stage point cloud part editing process, and then give the loss functions of SGAS.

Segmentation
To achieve the disentanglement between parts, which guarantees fidelity in part editing, SGAS is designed to have multiple branches. Each branch generates a part and requires a Ground Truth part for supervision. In our opinion, parts can be semantic parts, such as a chair back, seat, and base, and can also be local areas of a shape's surface. The former can directly use datasets (Yi et al. 2016; Mo et al. 2019b) with part semantic labels to obtain Ground Truth parts, while the latter needs unsupervised shape co-segmentation methods (Chen et al. 2019; Zhu et al. 2020; Zhang et al. 2022) to obtain Ground Truth parts. We follow the idea of l-GAN (Achlioptas et al. 2017), which demonstrates that generating on features is better than directly generating on point clouds. Therefore, using the segmented Ground Truth parts, we pre-train an autoencoder for each semantic part to convert point clouds into features. The encoder is the same as the PointNet (Qi et al. 2017) encoder.
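As a concrete illustration, such a part encoding (a shared per-point MLP followed by symmetric max pooling, in the PointNet style) can be sketched in a few lines of numpy; the layer widths and the helper name `part_encoder` are our assumptions, not the authors' code:

```python
import numpy as np

def part_encoder(points, w1, w2):
    # PointNet-style sketch: a shared MLP is applied to every point
    # independently, then max pooling aggregates a permutation-invariant
    # 128-d latent feature for the whole part.
    h = np.maximum(points @ w1, 0.0)
    h = np.maximum(h @ w2, 0.0)
    return h.max(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(292, 3))          # one part, ~2048/7 points
w1 = 0.1 * rng.normal(size=(3, 64))      # placeholder "trained" weights
w2 = 0.1 * rng.normal(size=(64, 128))
feat = part_encoder(pts, w1, w2)         # 128-d part latent feature
```

Because the pooling is symmetric, shuffling the points of a part leaves its latent feature unchanged, which is the property the pre-trained part encoders rely on.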
The decoder uses a fully connected network. Earth Mover's Distance (EMD) (Fan, Su, and Guibas 2017) is used to supervise the training of these autoencoders. As shown in Figure 2, the trained encoders Epi, i = 1...n and decoders Dpi, i = 1...n are used to build SGAS. They do not update their parameters during SGAS training.

Generation
The input of SGAS includes not only Gaussian noise but also unedited parts. The purpose is to make the style of generated parts match that of the unedited parts, thereby ensuring the quality of the final edited results. We use an AdaIN layer (Huang and Belongie 2017) to integrate the unedited parts into the Gaussian distribution. Specifically, a PointNet encoder E is used to encode the unedited parts into the mean µ and standard deviation Σ of a Gaussian distribution. The µ and Σ are then applied to the standard Gaussian distribution to obtain a new Gaussian distribution N(µ, Σ²). Based on this new distribution, we design several part-level GANs to generate parts. As shown in Figure 2, a Gaussian noise z ∈ R^128 ∼ N(µ, Σ²) is transformed by generators Gpi, i = 1...n into part latent features to realize feature disentanglement. Each generator uses a 3-layer fully connected network (256, 512, 128). The dimension of the part latent feature is 128. Part features are then sent to discriminators Fpi, i = 1...n to distinguish real parts from generated parts. Each discriminator uses a 3-layer fully connected network (256, 512, 1). These part-level discriminators ensure the quality of each part.

Assembly
Since the branches used for part generation are independent, the generated parts may not assemble into a reasonable object. To solve this problem, as shown in Figure 2, we add a global discriminator F in SGAS to supervise all generated parts simultaneously, which realizes the feature constraint. The discriminator uses a 3-layer fully connected network (256, 512, 1). Compared to methods (Schor et al. 2019; Dubrovina et al.
2020; Li, Niu, and Xu 2020) that use affine transformations to realize part assembly, our global discriminator can further ensure matching between the generated parts. Since some parts of the target point cloud already exist, we use Part Mask to replace some generated parts. Specifically, we first use the pre-trained encoders Epi, i = 1...n to obtain the part latent features of the unedited parts. These part latent features are then used to replace the corresponding generated part features. Finally, the replaced features are sent to the discriminator F.

Figure 2: The architecture of SGAS. The inputs are Gaussian noise and unedited parts. The outputs are diverse generated or edited results. SGAS obtains Ground Truth parts through segmentation and uses them to pre-train part-level autoencoders, which convert point clouds into features. By incorporating the style of unedited parts into the Gaussian distribution using an AdaIN layer and performing part-level GAN supervision, SGAS can generate new parts. To constrain each part to form a reasonable object, SGAS applies part masking and uses a global discriminator. Finally, SGAS performs part selection on each part feature, allowing the model to autonomously choose which parts do not need to be output.

Selection
In a real scene, an object does not necessarily contain all semantic parts, for example, a chair without arms or a lamp without a holder. To realize this, we perform Part Select on the part features processed after Part Mask. Part Select uses a threshold τ to filter the parts that SGAS decides do not need to be output. It does not need training. Specifically, since some parts might not exist in the Ground Truth point clouds, we set the latent features corresponding to these parts to zero.
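The Part Mask substitution amounts to a per-part switch between generated features and features encoded from the user-supplied parts; a minimal numpy sketch (array shapes and names are our assumptions):

```python
import numpy as np

def part_mask(generated, encoded, edit_mask):
    # For edited parts (mask = 1) keep the generator's feature; for
    # unedited parts (mask = 0) substitute the frozen encoder's feature,
    # so the unedited parts pass through SGAS unchanged.
    m = edit_mask[:, None].astype(float)
    return m * generated + (1.0 - m) * encoded

rng = np.random.default_rng(0)
gen = rng.normal(size=(4, 128))      # features from generators G_p1..G_p4
enc = rng.normal(size=(4, 128))      # features from encoders E_p1..E_p4
edit_mask = np.array([1, 0, 0, 1])   # edit parts 1 and 4 only
mixed = part_mask(gen, enc, edit_mask)
```

The mixed features are what the global discriminator F sees, so it judges generated parts in the context of the real, unedited ones.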
Therefore, the trained SGAS can automatically determine whether a part needs to be output. It forces the features of parts that do not need to be output to be as close to zero as possible. In this way, by not decoding the parts whose features are within the threshold τ, we realize part selection in the output point clouds. The filter condition for Part Select is given as:

$$\frac{1}{n}\sum_{i=1}^{n}\left|G_{p_i}(z)\right| \le \tau \tag{1}$$

where |·| represents the absolute value and n is the number of parts.

Loss Functions
We adopt the loss function introduced in Wasserstein GAN (Arjovsky, Chintala, and Bottou 2017) with gradient penalty (Gulrajani et al. 2017). The networks E, Gpi, i = 1...n, Fpi, i = 1...n, and F need training. The losses are given as:

$$L_G = -\alpha \cdot \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{z\sim Z}\big[F_{p_i}(G_{p_i}(z))\big] - \beta \cdot \mathbb{E}_{z\sim Z}\Big[F\Big(\bigcup_{i=1}^{n} G_{p_i}(z)\Big)\Big] \tag{2}$$

$$L_{F_p} = \frac{1}{n}\sum_{i=1}^{n}\Big(\mathbb{E}_{z\sim Z}\big[F_{p_i}(G_{p_i}(z))\big] - \mathbb{E}_{x_i\sim R_{x_i}}\big[F_{p_i}(x_i)\big] + \lambda_{gp}\,\mathbb{E}_{\hat{x}_i}\big[(\lVert\nabla_{\hat{x}_i}F_{p_i}(\hat{x}_i)\rVert_2 - 1)^2\big]\Big) \tag{3}$$

$$L_F = \mathbb{E}_{z\sim Z,\,x_i\sim P_{x_i}}\Big[F\Big(\bigcup_{i=1}^{n} M_i G_{p_i}(z) + (1-M_i)E_{p_i}(x_i)\Big)\Big] - \mathbb{E}_{x_i\sim R_{x_i}}\Big[F\Big(\bigcup_{i=1}^{n} x_i\Big)\Big] + \lambda_{gp}\,\mathbb{E}_{\hat{x}_i}\Big[\big(\lVert\nabla_{\hat{x}_i}F\big(\textstyle\bigcup_{i=1}^{n}\hat{x}_i\big)\rVert_2 - 1\big)^2\Big] \tag{4}$$

where L_G, L_{F_p}, and L_F represent the loss functions of the part-level generators, the part-level discriminators, and the global discriminator respectively. α and β are hyperparameters that control the proportion of the part-level to the global GAN. n is the number of parts, and x_i, i = 1...n are parts. The terms after λ_gp are the gradient penalty terms proposed by Gulrajani et al. (2017). Z = N(µ, Σ²), where µ and Σ are from the encoder E. M is determined by the unedited parts.

Experiments

Datasets and Implementation Details
We evaluate SGAS on the PartNet (Mo et al. 2019b) dataset. By merging fine-grained semantic labels and removing some special objects, we create a new dataset called PartNet.v0.Merged for point cloud part editing. Following previous works (Gal et al. 2021; Li, Liu, and Walder 2022), we perform unsupervised part-aware point cloud generation on the ShapeNet-Partseg (Yi et al. 2016) dataset. We do not use semantic labels in the ShapeNet-Partseg dataset.
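Part Select itself is just a thresholding rule at inference time. A numpy sketch, applying the mean-absolute-value test of Eq. (1) to each part's latent feature separately (this per-part reading is our interpretation; the threshold value 0.5 follows the implementation details):

```python
import numpy as np

TAU = 0.5  # Part Select threshold (from the implementation details)

def part_select(part_features, tau=TAU):
    # A part whose latent feature was pushed toward zero during training
    # is treated as "not to be output" and is skipped at decoding time.
    keep = np.abs(part_features).mean(axis=1) > tau
    return keep  # boolean mask: True = decode and output this part

feats = np.array([[0.9, -1.2, 0.7, 0.8],     # ordinary part feature
                  [0.01, -0.02, 0.0, 0.03]]) # near-zero (e.g. absent arm)
keep = part_select(feats)                    # decode only the kept parts
```

Only the parts flagged True are passed to the decoders D_pi, so an object such as an armless chair can be emitted without any chair-arm points.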
Adam optimizers are used for SGAS with a learning rate of 0.0005 and coefficients β1 = 0.5 and β2 = 0.99. All the experiments are performed on a single NVIDIA TITAN Xp for 2000 epochs with a batch size of 200. In the loss functions, α and β are set to 1 and 1, and λ_gp is set to 10. The threshold in Part Select is set to 0.5. We update the discriminator 5 times for each update of the generator. Each shape has 2048 points while each part has ⌊2048/n⌋ points, where n is the number of parts. During training, the input unedited parts for SGAS are obtained by randomly removing 1 to n−1 parts of objects in the PartNet.v0.Merged dataset.

Table 1: Diversity part editing performance. MMD and TMD measure the quality and diversity respectively.

               Chair   Lamp   Table   Average
MMD ↓ MSC-cGAN  1.62    3.41   1.39    2.14
      SGAS      1.33    2.26   1.06    1.55
TMD ↑ MSC-cGAN  5.45    3.94   5.14    4.84
      SGAS      4.36    4.48   8.04    5.63

Figure 3: Performance on the new metric Total Mutual Difference Surface (TMDS) for (a) Chair, (b) Lamp, and (c) Table. Red represents SGAS; blue represents MSC-cGAN. The smaller the thresholds of MMD and UHD are, the more referential the calculated TMD is.

Point Cloud Part Editing
Diversity part editing is a commonly used operation in point cloud part editing that involves generating some parts multiple times to obtain various results. Previous part editing methods (Li, Niu, and Xu 2020; Gal et al. 2021; Li et al. 2021; Li, Liu, and Walder 2022) lack related evaluation metrics. Since diversity part editing has some overlap with multimodal shape completion, here we use the metrics MMD, TMD, and UHD adopted by Wu et al. (2020a) to measure the quality, diversity, and fidelity of the edited results respectively. Our SGAS disentangles each part, so the input unedited parts can remain unchanged. Therefore, we only compare MMD and TMD.
Considering that previous part editing methods can only perform part editing on generated objects, which makes it impossible to obtain the Gaussian noise corresponding to existing objects for editing, we use the representative multimodal shape completion method MSC-cGAN (Wu et al. 2020a) as the baseline for this study. As shown in Table 1, SGAS achieves excellent results in three representative categories. During experiments, we find that the diversity metric TMD only measures the difference between edited results without considering fidelity and quality, which means two incorrect situations can also result in a high TMD: (a) the change of input unedited parts (corresponding to a large UHD); (b) edited parts with large differences but poor quality (corresponding to a large MMD). Therefore, we further propose a new metric, TMDS (TMD Surface), to solve these problems. Each point value on the surface is calculated as:

$$\mathrm{TMDS}(\tau_{uhd}, \tau_{mmd}) = \operatorname*{mean}_{p\in P}\begin{cases}\mathrm{TMD}(s_1, \dots, s_k), & \text{if } \forall s_i,\ i=1...k:\ \mathrm{UHD}(p, s_i)\le\tau_{uhd} \text{ and } \mathrm{MMD}(s_i, D)\le\tau_{mmd}\\ 0, & \text{otherwise}\end{cases}$$

$$\mathrm{TMD}(s_1, \dots, s_k) = \sum_{i=1}^{k}\frac{1}{k-1}\sum_{j=1, j\ne i}^{k}\mathrm{CD}(s_i, s_j) \tag{5}$$

where p is the input unedited parts and s_i, i = 1...k are the k (e.g. 10) edited results. P is the unedited-parts test dataset and D is the original test dataset. CD is the Chamfer Distance (Fan, Su, and Guibas 2017). τ_uhd and τ_mmd are thresholds on UHD and MMD respectively. TMDS requires that each point value on the surface is calculated only when the k edited results are guaranteed to satisfy the corresponding UHD and MMD thresholds. The smaller the thresholds of MMD and UHD are, the more referential the calculated TMD is. Therefore, as shown in Figure 3, we can find that the editing diversity of SGAS is better than that of MSC-cGAN in the Chair category.

Table 2: Ablation results for the hyperparameters α and β. The ratio α : β can be determined by requirement.

       α : β   1:10   1:2    1:1    2:1    5:1    10:1
MMD ↓          1.47   1.39   1.33   1.35   1.36   1.40
TMD ↑          1.89   2.65   4.36   4.75   5.34   7.68
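The TMD term in Eq. (5) is directly computable from pairwise Chamfer distances. A brute-force numpy sketch (fine for small point sets; the function names are ours):

```python
import numpy as np

def chamfer(a, b):
    # Symmetric Chamfer distance between point sets a:(N,3) and b:(M,3).
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def tmd(shapes):
    # Eq. (5): for each edited result, average its Chamfer distance to
    # the other k-1 results, then sum over all results.
    k = len(shapes)
    return sum(chamfer(shapes[i], shapes[j])
               for i in range(k) for j in range(k) if j != i) / (k - 1)

rng = np.random.default_rng(0)
results = [rng.normal(size=(64, 3)) for _ in range(3)]  # 3 toy edits
score = tmd(results)   # > 0 for distinct edited results
```

As a sanity check, feeding k identical results gives a TMD of exactly zero, matching the intuition that TMD rewards diversity only.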
We use SGAS to perform various part editing operations on the PartNet.v0.Merged dataset. Figure 4(a) is the visualized comparison of the diversity edited results. We also apply SGAS to three new categories: Display, Knife, and Mug. It can be found that SGAS can not only keep the input unchanged but also provide higher editing diversity and quality. Figure 4(b) is a multi-round editing workflow achieved by SGAS. It demonstrates SGAS's ability to re-edit unsatisfactory edited results. Through three rounds of re-editing, a chair with right-angle arms and wheels is edited into a chair with circular arms and a circular base. Figure 4(c) shows some interpolation part editing. If the chairs in the left and right boxes are generated by Gaussian noises z_s and z_t respectively, the chairs between the boxes are generated by the Gaussian noise z = (1−α)z_s + αz_t, where α increases from 0 to 1. From top to bottom, each row represents an interpolation of one, two, and three parts. We can clearly observe the gradual change process of the parts. Figure 4(d) demonstrates style mixing part editing realized by SGAS. We first select four chairs with different styles. Then we edit different parts (colored) in different chairs. Finally, these edited parts are assembled to obtain chairs with styles from different chairs. This editing operation helps to create more diverse results. As the hyperparameters α and β represent diversity and quality respectively and have a significant impact on the edited results, we conduct an ablation study on them.

Figure 4: Various part editing operations. (a) Diversity editing comparison. The unedited parts are boxed, followed by five different edited results. The results of MSC-cGAN are uncolored while ours are colored by parts.
(b) Re-editing of unsatisfactory edited results. The parts above the arrow are edited in each round. (c) Continuous transformation of selected parts (colored) through interpolation of two input Gaussian noises. Each row represents an interpolation of a different number of parts. (d) Style mixing of different parts in different objects. The mixed results are boxed.

Figure 5: Edited parts with their corresponding latent features (128 dim). The chair output without arms has its corresponding chair-arm latent feature near zero.

The results are presented in Table 2. It is observed that as α : β increases, the TMD also increases. However, the MMD, which measures the quality of the edited results, initially improves but then worsens. Therefore, we finally choose α : β = 1 : 1 to realize part editing. However, if diversity is more important for the edited results, a higher α : β can also be used. To demonstrate SGAS's ability to automatically select parts, we visualize the latent features of the edited parts. As shown in Figure 5, the latent feature of the chair arm is near zero, which means SGAS considers the newly generated chair arm inappropriate. Therefore, the Part Select module in SGAS filters this latent feature to prevent the chair arm from being output. To further prove the generalization ability of SGAS, we train SGAS on the PartNet.v0.Merged dataset and test it on the ScanNet (Dai et al. 2017) dataset. The results are shown in Figure 6.

Figure 6: Performance on ScanNet. The leftmost column is the incomplete input, followed by five diversity editing results.
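The interpolation editing of Figure 4(c) only requires blending the two input Gaussian noises before decoding; a minimal sketch, with a hypothetical function name and the decoder call that maps z to parts omitted:

```python
import numpy as np

def interpolate_noises(z_s, z_t, steps=7):
    """Return z = (1 - alpha) * z_s + alpha * z_t for alpha from 0 to 1."""
    return [(1.0 - a) * z_s + a * z_t for a in np.linspace(0.0, 1.0, steps)]
```

Feeding each interpolated z through the generator for only the selected parts, while keeping the other parts fixed, yields the gradual per-part transformations shown in the figure.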
For parts with high missing rates, we regenerate them, such as the chair back in the 1st row, and the newly generated parts are compatible with the existing incomplete parts. For parts with low missing rates, we keep them directly. It can be seen that even on unseen objects, SGAS's diversity editing results are still good.

Category   Model           JSD ↓   MMD-CD ↓   MMD-EMD ↓   COV-CD % ↑   COV-EMD % ↑
Chair      TreeGAN         0.119   0.0016     0.101       58           30
           MRGAN           0.246   0.0021     0.166       67           23
           EditVAE (M=7)   0.063   0.0014     0.082       46           32
           EditVAE (M=3)   0.031   0.0017     0.101       45           39
           SGAS (N=7)      0.047   0.0020     0.076       60           58
Airplane   TreeGAN         0.097   0.0004     0.068       61           20
           MRGAN           0.243   0.0006     0.114       75           21
           EditVAE (M=6)   0.043   0.0004     0.024       39           30
           EditVAE (M=3)   0.044   0.0005     0.067       23           17
           SGAS (N=6)      0.036   0.0004     0.039       61           58
Table      TreeGAN         0.077   0.0018     0.082       71           48
           MRGAN           0.287   0.0020     0.155       78           31
           EditVAE (M=5)   0.081   0.0016     0.071       42           27
           EditVAE (M=3)   0.042   0.0017     0.130       39           30
           SGAS (N=5)      0.057   0.0020     0.069       65           65

Table 3: Generative performance. The optimal and suboptimal results are highlighted in bold and italics respectively. M and N represent the number of parts.

Figure 7: Point clouds generated by SGAS, colored by parts.

Unsupervised Part-aware Point Cloud Generation

By pruning SGAS, i.e., removing the unedited parts input, the AdaIN Layer, the Part Mask, and the Part Select, SGAS can be applied to realize unsupervised part-aware point cloud generation. This includes two steps: (a) modifying an unsupervised shape co-segmentation method, AXform (Zhang et al. 2022), to obtain ground-truth part datasets; (b) training SGAS on these part datasets. Specifically, we first modify the multi-branch AXform to output one structure point per branch. Second, we use these structure points to co-segment the ground-truth point clouds into n part datasets. Third, the segmented parts are pre-encoded into latent features. Finally, these latent features are used to supervise the training of SGAS.
However, we found that there might be some large gaps between parts during generation. Therefore, to achieve seamless generation, during unsupervised part segmentation we further expand the number of points per part from ⌊2048/n⌋ to (1 + γ)⌊2048/n⌋, with γ = 0.1. In the final output, we downsample the point cloud to 2048 points. The quantitative results are shown in Table 3. The metrics are proposed by Achlioptas et al. (2017), and the results of previous methods are obtained from EditVAE (Li, Liu, and Walder 2022). MMD and COV represent the quality and diversity of the generated results, respectively. SGAS achieves excellent results overall, with the largest number of top-two metrics. Especially on COV-EMD, which represents diversity, SGAS shows a significant improvement. Figure 7 gives visualized results of the generated point clouds, where different colors correspond to different parts; it intuitively illustrates the diversity and quality of the results generated by SGAS. We also conduct an ablation study on the number of parts in the Airplane category. As shown in Table 4, more or fewer parts are not necessarily beneficial to the results. Therefore, we choose the number of parts N = 6.

#Parts (n)   JSD ↓   MMD-CD ↓   MMD-EMD ↓   COV-CD % ↑   COV-EMD % ↑
2            0.042   0.0004     0.045       61           47
3            0.039   0.0004     0.043       60           52
5            0.040   0.0005     0.042       60           50
6            0.036   0.0004     0.039       61           58
8            0.040   0.0005     0.044       60           46
13           0.035   0.0005     0.040       60           53

Table 4: Ablation results for the number of parts. The optimal and suboptimal results are highlighted in bold and italics respectively. n = 6 is a suitable number of parts.

Conclusion

Previous methods do not disentangle each part completely or lack constraints between parts, which leads to poor diversity, fidelity, and quality when performing part editing. In this work, to solve these problems, we first propose a novel four-stage point cloud part editing process.
Then, based on this process and two new strategies (feature disentanglement and constraint), we propose a part editing model, SGAS, which can realize various part editing operations. By introducing metrics from multimodal completion and proposing a new metric, TMDS, we establish quantitative evaluations for diverse part editing. In addition, SGAS can be pruned to realize unsupervised part-aware point cloud generation. Experiments show that it performs better than previous methods.

Limitation

Since we do not design part offset networks for the generated parts but instead utilize the relatively fixed spatial positions of each generated part to ensure good assembly, SGAS can only achieve part editing for objects with relatively consistent prototypes. For example, SGAS cannot handle part editing for both ceiling lamps and table lamps simultaneously, as the spatial order of their parts is opposite. In addition, we also find that the performance of SGAS is limited by the pre-trained autoencoders: the embedded features are better when the parts are normalized. Therefore, it will be beneficial in the future to first generate normalized parts and then design part offset networks to align them.

Acknowledgments

This work was supported by the National Natural Science Fund of China (62176064). Cheng Jin is the corresponding author.

References

Achlioptas, P.; Diamanti, O.; Mitliagkas, I.; and Guibas, L. J. 2017. Learning Representations and Generative Models For 3D Point Clouds. arXiv preprint arXiv:1707.02392.
Arjovsky, M.; Chintala, S.; and Bottou, L. 2017. Wasserstein Generative Adversarial Networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70, 214–223.
Arora, H.; Mishra, S.; Peng, S.; Li, K.; and Mahdavi-Amiri, A. 2022. Multimodal Shape Completion via Implicit Maximum Likelihood Estimation.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2958–2967.
Chen, X.; Chen, B.; and Mitra, N. J. 2020. Unpaired Point Cloud Completion on Real Scans using Adversarial Training. In Proceedings of the International Conference on Learning Representations (ICLR).
Chen, Z.; Yin, K.; Fisher, M.; Chaudhuri, S.; and Zhang, H. 2019. BAE-NET: Branched Autoencoder for Shape Co-Segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
Cheng, A.-C.; Li, X.; Liu, S.; Sun, M.; and Yang, M.-H. 2022. Autoregressive 3D Shape Generation via Canonical Mapping. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part III, 89–104.
Cheng, Y.-C.; Lee, H.-Y.; Tulyakov, S.; Schwing, A.; and Gui, L. 2023. SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Dai, A.; Chang, A. X.; Savva, M.; Halber, M.; Funkhouser, T.; and Nießner, M. 2017. ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE.
Dubrovina, A.; Xia, F.; Achlioptas, P.; Shalah, M.; Groscot, R.; and Guibas, L. J. 2019. Composite Shape Modeling via Latent Space Factorization. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
Fan, H.; Su, H.; and Guibas, L. J. 2017. A Point Set Generation Network for 3D Object Reconstruction From a Single Image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Gal, R.; Bermano, A.; Zhang, H.; and Cohen-Or, D. 2021. MRGAN: Multi-Rooted 3D Shape Representation Learning With Unsupervised Part Disentanglement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2039–2048.
Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; and Courville, A. C. 2017.
Improved Training of Wasserstein GANs. In Advances in Neural Information Processing Systems, volume 30.
Huang, X.; and Belongie, S. 2017. Arbitrary Style Transfer in Real-Time With Adaptive Instance Normalization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Hui, K.-H.; Li, R.; Hu, J.; and Fu, C.-W. 2022. Neural Template: Topology-Aware Reconstruction and Disentangled Generation of 3D Meshes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 18572–18582.
Jiang, W.; and Daniilidis, K. 2022. Probabilistic Shape Completion by Estimating Canonical Factors with Hierarchical VAE. arXiv preprint arXiv:2212.03370.
Jones, R. K.; Barton, T.; Xu, X.; Wang, K.; Jiang, E.; Guerrero, P.; Mitra, N. J.; and Ritchie, D. 2020. ShapeAssembly: Learning to Generate Programs for 3D Shape Structure Synthesis. ACM Transactions on Graphics (TOG), Siggraph Asia 2020, 39(6): Article 234.
Li, J.; Niu, C.; and Xu, K. 2020. Learning Part Generation and Assembly for Structure-Aware Shape Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07): 11362–11369.
Li, J.; Xu, K.; Chaudhuri, S.; Yumer, E.; Zhang, H.; and Guibas, L. 2017. GRASS: Generative Recursive Autoencoders for Shape Structures. ACM Transactions on Graphics (Proc. of SIGGRAPH 2017), 36(4).
Li, R.; Li, X.; Hui, K.-H.; and Fu, C.-W. 2021. SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation. ACM Transactions on Graphics (Proc. SIGGRAPH), 40(4).
Li, S.; Liu, M.; and Walder, C. 2022. EditVAE: Unsupervised Parts-Aware Controllable 3D Point Cloud Shape Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2): 1386–1394.
Li, Y.; and Baciu, G. 2022. SG-GAN: Adversarial Self-Attention GCN for Point Cloud Topological Parts Generation. IEEE Transactions on Visualization and Computer Graphics, 28(10): 3499–3512.
Liu, J.; Snodgrass, S.; Khalifa, A.; Risi, S.; Yannakakis, G. N.; and Togelius, J. 2021.
Deep learning for procedural content generation. Neural Computing and Applications, 33(1): 19–37.
Mittal, P.; Cheng, Y.-C.; Singh, M.; and Tulsiani, S. 2022. AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 306–315.
Mo, K.; Guerrero, P.; Yi, L.; Su, H.; Wonka, P.; Mitra, N.; and Guibas, L. 2019a. StructureNet: Hierarchical Graph Networks for 3D Shape Generation. ACM Transactions on Graphics (TOG), Siggraph Asia 2019, 38(6): Article 242.
Mo, K.; Guerrero, P.; Yi, L.; Su, H.; Wonka, P.; Mitra, N. J.; and Guibas, L. J. 2020. StructEdit: Learning Structural Shape Variations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Mo, K.; Zhu, S.; Chang, A. X.; Yi, L.; Tripathi, S.; Guibas, L. J.; and Su, H. 2019b. PartNet: A Large-Scale Benchmark for Fine-Grained and Hierarchical Part-Level 3D Object Understanding. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Öngün, C.; and Temizel, A. 2020. LPMNet: Latent part modification and generation for 3D point clouds. Comput. Graph., 96: 1–13.
Postels, J.; Liu, M.; Spezialetti, R.; Gool, L. V.; and Tombari, F. 2021. Go with the Flows: Mixtures of Normalizing Flows for Point Cloud Generation and Reconstruction. In 2021 International Conference on 3D Vision (3DV), 1249–1258.
Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Schor, N.; Katzir, O.; Zhang, H.; and Cohen-Or, D. 2019. CompoNet: Learning to Generate the Unseen by Part Synthesis and Composition. In The IEEE International Conference on Computer Vision (ICCV).
Shu, D. W.; Park, S. W.; and Kwon, J. 2019.
3D Point Cloud Generative Adversarial Network Based on Tree Structured Graph Convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
van den Oord, A.; Vinyals, O.; and Kavukcuoglu, K. 2017. Neural Discrete Representation Learning. In Advances in Neural Information Processing Systems, volume 30.
Wang, H.; Schor, N.; Hu, R.; Huang, H.; Cohen-Or, D.; and Huang, H. 2018. Global-to-Local Generative Model for 3D Shapes. ACM Trans. Graph., 37(6).
Wang, K.; Guerrero, P.; Kim, V. G.; Chaudhuri, S.; Sung, M.; and Ritchie, D. 2022. The shape part slot machine: Contact-based reasoning for generating 3D shapes from parts. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part III, 610–626.
Wu, C.; Zheng, J.; Pfrommer, J.; and Beyerer, J. 2023. Attention-based Part Assembly for 3D Volumetric Shape Modeling. arXiv preprint arXiv:2304.10986.
Wu, R.; Chen, X.; Zhuang, Y.; and Chen, B. 2020a. Multimodal shape completion via conditional generative adversarial networks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV, 281–296.
Wu, R.; Zhuang, Y.; Xu, K.; Zhang, H.; and Chen, B. 2020b. PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Wu, Z.; Wang, X.; Lin, D.; Lischinski, D.; Cohen-Or, D.; and Huang, H. 2019. SAGNet: Structure-Aware Generative Network for 3D-Shape Modeling. ACM Trans. Graph., 38(4).
Yan, X.; Lin, L.; Mitra, N. J.; Lischinski, D.; Cohen-Or, D.; and Huang, H. 2022. ShapeFormer: Transformer-Based Shape Completion via Sparse Representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6239–6249.
Yang, Y.; Feng, C.; Shen, Y.; and Tian, D. 2018. FoldingNet: Point Cloud Auto-Encoder via Deep Grid Deformation.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Yi, L.; Kim, V. G.; Ceylan, D.; Shen, I.-C.; Yan, M.; Su, H.; Lu, C.; Huang, Q.; Sheffer, A.; and Guibas, L. 2016. A Scalable Active Framework for Region Annotation in 3D Shape Collections. ACM Trans. Graph., 35(6).
Yin, K.; Chen, Z.; Chaudhuri, S.; Fisher, M.; Kim, V. G.; and Zhang, H. 2020. Coalesce: Component assembly by learning to synthesize connections. In 2020 International Conference on 3D Vision (3DV), 61–70.
Zhang, D.; Choi, C.; Kim, J.; and Kim, Y. M. 2021. Learning to Generate 3D Shapes with Generative Cellular Automata. In International Conference on Learning Representations.
Zhang, K.; Yang, X.; Wu, Y.; and Jin, C. 2022. Attention-Based Transformation from Latent Features to Point Clouds. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3): 3291–3299.
Zhao, Y.; Zhou, Y.; Chen, R.; Hu, B.; and Ai, X. 2021. MMFlow: Multi-Modal Flow Network for Point Cloud Completion. In Proceedings of the 29th ACM International Conference on Multimedia, 3266–3274.
Zhou, L.; Du, Y.; and Wu, J. 2021. 3D Shape Generation and Completion Through Point-Voxel Diffusion. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 5826–5835.
Zhu, C.; Xu, K.; Chaudhuri, S.; Yi, L.; Guibas, L. J.; and Zhang, H. 2020. AdaCoSeg: Adaptive Shape Co-Segmentation With Group Consistency Loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Zhuang, Y. 2022. Progressive Multimodal Shape Generation via Contextual Part Reasoning. In 2022 The 6th International Conference on Machine Learning and Soft Computing, 173–178.
Zou, C.; Yumer, E.; Yang, J.; Ceylan, D.; and Hoiem, D. 2017. 3D-PRNN: Generating Shape Primitives With Recurrent Neural Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Deep Quantum Error Correction

Yoni Choukroun, Lior Wolf
The Blavatnik School of Computer Science, Tel Aviv University
choukroun.yoni@gmail.com, wolf@cs.tau.ac.il

Abstract

Quantum error correction codes (QECC) are a key component for realizing the potential of quantum computing. QECC, like its classical counterpart (ECC), enables the reduction of error rates by distributing quantum logical information across redundant physical qubits, such that errors can be detected and corrected. In this work, we efficiently train novel end-to-end deep quantum error decoders. We resolve the quantum measurement collapse by augmenting syndrome decoding to predict an initial estimate of the system noise, which is then refined iteratively through a deep neural network. The logical error rates, calculated over finite fields, are directly optimized via a differentiable objective, enabling efficient decoding under the constraints imposed by the code. Finally, our architecture is extended to support faulty syndrome measurement, by efficient decoding of repeated syndrome sampling. The proposed method demonstrates the power of neural decoders for QECC by achieving state-of-the-art accuracy, outperforming, for small-distance topological codes, the existing end-to-end neural and classical decoders, which are often computationally prohibitive.

Introduction

Error Correcting Codes (ECC) are required in order to overcome computation and transmission corruption in almost every computation device (Shannon 1948; MacKay 2003). Quantum systems are known for being extremely noisy, thereby requiring the use of error correction (Lidar and Brun 2013; Ballance et al. 2016; Huang et al. 2019; Foxen et al. 2020). However, adapting existing classical ECC methods to the quantum domain (QECC) is not straightforward (Raimond and Haroche 1996).
The first difficulty in applying ECC-based knowledge to QECC arises from the no-cloning theorem for quantum states (Wootters and Zurek 1982), which asserts that it is impossible to clone a quantum state and thus add arbitrarily redundant parity information, as done in classical ECC. The second challenge is the need to detect and correct continuous quantum bit-flips, as well as phase-flips (Cory et al. 1998; Schindler et al. 2011), while classical ECC addresses only bit-flip errors. A third major challenge is the wave function collapse phenomenon: any direct measurement of the qubits (while standard in ECC) would cause the wave function to collapse and erase the encoded quantum information (Neumann, Wigner, and Hofstadter 1955). Shor (1995) proposed the first quantum error correction scheme, demonstrating that these challenges can be overcome. Subsequently, threshold theorems (a threshold is the minimal error rate such that adding more physical qubits results in fewer logical errors) have shown that increasing the distance of a code results in a corresponding reduction in the logical error rate, signifying that quantum error correction codes can arbitrarily suppress the logical error rate (Aharonov and Ben-Or 1997; Kitaev 1997a; Preskill 1998). This distance increase is obtained by developing encoding schemes that reliably store and process information in a logical set of qubits, by encoding it redundantly on top of a larger set of less reliable physical qubits (Nielsen and Chuang 2002). Most current QECC methods fall into the category of stabilizer codes, which can be seen as a generalization of classical linear codes (Gottesman 1997). Similarly to the classical parity check constraints, a group of stabilizer operators can provide a syndrome, while preserving the logical quantum states and enabling error detection. Optimal decoding is defined by the infeasible NP-hard maximum likelihood rule (Dennis et al. 2002; Kuo and Lu 2020).

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
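To see why maximum likelihood decoding is intractable, a brute-force sketch over a toy classical parity-check matrix is enough: for i.i.d. bit-flip noise with flip probability below 0.5, the most likely error consistent with the syndrome is the minimum-weight one, and finding it requires (in general) searching all 2^n patterns. Names are hypothetical, and this classical simplification ignores the quantum subtlety that degenerate errors within the same logical class should be summed over:

```python
import itertools
import numpy as np

def ml_syndrome_decode(H, s):
    """Brute-force ML syndrome decoding for i.i.d. bit-flip noise (p < 0.5):
    among all errors e with H e = s (mod 2), return the minimum-weight one.
    Exponential in n, which is why practical decoders only approximate it."""
    n = H.shape[1]
    best = None
    for bits in itertools.product([0, 1], repeat=n):
        e = np.array(bits)
        if np.array_equal(H.dot(e) % 2, s) and (best is None or e.sum() < best.sum()):
            best = e
    return best
```

Algorithms such as MWPM, MCMC-based decoders, and the neural decoders discussed below can all be read as polynomial-time approximations of this rule.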
Considerable research has been dedicated to the design of codes with additional algebraic structure in order to support efficient decoding (Calderbank and Shor 1996; Schindler et al. 2011). One of the most promising categories of codes is topological codes, particularly surface codes, which originated from Toric codes (Bravyi and Kitaev 1998; Kitaev 1997a,b; Dennis et al. 2002; Kitaev 2003). Given boundary conditions, the idea is to encode every logical qubit in a 2D lattice of physical qubits. This local design of the code via nearest-neighbor coupled qubits allows the correction of a wide range of errors, and, under certain assumptions, surface codes provide an exponential reduction in the error rate (Kitaev 1997a,c). In this work, we present a novel end-to-end neural quantum decoding algorithm that can be applied to any stabilizer code. We first predict an approximation of the system noise, turning the quantum syndrome decoding task into a form that is compliant with the classical ECC Transformer decoder (Choukroun and Wolf 2022), which is based on the parity-check matrix. This initial estimate of the noise is reminiscent of Monte Carlo Markov Chain methods (Wootton and Loss 2012), which start the decoding process with a random error that is compatible with the syndrome, and then refine it iteratively. Second, in order to support logical error optimization, we develop a novel differentiable way of learning under the Boolean algebra constraints of the logical error rate metric. Finally, we propose a new architecture that is capable of performing quantum error correction under faulty syndrome measurements. As far as we can ascertain, this is the first time that (i) a Transformer architecture (Vaswani et al.
2017) is applied to the quantum syndrome decoding setting by augmenting syndrome decoding with noise prediction, (ii) a decoding algorithm is optimized directly over a highly non-differentiable finite field metric, and (iii) a deep neural network is applied to the faulty syndrome decoding task in a time-invariant fashion. Applied to a variety of code lengths, noise, and measurement errors, our method outperforms the state-of-the-art method. This holds even when employing shallow architectures, providing a sizable speedup over the existing neural decoders, which remain computationally inefficient (Edmonds 1965; Dennis et al. 2002; Higgott 2022).

Related Work

Maximum Likelihood quantum decoding is an NP-hard problem (Kuo and Lu 2020), and several approximation methods have been developed as practical alternatives, trading off accuracy for reduced complexity. The Minimum-Weight Perfect Matching (MWPM) algorithm runs in polynomial time and is known to nearly achieve the optimal threshold for independent noise models (Bose and Dowling 1969; Micali and Vazirani 1980). It is formulated as a problem of pairing excitations and can be solved using the Blossom algorithm (Kolmogorov 2009). Approximations of this algorithm have been developed by parallelizing the group processing or removing edges that are unlikely to contribute to the matching process. However, MWPM implementations and modifications generally induce a degradation in accuracy or remain too slow even for the current generation of superconducting qubits (Fowler, Whiteside, and Hollenberg 2012; Fowler 2013; Meinerz, Park, and Trebst 2022). Among other approaches, methods based on Monte Carlo Markov Chains (MCMC) (Wootton and Loss 2012; Hutter, Wootton, and Loss 2014) iteratively modify the error estimate to increase its likelihood with respect to the received syndrome. Renormalization Group methods (Duclos-Cianci and Poulin 2010, 2013) perform decoding by dividing the lattice into small cells of qubits.
Union-Find decoders (Huang, Newman, and Brown 2020; Delfosse and Nickerson 2021) iteratively turn Pauli errors into losses that are corrected by the Union-Find data structure, and are more suited to other types of noise. Multiple neural-network-based quantum decoders have emerged in the last few years (Varsamopoulos, Criger, and Bertels 2017; Torlai and Melko 2017; Krastanov and Jiang 2017; Chamberland and Ronagh 2018; Andreasson et al. 2019; Wagner, Kampermann, and Bruß 2020; Sweke et al. 2020; Varona and Martin-Delgado 2020; Meinerz, Park, and Trebst 2022). These methods are amenable to parallelization and can offer a high degree of adaptability. Current contributions make use of multi-layer perceptrons (Varsamopoulos, Criger, and Bertels 2017; Torlai and Melko 2017; Krastanov and Jiang 2017; Wagner, Kampermann, and Bruß 2020) or relatively shallow convolutional NNs (Andreasson et al. 2019; Sweke et al. 2020), or couple local neural decoding with classical methods for boosting the decoding accuracy (Meinerz, Park, and Trebst 2022). In parallel, deep learning methods have been improving steadily for classical ECC, reaching state-of-the-art results for several code lengths. Many of these methods rely on augmenting the belief propagation algorithm with learnable parameters (Pearl 1988; Nachmani, Be'ery, and Burshtein 2016; Lugosch and Gross 2017; Nachmani and Wolf 2019; Buchberger et al. 2020), while others make use of more general neural network architectures (Cammerer et al. 2017; Gruber et al. 2017; Kim et al. 2018; Bennatan, Choukroun, and Kisilev 2018; Choukroun and Wolf 2023b). Recently, Choukroun and Wolf (2022) proposed a Transformer-based architecture (Vaswani et al. 2017) that is currently the state of the art in neural decoders for classical codes. We address QECC by expanding the ECCT architecture to account for the challenges arising from the transition from classical to quantum neural decoding.
By using adapted masking obtained from the stabilizers, the Transformer-based decoder is able to learn dependencies between related qubits. However, it is important to note that analogous expansions can be straightforwardly applied to other neural decoder architectures.

Background

We provide the necessary background on classical and quantum error correction coding, and a description of the state-of-the-art Error Correction Code Transformer (ECCT) decoder.

Classical Error Correction Code

A linear code C is defined by a binary generator matrix G of size k × n and a binary parity check matrix H of size (n − k) × n, defined such that GH^T = 0 over the order-2 Galois field GF(2). The input message m ∈ {0, 1}^k is encoded by G to a codeword x ∈ C ⊂ {0, 1}^n satisfying Hx = 0 and transmitted via a symmetric (potentially binary) channel, e.g., an additive white Gaussian noise (AWGN) channel. Let y denote the channel output, represented as y = x_s + ε ∈ S ⊆ R^n, where x_s denotes the modulation of x, e.g., Binary Phase Shift Keying (BPSK), and ε is random noise independent of the transmitted x. The main goal of the decoder f : S → {0, 1}^n is to provide an approximation of the codeword x̂ = f(y). An important notion in ECC is the syndrome, which is obtained by multiplying the binary mapping of y with the parity check matrix over GF(2), such that

s := s(y) = H y_b := H(x ⊕ ε_b) = H ε_b,    (1)

where ⊕ denotes the XOR operator, and y_b and ε_b denote the hard-decision vectors of y and ε, respectively.

Quantum Error Correction Code

The fundamental transition to the quantum realm is defined by the shift from the classical bit to the quantum bit (qubit), whose quantum state |ψ⟩ is defined by |ψ⟩ = α|0⟩ + β|1⟩, s.t.
α, β ∈ C, |α|^2 + |β|^2 = 1.    (2)

A coherent quantum error process E can be decomposed into a sum of operators from the Pauli set {I, X, Z, XZ}, where the Pauli basis is defined by the identity mapping I, the quantum bit-flip X, and the phase-flip Z, such that

I|ψ⟩ = |ψ⟩,
X|ψ⟩ = αX|0⟩ + βX|1⟩ = α|1⟩ + β|0⟩,
Z|ψ⟩ = αZ|0⟩ + βZ|1⟩ = α|0⟩ − β|1⟩,    (3)

and where the single-qubit error is defined as

E|ψ⟩ = α_I |ψ⟩ + α_X X|ψ⟩ + α_Z Z|ψ⟩ + α_XZ XZ|ψ⟩,    (4)

with α_I, α_X, α_Z, α_XZ ∈ C being the expansion coefficients of the noise process. According to the no-cloning theorem, a quantum state |ψ⟩ cannot be copied redundantly (i.e., |ψ⟩ ⊗ ··· ⊗ |ψ⟩, the n-fold tensor product). However, quantum information redundancy is possible through a logical state encoding |ψ⟩_n of a given state |ψ⟩ via quantum entanglement and a unitary operator U, such that |ψ⟩_n = U(|ψ⟩ ⊗ |0⟩ ⊗ ... ⊗ |0⟩). An example of such a unitary operator is the one generating the GHZ state (Greenberger, Horne, and Zeilinger 1989) with CNOT gates. In |ψ⟩_n, the logical state is defined within a subspace of the expanded Hilbert space, which determines both the codespace C and its orthogonal error space F, defined such that E|ψ⟩_n ∈ F. The orthogonality of C and F makes it possible to determine the subspace occupied by the logical qubit through projective measurement, without compromising the encoded quantum information. In the context of quantum coding, the set P of non-destructive measurements of this type are called stabilizer measurements and are performed via additional qubits (ancilla qubits). The result of all of the stabilizer measurements on a given state is called the syndrome, such that for a given stabilizer generator P ∈ P we have P|ψ⟩_n = |ψ⟩_n, and PE|ψ⟩_n = −E|ψ⟩_n, ∀|ψ⟩_n ∈ C, given an anti-commuting (i.e., −1 eigenvalue) and thus detectable error E. If the syndrome measurement is faulty, it might be necessary to repeat it to improve confidence in the outcome (Dennis et al. 2002, Section IV.B).
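Equation (3) can be checked directly with 2×2 Pauli matrices; this is a generic single-qubit illustration, not part of the proposed decoder:

```python
import numpy as np

# Computational basis states and Pauli operators of Eq. (3)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit-flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase-flip

alpha, beta = 0.6, 0.8  # |alpha|^2 + |beta|^2 = 1
psi = alpha * ket0 + beta * ket1
```

X swaps the basis amplitudes, Z negates the |1⟩ amplitude, and XZ does both, matching the error expansion of Eq. (4); X and Z anti-commute, which is what makes errors detectable through the −1 eigenvalue of a stabilizer measurement.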
An important class of Pauli operators is the class of logical operators. These operators are not elements of the stabilizer group but commute with every stabilizer. While stabilizer operators act trivially in the code space, i.e., P|ψ⟩_n = |ψ⟩_n, logical operators ℓ ∈ L act non-trivially in it, i.e., ∃|ϕ⟩_n ∈ C s.t. ℓ|ψ⟩_n = |ϕ⟩_n. Such operators commute with the stabilizers but can also represent undetectable errors (Brun 2020). Thus, similarly to the classical information bits, QECC benchmarks generally adopt logical error metrics, which measure the discrepancy between the predicted projected noise Lε̂ and the real one Lε, where L is the discrete logical operators' matrix.

Figure 1: Illustration of the (a) classical and (b) quantum ECC system. Our work focuses on the latter. See Appendix A (Choukroun and Wolf 2023a) for a detailed illustration.

QECC from the ECC perspective

Another way to represent stabilizer codes is to split the stabilizer operators into two independent parity check matrices, defining the block parity check matrix H such that

H = [ H_Z   0
       0   H_X ],

separating the phase-flip checks H_Z and the bit-flip checks H_X. The syndrome s is then computed as s = Hε, with H the check matrix defined according to the code stabilizers in P, and ε the binary noise. The main goal of the quantum decoder f : {0, 1}^{|P|} → {0, 1}^{|L|} is to provide a noise approximation given only the syndrome. Therefore, the quantum setting can be reduced to its classical counterpart as follows. The k logical qubits are similar to the classical k information bits, and the n physical qubits are similar to the classical codeword. The syndrome of the quantum state can be computed or simulated similarly to the classical way, by defining the binary parity check matrix built upon the code quantum stabilizers.
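A toy NumPy sketch of the block check matrix and syndrome computation described above; the function names are hypothetical and the small H_Z, H_X used in the usage example are illustrative matrices, not a real stabilizer code:

```python
import numpy as np

def block_parity_check(h_z, h_x):
    """Assemble H = [[H_Z, 0], [0, H_X]], separating phase-flip and bit-flip checks."""
    z01 = np.zeros((h_z.shape[0], h_x.shape[1]), dtype=int)
    z10 = np.zeros((h_x.shape[0], h_z.shape[1]), dtype=int)
    return np.block([[h_z, z01], [z10, h_x]])

def syndrome(h, eps):
    """s = H eps over GF(2)."""
    return h.dot(eps) % 2
```

This is exactly the classical syndrome of Eq. (1) applied to the binary noise vector, which is what makes the reduction to classical syndrome decoding possible.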
The main differences from classical ECC are: (i) no access to the current state is possible, while arbitrary measurement of y is standard in the classical world; (ii) we are interested in the logical qubits, predicting the code up to the logical operators mapping L; and finally, (iii) repetitive sampling of the syndrome due to syndrome measurement error. These differences are at the core of our contributions. An illustration of the classical and quantum coding and decoding framework is given in Figure 1. The goal of our method is to learn a decoder parameterized by weights θ such that ε̂ = fθ(s).

Error Correction Code Transformer

The SOTA ECCT (Choukroun and Wolf 2022) has been recently proposed for classical error decoding. It consists of a transformer architecture (Vaswani et al. 2017) with several modifications. Following (Bennatan, Choukroun, and Kisilev 2018), the model's input h(y) is defined by the concatenation of the codeword-independent magnitude and syndrome, such that h(y) := [|y|, 1 − 2s(y)] ∈ R^{2n−k}, where [·, ·] denotes vector concatenation. Each element is then embedded into a high-dimensional space for more expressivity, such that the initial positional embedding Φ is given by Φ = (h(y) · 1_d^T) ⊙ W, where W ∈ R^{(2n−k)×d} is the learnable embedding matrix and ⊙ is the Hadamard product.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 66

The interaction between the bits is performed naturally via the self-attention modules coupled with a binary mask derived from the parity-check matrix

A_H(Q, K, V) = Softmax(d^{−1/2}(QK^T + g(H)))V, (5)

where g(H) is a binary masking function designed according to the parity-check matrix H, and Q, K, V are the classical self-attention projection matrices. Masking enables the incorporation of sparse and efficient information about the code while avoiding the loop vulnerability of belief propagation-based decoders (Pearl 1988).
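The masked self-attention of Eq. (5) can be sketched as below: a disallowed bit pair receives an additive score of −∞ (the g(H) mask), so its softmax weight is exactly zero. The dimensions, Q/K/V values, and mask are toy values for illustration, not the paper's learned projections.

```python
# Sketch of Eq. (5): single-head attention where scores are QK^T / sqrt(d)
# plus a mask that is 0 for check-related bit pairs and -inf otherwise.
import math

def softmax(row):
    m = max(x for x in row if x != float("-inf"))
    e = [math.exp(x - m) if x != float("-inf") else 0.0 for x in row]
    s = sum(e)
    return [x / s for x in e]

def masked_attention(Q, K, V, mask):
    d = len(Q[0])
    out = []
    for i, q in enumerate(Q):
        scores = []
        for j, k in enumerate(K):
            dot = sum(a * b for a, b in zip(q, k)) / math.sqrt(d)
            scores.append(dot if mask[i][j] else float("-inf"))
        w = softmax(scores)  # masked positions get weight exactly 0
        out.append([sum(w[j] * V[j][c] for j in range(len(V)))
                    for c in range(len(V[0]))])
    return out

# 3 "bits", embedding dim 2; the toy mask forbids bits 0 and 2 to interact.
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mask = [[1, 1, 0],
        [1, 1, 1],
        [0, 1, 1]]
out = masked_attention(Q, K, V, mask)
```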
Finally, the transformed embedding is projected onto a one-dimensional vector for noise prediction. The computational complexity is O(N(d²n + hd)), where N denotes the number of layers, and n is the code length. Here, h ≪ n² denotes the number of elements in the mask, generally very small for sparse codes, including Toric and Surface codes.

Method

We present in this Section the elements of the proposed decoding framework, the complete architecture, and the training procedure. From now on, the binary block parity check matrix is denoted by H, the binary noise by ε, the syndrome by s = Hε, and the logical operators' binary matrix by L.

Overcoming Measurement Collapse by Prediction

While syndrome decoding is a well-known procedure in ECC, most popular decoders, and especially neural decoders, assume the availability of arbitrary measurements of the channel's output. In the QECC setting, only the syndrome is available, since classical measurements are not allowed due to the wave function collapse phenomenon. We thus propose to extend ECCT by replacing the magnitude of the channel output y with an initial estimate of the noise, to be further refined by the code-aware network. In other words, we replace the channel's output magnitude measurement h(y) = [|y|, 1 − 2s] by h_q(s) = [gω(s), s]. Denoting the ECCT decoder by fθ, we have

ẑ = fθ(h_q(s)) = fθ([gω(s), s]), (6)

where gω : {0, 1}^{n_s} → R^n is the initial noise estimator, given by a shallow neural network parameterized by ω. This way, the ECCT can process the estimated input and perform decoding by analyzing the input-syndrome interactions via the masking. As a non-linear transformation of the syndrome, gω(s) is also independent of the quantum state/codeword and thus robust to overfitting. The estimator gω(s) is trained via the following objective

Lg = BCE(gω(s), ε), (7)

where BCE is the binary cross entropy loss.
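A minimal forward-pass sketch of the syndrome-only input h_q(s) = [gω(s), s] from Eq. (6): gω is mimicked here by a tiny two-layer network with a GELU non-linearity. The sizes and all weight values below are toy assumptions (the paper's gω uses a hidden dimension of 5·n_s and trained parameters).

```python
# Sketch of h_q(s) = [g_w(s), s]: a small two-layer GELU network maps the
# syndrome to an initial per-qubit soft noise estimate. Fixed toy weights.
import math

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + math.tanh(math.sqrt(2 / math.pi)
                                    * (x + 0.044715 * x ** 3)))

def linear(W, b, v):
    return [sum(w * x for w, x in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def g_omega(s, W1, b1, W2, b2):
    h = [gelu(z) for z in linear(W1, b1, s)]
    return linear(W2, b2, h)   # soft noise estimate, one value per qubit

n_s, n = 2, 3                  # toy sizes: 2 syndrome bits, 3 qubits
W1 = [[0.5, -0.5], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]]  # hidden dim 4
b1 = [0.0] * 4
W2 = [[0.25] * 4 for _ in range(n)]
b2 = [0.0] * n

s = [1, 0]
h_q = g_omega(s, W1, b1, W2, b2) + [float(x) for x in s]  # [g_w(s), s]
assert len(h_q) == n + n_s
```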
This shift from magnitude to initial error estimation is crucial for overcoming quantum measurement collapse and, as explored in the analysis section, leads to markedly better performance.

Logical Decoding

Contrary to classical ECC, quantum error correction aims to restore the noise only up to a logical generator of the code, such that several solutions can be valid error corrections. Accordingly, the commonly used metric is the logical error rate (LER), which provides valuable information on the practical decoding performance (Lidar and Brun 2013). Given the code's logical operator in its matrix form L ∈ {0, 1}^{k×n}, we wish to minimize the following LER objective

L_LER = BCE(L fθ(s), Lε), (8)

where the multiplications are performed over GF(2) (i.e. binary modulo 2) and are thus highly non-differentiable. We propose to optimize the objective using a differentiable equivalence mapping of the XOR (i.e., sum over GF(2)) operation as follows. Defining the bipolar mapping ϕ : {0, 1} → {±1} over GF(2) as ϕ(u) = 1 − 2u, u ∈ {0, 1}, we obtain the following property: ϕ(u ⊕ v) = ϕ(u)ϕ(v), ∀u, v ∈ {0, 1}. Thus, with L_i the i-th row of L and x a binary vector, we have ∀i ∈ {1 . . . k}

(Λ(L, x))_i := L_i ⊕ x = ϕ⁻¹(Π_j ϕ(L_ij · x_j)). (9)

Thus, as a composition of differentiable functions, Λ(L, x) is differentiable and we can redefine our objective as follows

L_LER = BCE(Λ(L, bin(fθ(s))), Lε), (10)

where bin denotes the binarization of the soft prediction of the trained model. While many existing works make use of the straight-through estimator (STE) (Bengio, Léonard, and Courville 2013) for the binary quantization of the activations, we opt for its differentiable approximation with the sigmoid function (i.e., bin(x) = σ(x) = (1 + e^{−x})⁻¹). As shown in our ablation analysis, the performance of the STE is slightly inferior to the sigmoid approach. In addition to directly minimizing the LER metric, we are interested in noise prediction solutions that are close to the real system noise.
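The bipolar identity behind Eq. (9) can be checked numerically: with ϕ(u) = 1 − 2u, the GF(2) parity of a set of bits equals ϕ⁻¹ of the product of their bipolar images, and the product form stays differentiable when the bits are soft values in [0, 1]. The sketch below verifies this exhaustively on 4-bit inputs.

```python
# Check that phi^{-1}(prod phi(u_j)) equals the XOR (GF(2) sum) of the u_j,
# which is the trick that makes the LER objective of Eq. (10) differentiable.
from functools import reduce
from itertools import product

phi = lambda u: 1 - 2 * u          # {0,1} -> {+1,-1}
phi_inv = lambda v: (1 - v) / 2    # {+1,-1} -> {0,1}

def parity_hard(bits):
    return reduce(lambda a, b: a ^ b, bits)

def parity_soft(bits):
    # differentiable when `bits` are soft values in [0, 1]
    p = 1.0
    for u in bits:
        p *= phi(u)
    return phi_inv(p)

for bits in product([0, 1], repeat=4):
    assert parity_soft(bits) == parity_hard(bits)
```

Applying `parity_soft` row-wise to the products L_ij · x_j then gives a soft version of Λ(L, x) that gradients can flow through.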
We, therefore, suggest regularizing the objective with the classical and popular Bit Error Rate (BER) objective, defined as L_BER = BCE(fθ(s), ε). Combining the loss terms, the overall objective is given by

L = λ_BER L_BER + λ_LER L_LER + λ_g Lg, (11)

where λ_BER, λ_LER, λ_g denote the weights of each objective.

Noisy Syndrome Measurements

In the presence of measurement errors, each syndrome measurement is repeated T times. This gives the decoder input an additional time dimension. Formally, given binary system noises {εt}_{t=1}^T and binary measurement noises {ε̃t}_{t=1}^T, we have the syndrome st at a given time t ∈ N+ defined as

st = H(x ⊕ ε1 ⊕ ··· ⊕ εt) ⊕ ε̃t. (12)

To remain invariant to the number of measurements, we first analyze each measurement separately and then perform global decoding by applying a symmetric pooling function, e.g. an average, in the middle of the neural decoder. Given a NN decoder with N layers and the hidden activation tensor φ ∈ R^{T×n×d_l} at layer l = ⌊N/2⌋, the new pooled embedding is given by summation along the first dimension, φ̃ = (1/T) Σ_{t=1}^T φt. The loss Lg is thus defined as the distance between the pooled embedding and the noise, i.e.,

Lg = BCE(Σ_t gω(st)/T, ε), (13)

with ε the cumulative binary system noise. Since we extend the ECCT architecture to its quantum counterpart (QECCT), the hidden activation tensor is now of shape φ ∈ R^{T×h×n×d_l}, with h the number of self-attention heads, and pooling is performed at the ⌊N/2⌋ Transformer block. An advantage of this approach is its low computational cost, since the analysis and comparison are performed in parallel at the embedding level.

Figure 2: The proposed QECCT. The pooling layer is performed after the middle self-attention block of the model. M(H) is the mask derived from the parity-check matrix.
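The faulty-measurement model of Eq. (12) can be simulated as follows; the 3-bit check matrix, the error probability, and the seed are illustrative toy values, not the paper's codes. At each time step a fresh system noise εt accumulates, and each observed syndrome is further corrupted by measurement noise ε̃t.

```python
# Sketch of Eq. (12): at time t the decoder sees
# s_t = H(eps_1 xor ... xor eps_t) xor meas_noise_t.
import random

random.seed(0)

def gf2_matvec(H, v):
    return [sum(h * x for h, x in zip(row, v)) % 2 for row in H]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sample(n, p):
    return [int(random.random() < p) for _ in range(n)]

H = [[1, 1, 0], [0, 1, 1]]     # toy parity checks
n, T, p = 3, 4, 0.2

cum_eps = [0] * n
syndromes = []
for t in range(T):
    cum_eps = xor(cum_eps, sample(n, p))                   # new system noise
    s_t = xor(gf2_matvec(H, cum_eps), sample(len(H), p))   # + meas. noise
    syndromes.append(s_t)

# The decoder pools the T measurements (Eq. 13); the training target is the
# cumulative system noise `cum_eps`.
```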
Architecture and Training

The initial encoding is defined as a d-dimensional one-hot encoding of the n + n_s input elements, where n is the number of physical qubits and n_s the length of the syndrome. The network gω is defined as a shallow network with two fully connected layers of hidden dimension equal to 5n_s and with a GELU non-linearity. The decoder is defined as a concatenation of N decoding layers composed of self-attention and feed-forward layers interleaved with normalization layers. In case of faulty syndrome measurements, the ⌊N/2⌋-th layer performs average pooling over the time dimension. The output is obtained via two fully connected layers. The first layer reduces the element-wise embedding to a one-dimensional n + n_s vector and the second to an n-dimensional vector representing the soft decoded noise, trained with the objective given in Eq. 11. An illustration of the proposed QECCT is given in Figure 2. The complexity of the network is linear in the code length n and quadratic in the embedding dimension d, and is defined by O(Nd²n) for sparse (e.g. topological) codes. The acceleration of the proposed method (e.g. pruning, quantization, distillation, low-rank approximation) (Wang et al. 2020; Lin et al. 2021) is out of the scope of this paper and left for future work. For example, low-rank approximation would make the complexity also linear in d (Wang et al. 2020). Also, no optimization of the sparse self-attention mechanism was employed in our implementation. The Adam optimizer (Kingma and Ba 2014) is used with 512 samples per minibatch, for 200 to 800 epochs depending on the code length, with 5000 minibatches per epoch. The training is performed by randomly sampling noise in the physical error rate testing range. The default weight parameters are λ_g = 0.5, λ_LER = 1, λ_BER = 0.5. Other configurations and longer training can be beneficial but were not tested due to a lack of computational resources. The default architecture is N = 6, d = 128.
We initialized the learning rate to 5·10⁻⁴, coupled with a cosine decay scheduler down to 5·10⁻⁷ at the end of training. No warmup (Xiong et al. 2020) was employed. Training and experiments were performed on a 12GB Titan V GPU. The training time ranges from 153 to 692 seconds per epoch for the 32 to 400 code lengths, respectively, with the default architecture. Test time is in the range of 0.1 to 0.6 milliseconds per sample. The number of testing samples is set to 10⁶, enough to obtain a small standard deviation (∼10⁻⁴) between experiments.

Application to Topological Codes

While our framework is universal in terms of the code, we focus on the popular Surface codes and, more specifically, on Toric codes (Kitaev 1997a,b, 2003), which are their variant with periodic boundary conditions. These codes are among the most attractive candidates for the experimental realization of quantum computing, as they can be implemented on a two-dimensional grid of qubits with local check operators (Bravyi et al. 2018). The physical qubits are placed on the edges of a two-dimensional lattice of length L, such that the stabilizers are defined with respect to the code lattice architecture that defines the codespace, where k = 2, n = 2L². The stabilizers are defined in two groups: vertex operators are defined on each vertex as the product of X operators on the adjacent qubits, and plaquette operators are defined on each face as the product of Z operators on the bordering qubits. Therefore, there exist a total of 2L² stabilizers, L² for each stabilizer group. Assuming that a qubit is associated with every edge of the lattice, for a given vertex v we have the vertex operator defined as Xv = Π_{i∈v} Xi, and for a given plaquette p, the plaquette operator defined as Zp = Π_{i∈p} Zi. An illustration is given in Appendix G. The mask is defined such that the self-attention mechanism only takes into consideration bits related to each other in terms of the stabilizers (i.e., the parity-check matrix).
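The Toric-code check matrices described above can be constructed as a sketch: qubits sit on the edges of an L×L periodic lattice (n = 2L²), each vertex operator applies X to its four incident edges, and each plaquette operator applies Z to its four bordering edges. The edge indexing convention below is one possible choice, not necessarily the one used in the paper's implementation.

```python
# Build binary check matrices H_X (vertex/X stabilizers) and H_Z
# (plaquette/Z stabilizers) for the L x L Toric code with periodic
# boundaries; qubits live on edges, so n = 2 * L * L.
def toric_checks(L):
    n = 2 * L * L
    h = lambda i, j: (i % L) * L + (j % L)           # horizontal edge id
    v = lambda i, j: L * L + (i % L) * L + (j % L)   # vertical edge id

    def row(edges):
        r = [0] * n
        for e in edges:
            r[e] ^= 1
        return r

    # vertex (i,j): the four edges meeting at that vertex
    H_X = [row([h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)])
           for i in range(L) for j in range(L)]
    # plaquette (i,j): the four edges bordering that face
    H_Z = [row([h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)])
           for i in range(L) for j in range(L)]
    return H_X, H_Z

H_X, H_Z = toric_checks(4)
assert all(sum(r) == 4 for r in H_X + H_Z)   # weight-4 stabilizers
# X- and Z-type stabilizers commute: every pair overlaps on an even
# number of qubits, i.e. H_X @ H_Z^T = 0 over GF(2).
for rx in H_X:
    for rz in H_Z:
        assert sum(a * b for a, b in zip(rx, rz)) % 2 == 0
```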
The parity-check matrices and their corresponding masks for several Toric codes are provided in Figure 3, where one can observe the high locality induced by the code architecture (the mask only reflects stabilizers-related elements).

Experiments

We evaluate our method on various Toric code lengths, considering the two common noise models: independent and depolarization. In independent (uncorrelated) noise, X and Z errors occur independently and with equal probabilities; therefore, decoding can be performed on the X or Z stabilizers separately. Depolarization noise assigns equal probability p/3 to all three Pauli operators, P(X) = P(Z) = P(Y) = p/3, P(I) = 1 − p, where Y is the Pauli operator defined as Y = iXZ.

Figure 3: (a) The parity-check matrix and (b) the induced masking for the 2-Toric code. (c,d) the corresponding matrix and mask for the 4-Toric code. The parity-check matrix comprises two block matrices for the X and Z stabilizers. We can observe the high sparsity of the Toric code.

In the experiments where measurement errors are incorporated, each syndrome measurement is repeated T = n times and the probability of the measurement error is the same as the probability of the syndrome error, i.e., the distributions of ε and ε̃ in Eq. 12 are the same as in (Dennis et al. 2002; Higgott 2022; Wang, Harrington, and Preskill 2003). The implementation of the Toric codes is taken from (Krastanov and Jiang 2017). As a baseline, we consider the Minimum-Weight Perfect Matching (MWPM) algorithm, with complexity O((n³ + n²) log(n)), also known as the Edmonds or Blossom algorithm, which is the most popular decoder for topological codes. Its implementation is taken from (Higgott and Gidney 2022; Higgott 2022) and is close to quadratic average complexity.
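The depolarization channel described above can be sampled as follows: each qubit suffers X, Y, or Z with probability p/3 each, and a Y = iXZ error flips both the X- and Z-components of the binary noise ε = (ε_X | ε_Z). The sizes, probability, and seed are illustrative only.

```python
# Sketch of the depolarization noise model: P(X) = P(Z) = P(Y) = p/3.
import random

random.seed(1)

def depolarize(n, p):
    eps_x, eps_z = [0] * n, [0] * n
    for q in range(n):
        r = random.random()
        if r < p / 3:           # X error
            eps_x[q] = 1
        elif r < 2 * p / 3:     # Z error
            eps_z[q] = 1
        elif r < p:             # Y = iXZ: both components flip
            eps_x[q] = 1
            eps_z[q] = 1
    return eps_x + eps_z        # binary noise eps = (eps_X | eps_Z)

eps = depolarize(8, 0.3)
assert len(eps) == 16 and set(eps) <= {0, 1}
```

The correlation between the two halves of ε (through Y errors) is what makes depolarization harder than the independent noise model, where the X and Z components can be decoded separately.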
In the experiments, we employ code lengths similar to those for which the existing end-to-end neural decoders were tested, i.e., 2 < L ≤ 10 (Varsamopoulos, Criger, and Bertels 2017; Torlai and Melko 2017; Krastanov and Jiang 2017; Chamberland and Ronagh 2018; Andreasson et al. 2019; Wagner, Kampermann, and Bruß 2020). It is worth noting that none of these previous methods outperform MWPM. The physical error ranges are taken around the thresholds of the different settings, as reported in (Dennis et al. 2002; Wang, Harrington, and Preskill 2003; Krastanov and Jiang 2017). On the tested codes and settings, the Union-Find decoder (Park and Meinerz 2022) was not better than the MWPM algorithm. As metrics, we present both the bit error rate (BER) and the LER; see the Logical Decoding section. The LER metric here is a word-level error metric, meaning there is an error if at least one qubit is different from the ground truth. In the figures, plain lines denote MWPM and dashed lines denote QECCT.

Results

Figure 4 depicts the performance of the proposed method and the MWPM algorithm for different Toric code lengths under the independent noise model and without noisy measurements. Figure 5 presents a similar comparison with noisy measurements, with T = L and uniformly distributed syndrome error. Figures 6 and 7 compare our method with MWPM for the depolarized noise model, without and with noisy measurements, respectively. We also provide the obtained threshold values.

Figure 4: LER performance for various physical error rates and lattice lengths on the Toric code with independent noise and without faulty syndrome measurements.

Figure 5: LER performance for various physical error rates and lattice lengths on the Toric code with independent noise and with faulty syndrome measurements.

As can be seen, the proposed QECCT outperforms the SOTA MWPM algorithm by a large margin: (i) QECCT
outperforms the MWPM algorithm on independent noise, where MWPM is known to almost reach ML's threshold (Dennis et al. 2002), and (ii) QECCT outperforms the SOTA MWPM on the challenging depolarization noise setting by a large margin, where the obtained threshold is 0.178, compared to 0.157 for MWPM and 0.189 for ML (Bombin et al. 2012). The very large gaps in BER imply that the proposed method is able to better detect exact corruptions. The threshold is slightly lower for L = 10 with depolarization noise while the BER is much lower, denoting a potential need for tuning the regularization parameter for larger codes. In Appendix B we present performance statistics for a vanilla MLP decoder with similar capacity, highlighting both the contribution of our architecture and the importance of the proposed regularization term.

Figure 6: LER performance for various physical error rates and lattice lengths on the Toric code with depolarization noise and without faulty syndrome measurements.

Figure 7: LER performance for various physical error rates and lattice lengths on the Toric code with depolarization noise and with faulty syndrome measurements.

Further Validation and an Ablation Study

We extend our experiments beyond what is commonly done in the relevant ML work. To check that the method is applicable not just to Toric codes, Figure 8 shows the performance of the proposed method and the MWPM algorithm for different Surface code lengths under the depolarization noise model. The same parameters as were used for Toric codes are used here. As can be seen, the method is able to similarly outperform MWPM for other codes as well. The large gap in BER in favor of the QECCT probably means that the gap in LER can be made larger with other hyperparameters of the objective. To explore the generality with respect to noise models, Figure 9 shows the performance under the circuit noise
model of the proposed method and the MWPM algorithm for different Surface code lengths. The channel is simulated using the STIM (Gidney 2021) simulator of quantum stabilizer circuits, where the same depolarization error probability is applied after every single- and two-qubit Clifford operation, before every stabilizer measurement, and before the syndrome measurement. Evidently, the proposed method is able to consistently outperform MWPM for this type of channel noise as well. The impact of the noise estimator gω is studied in Appendix C, where it is shown that the initial noise estimator is critical for performance. The impact of the various objectives in Eq. 11 is provided in Appendix D. It is demonstrated via the gradient norm that training solely with the L_LER objective converges rapidly to a bad local minimum. Training with L_BER only produces far worse results than combining it with the L_LER objective. For high SNR, a model trained with the L_BER objective yields a 27 times higher LER than the model trained with combined objectives. Moreover, optimizing with the noise estimator objective Lg results, for high SNR, in a 46% improvement over not employing regularization. The impact of pooling is explored in Appendix E, where various scenarios are compared. Empirical evidence is provided to support pooling in the middle layer, as suggested. Finally, the impact of the mask and the architecture is explored in Appendix F. Specifically, masking, the model's capacity, and the STE as the bin function from Eq. 10 are evaluated. We can observe that while less important than with classical codes (Choukroun and Wolf 2022), the mask still substantially impacts performance. Also, we note that increasing the capacity of the network enables better representation and decoding.

Figure 8: LER performance for various physical error rates and lattice lengths on the Surface code with depolarization noise.

Figure 9: LER performance for various physical error rates and lattice lengths on the Surface code with circuit noise without repetitions. Performance on randomly simulated syndromes; see (Choukroun and Wolf 2023a) for performance on s ≠ 0 only.

Limitations

While the proposed method has a smaller complexity than the classical SOTA, its straightforward implementation makes it difficult to train for larger codes or more repetitions; e.g., unstructured sparse self-attention is not easy to implement on general-purpose DL accelerators. Larger architectures and longer training times would enable larger code correction and are expected to also improve accuracy, deepening the gap from other methods. As a point of reference, GPT-3 (Brown et al. 2020) successfully operates on 2K inputs with a similar Transformer model but with N = 96, d = 12K.

Conclusion

We present a novel Transformer-based framework for decoding quantum codes, offering multiple technical contributions enabling effective representation and training under the QECC constraints. First, the framework helps overcome the measurement collapse phenomenon by predicting the noise and then refining it. Second, we present a novel paradigm for differentiable training of highly non-differentiable functions, with far-reaching implications for ML-based error correction. Finally, we propose a time-efficient and size-invariant pooling for faulty-measurement scenarios. Since the lack of effective and efficient error correction is a well-known limiting factor for the development of quantum computers, our contribution can play a role in using machine learning tools to overcome the current technological limitations of many-qubit systems.

Acknowledgements

This project has received funding from the Tel Aviv University Center for AI and Data Science (TAD) and the Blavatnik Computer Science Research Fund. The contribution of the first author is part of a PhD thesis research conducted at Tel Aviv University.
References

Aharonov, D.; and Ben-Or, M. 1997. Fault-tolerant quantum computation with constant error. In Proceedings of the twenty-ninth annual ACM symposium on Theory of computing, 176–188.
Andreasson, P.; Johansson, J.; Liljestrand, S.; and Granath, M. 2019. Quantum error correction for the toric code using deep reinforcement learning. Quantum, 3: 183.
Ballance, C.; Harty, T.; Linke, N.; Sepiol, M.; and Lucas, D. 2016. High-fidelity quantum logic gates using trapped-ion hyperfine qubits. Physical review letters, 117(6): 060504.
Bengio, Y.; Léonard, N.; and Courville, A. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432.
Bennatan, A.; Choukroun, Y.; and Kisilev, P. 2018. Deep learning for decoding of linear codes - a syndrome-based approach. In 2018 IEEE International Symposium on Information Theory (ISIT), 1595–1599. IEEE.
Bombin, H.; Andrist, R. S.; Ohzeki, M.; Katzgraber, H. G.; and Martin-Delgado, M. A. 2012. Strong resilience of topological codes to depolarization. Physical Review X, 2(2): 021004.
Bose, R. C.; and Dowling, T. 1969. Combinatorial mathematics and its applications: proceedings of the conference held at the University of North Carolina at Chapel Hill, April 10-14, 1967. 4. University of North Carolina Press.
Bravyi, S.; Englbrecht, M.; König, R.; and Peard, N. 2018. Correcting coherent errors with surface codes. npj Quantum Information, 4(1): 1–6.
Bravyi, S. B.; and Kitaev, A. Y. 1998. Quantum codes on a lattice with boundary. arXiv preprint quant-ph/9811052.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877–1901.
Brun, T. A. 2020. Quantum Error Correction.
Buchberger, A.; Häger, C.; Pfister, H. D.; Schmalen, L.; et al. 2020.
Learned Decimation for Neural Belief Propagation Decoders. arXiv preprint arXiv:2011.02161. Calderbank, A. R.; and Shor, P. W. 1996. Good quantum error-correcting codes exist. Physical Review A, 54(2): 1098. Cammerer, S.; Gruber, T.; Hoydis, J.; and ten Brink, S. 2017. Scaling deep learning-based decoding of polar codes via partitioning. In GLOBECOM 2017-2017 IEEE Global Communications Conference, 1–6. IEEE. Chamberland, C.; and Ronagh, P. 2018. Deep neural decoders for near term fault-tolerant experiments. Quantum Science and Technology, 3(4): 044002. Choukroun, Y.; and Wolf, L. 2022. Error Correction Code Transformer. Advances in Neural Information Processing Systems (NeurIPS). Choukroun, Y.; and Wolf, L. 2023a. Deep Quantum Error Correction. arXiv preprint arXiv:2301.11930. Choukroun, Y.; and Wolf, L. 2023b. Denoising Diffusion Error Correction Codes. International Conference on Learning Representations (ICLR). Cory, D. G.; Price, M.; Maas, W.; Knill, E.; Laflamme, R.; Zurek, W. H.; Havel, T. F.; and Somaroo, S. S. 1998. Experimental quantum error correction. Physical Review Letters, 81(10): 2152. Delfosse, N.; and Nickerson, N. H. 2021. Almost-linear time decoding algorithm for topological codes. Quantum, 5: 595. Dennis, E.; Kitaev, A.; Landahl, A.; and Preskill, J. 2002. Topological quantum memory. Journal of Mathematical Physics, 43(9): 4452–4505. Duclos-Cianci, G.; and Poulin, D. 2010. Fast decoders for topological quantum codes. Physical review letters, 104(5): 050504. Duclos-Cianci, G.; and Poulin, D. 2013. Fault-tolerant renormalization group decoder for abelian topological codes. arXiv preprint arXiv:1304.6100. Edmonds, J. 1965. Paths, trees, and flowers. Canadian Journal of mathematics, 17: 449–467. Fowler, A. G. 2013. Minimum weight perfect matching of fault-tolerant topological quantum error correction in average O(1) parallel time. arXiv preprint arXiv:1307.1740. Fowler, A. G.; Whiteside, A. C.; and Hollenberg, L. C. 2012. 
Towards practical classical processing for the surface code. Physical review letters, 108(18): 180501.
Foxen, B.; Neill, C.; Dunsworth, A.; Roushan, P.; Chiaro, B.; Megrant, A.; Kelly, J.; Chen, Z.; Satzinger, K.; Barends, R.; et al. 2020. Demonstrating a continuous set of two-qubit gates for near-term quantum algorithms. Physical Review Letters, 125(12): 120504.
Gidney, C. 2021. Stim: a fast stabilizer circuit simulator. Quantum, 5: 497.
Gottesman, D. 1997. Stabilizer codes and quantum error correction. California Institute of Technology.
Greenberger, D. M.; Horne, M. A.; and Zeilinger, A. 1989. Going beyond Bell's theorem. In Bell's theorem, quantum theory and conceptions of the universe, 69–72. Springer.
Gruber, T.; Cammerer, S.; Hoydis, J.; and ten Brink, S. 2017. On deep learning-based channel decoding. In 2017 51st Annual Conference on Information Sciences and Systems (CISS), 1–6. IEEE.
Higgott, O. 2022. PyMatching: A Python package for decoding quantum codes with minimum-weight perfect matching. ACM Transactions on Quantum Computing, 3(3): 1–16.
Higgott, O.; and Gidney, C. 2022. PyMatching v2. https://github.com/oscarhiggott/PyMatching.
Huang, S.; Newman, M.; and Brown, K. R. 2020. Fault-tolerant weighted union-find decoding on the toric code. Physical Review A, 102(1): 012419.
Huang, W.; Yang, C.; Chan, K.; Tanttu, T.; Hensen, B.; Leon, R.; Fogarty, M.; Hwang, J.; Hudson, F.; Itoh, K. M.; et al. 2019. Fidelity benchmarks for two-qubit gates in silicon. Nature, 569(7757): 532–536.
Hutter, A.; Wootton, J. R.; and Loss, D. 2014. Efficient Markov chain Monte Carlo algorithm for the surface code. Physical Review A, 89(2): 022326.
Kim, H.; Jiang, Y.; Rana, R.; Kannan, S.; Oh, S.; and Viswanath, P. 2018. Communication algorithms via deep learning. In Sixth International Conference on Learning Representations (ICLR).
Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980.
Kitaev, A. Y. 1997a. Quantum computations: algorithms and error correction. Russian Mathematical Surveys.
Kitaev, A. Y. 1997b. Quantum computations: algorithms and error correction. Russian Mathematical Surveys.
Kitaev, A. Y. 1997c. Quantum error correction with imperfect gates. In Quantum communication, computing, and measurement, 181–188. Springer.
Kitaev, A. Y. 2003. Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1): 2–30.
Kolmogorov, V. 2009. Blossom V: a new implementation of a minimum cost perfect matching algorithm. Mathematical Programming Computation, 1(1): 43–67.
Krastanov, S.; and Jiang, L. 2017. Deep neural network probabilistic decoder for stabilizer codes. Scientific reports.
Kuo, K.-Y.; and Lu, C.-C. 2020. On the hardnesses of several quantum decoding problems. Quantum Information Processing, 19(4): 1–17.
Lidar, D. A.; and Brun, T. A. 2013. Quantum error correction. Cambridge university press.
Lin, T.; Wang, Y.; Liu, X.; and Qiu, X. 2021. A survey of transformers. arXiv preprint arXiv:2106.04554.
Lugosch, L.; and Gross, W. J. 2017. Neural offset min-sum decoding. In 2017 IEEE International Symposium on Information Theory (ISIT), 1361–1365. IEEE.
MacKay, D. J. 2003. Information theory, inference and learning algorithms. Cambridge university press.
Meinerz, K.; Park, C.-Y.; and Trebst, S. 2022. Scalable neural decoder for topological surface codes. Physical Review Letters, 128(8): 080505.
Micali, S.; and Vazirani, V. V. 1980. An O(√|V|·|E|) algorithm for finding maximum matching in general graphs. In 21st Annual Symposium on Foundations of Computer Science (sfcs 1980), 17–27. IEEE.
Nachmani, E.; Be'ery, Y.; and Burshtein, D. 2016. Learning to decode linear codes using deep learning. In 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 341–346. IEEE.
Nachmani, E.; and Wolf, L. 2019. Hyper-graph-network decoders for block codes.
In Advances in Neural Information Processing Systems, 2326–2336.
Neumann, J.; Wigner, E. P.; and Hofstadter, R. 1955. Mathematical foundations of quantum mechanics. Princeton university press.
Nielsen, M. A.; and Chuang, I. 2002. Quantum computation and quantum information.
Park, C.-Y.; and Meinerz, K. 2022. Open-source C++ implementation of the Union-Find decoder, https://github.com/chaeyeunpark/UnionFind.
Pearl, J. 1988. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan kaufmann.
Preskill, J. 1998. Reliable quantum computers. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 454(1969): 385–410.
Raimond, J.; and Haroche, S. 1996. Quantum computing: dream or nightmare. In Dark Matter in Cosmology, Quantum Measurements, Experimental Gravitation, 341.
Schindler, P.; Barreiro, J. T.; Monz, T.; Nebendahl, V.; Nigg, D.; Chwalla, M.; Hennrich, M.; and Blatt, R. 2011. Experimental repetitive quantum error correction. Science, 332(6033): 1059–1061.
Shannon, C. E. 1948. A mathematical theory of communication. The Bell system technical journal, 27(3): 379–423.
Shor, P. W. 1995. Scheme for reducing decoherence in quantum computer memory. Physical review A, 52(4): R2493.
Sweke, R.; Kesselring, M. S.; van Nieuwenburg, E. P.; and Eisert, J. 2020. Reinforcement learning decoders for fault-tolerant quantum computation. Machine Learning: Science and Technology, 2(2): 025005.
Torlai, G.; and Melko, R. G. 2017. Neural decoder for topological codes. Physical review letters, 119(3): 030501.
Varona, S.; and Martin-Delgado, M. A. 2020. Determination of the semion code threshold using neural decoders. Physical Review A, 102(3): 032411.
Varsamopoulos, S.; Criger, B.; and Bertels, K. 2017. Decoding small surface codes with feedforward neural networks. Quantum Science and Technology, 3(1): 015004.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.
N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In Advances in neural information processing systems, 5998–6008.
Wagner, T.; Kampermann, H.; and Bruß, D. 2020. Symmetries for a high-level neural decoder on the toric code. Physical Review A, 102(4): 042411.
Wang, C.; Harrington, J.; and Preskill, J. 2003. Confinement-Higgs transition in a disordered gauge theory and the accuracy threshold for quantum memory. Annals of Physics, 303(1): 31–58.
Wang, S.; Li, B. Z.; Khabsa, M.; Fang, H.; and Ma, H. 2020. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768.
Wootters, W. K.; and Zurek, W. H. 1982. A single quantum cannot be cloned. Nature, 299(5886): 802–803.
Wootton, J. R.; and Loss, D. 2012. High threshold error correction for the surface code. Physical review letters, 109(16): 160503.
Xiong, R.; et al. 2020. On layer normalization in the transformer architecture. arXiv:2002.04745.
2024
8
18,628
DocFormerv2: Local Features for Document Understanding Srikar Appalaraju1 *, Peng Tang1, Qi Dong1, Nishant Sankaran1, Yichu Zhou2 †, R. Manmatha1 1AWS AI Labs 2School of Computing at University of Utah {srikara, tangpen, qdon, nishsank, manmatha}@amazon.com, flyaway@cs.utah.edu

Abstract

We propose DocFormerv2, a multi-modal transformer for Visual Document Understanding (VDU). The VDU domain entails understanding documents (beyond mere OCR predictions), e.g., extracting information from a form, VQA for documents, and other tasks. VDU is challenging as it needs a model to make sense of multiple modalities (visual, language and spatial) to make a prediction. Our approach, termed DocFormerv2, is an encoder-decoder transformer which takes vision, language and spatial features as input. DocFormerv2 is pre-trained with unsupervised tasks employed asymmetrically, i.e., two novel document tasks on the encoder and one on the auto-regressive decoder. The unsupervised tasks have been carefully designed to ensure that the pre-training encourages local-feature alignment between multiple modalities. When evaluated on nine challenging datasets, DocFormerv2 shows state-of-the-art performance over strong baselines on all of them - on TabFact (+4.3%), InfoVQA (+1.4%), FUNSD (+1.0%). Furthermore, to show generalization capabilities, on three VQA tasks involving scene text, DocFormerv2 outperforms previous comparably-sized models and even does better than much larger models (such as GIT2, PaLI and Flamingo) on these tasks. Extensive ablations show that, due to its novel pre-training tasks, DocFormerv2 understands multiple modalities better than prior art in VDU.

Introduction

Documents have become ubiquitous carriers of information, including forms, tables, invoices, and other structured documents. Many such documents require visual and layout understanding to make sense (the text string alone is insufficient).
Visual Document Understanding (VDU) is the task of leveraging machine learning techniques to comprehend such scanned documents, e.g., PDFs or images. Popular VDU tasks include document and table VQA (Mathew et al. 2020; Chen et al. 2019), sequence labeling for key-value identification in forms (Jaume, Ekenel, and Thiran 2019), entity extraction (Seunghyun et al. 2019), and document classification (Harley, Ufkes, and Derpanis 2015).

*Corresponding author. †Work conducted during an internship at Amazon. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Visual Document Understanding: Snippet of a document receipt from DocVQA (Mathew, Karatzas, and Jawahar 2021). VDU tasks could include a model asked to predict the "SOLD TO" address (VQA), to predict all relations ("SOLD TO" → <address>, "SHIP TO" → <address>), or to infer info from the table (at the top).

While modern deep-learning based OCR models (Litman et al. 2020) have proven to be effective in extracting text from documents, the naive approach of linearizing the OCR text and feeding it to a language model is sub-optimal. This is because the content of a document is presented according to a visual layout and structure that must be taken into account for accurate understanding. Naively linearizing the text from left to right results in sub-optimal performance, as the semantic meaning alters based on layout, as shown in Figure 1; Tables 4 and 5 contain experiments demonstrating this. Instead, VDU requires a multi-modal approach that can comprehend text and visual features in the context of a document's 2D layout.

Multi-modal training in general entails feature alignment. Specific to vision-language learning, this means aligning a piece of text with an arbitrary span of pixels in visual space (Ho et al. 2022; Kim, Son, and Kim 2021; Radford et al. 2021; Wang et al. 2022a; Alayrac et al. 2022; Biten et al. 2022; Appalaraju et al. 2021; Hao et al.
2023; Appalaraju et al. 2020; Li et al. 2022; Chen et al. 2022b). How those features are aligned makes a big difference. In VDU, a majority of the tasks require local and layout-relative understanding of the document. For example, in document VQA, semantic labeling or entity extraction, a model needs to make sense of text in relation to where that text is placed in a document. E.g., "1" placed at the top-right/bottom-left of a document is to be interpreted as a page number, versus as a number when placed anywhere else.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)

Table 1: VDU Related Work: In this table, a summary of VDU prior art is presented with their architecture (E: Encoder, D: Decoder), the input (T: text, V: vision, S: spatial features), the vision features branch and core idea behind the work.

Model Year Conf. Arch. Input Mod.
LayoutLMv1 2020 KDD E T + S
DocFormerv1 2021 ICCV E T + V + S
LayoutLMv2 2021 ACL E T + V + S
SelfDoc 2021 CVPR E
LayoutLMv3 2022 ACM E T + V + S
BROS 2022 AAAI E T + S
XYLayoutLM 2022 CVPR E T + V + S
FormNet 2022 ACL E
ERNIE-Layout 2022 EMNLP E T + V + S
LiLT 2022 ACL E T + S
XDoc 2022 EMNLP E T
TILT 2021 ICDAR E + D T + V + S
DocFormerv2 2024 AAAI E + D T + V + S

Figure 2: VDU Paradigms: Broad state of Visual Document Understanding (VDU) approaches. In a) E-only LayoutLM (Xu et al. 2020a) and variants. b) E+D but with only a language task, TILT (Powalski et al. 2021). c) Ours.

Based on this domain understanding of VDU and its challenges, we present DocFormerv2 (DFv2), an encoder-decoder multi-modal transformer. In this work, we meticulously devise two novel unsupervised pre-training tasks with the objective of incorporating local semantic information of a document into the model. These pre-training tasks impart to the model the ability to accurately locate relevant information within the document. We also depart from VDU prior art (Powalski et al. 2021; Tang et al.
2022) as we introduce a novel asymmetrical method of pre-training, i.e., multi-task pre-training on the encoder (two tasks) and decoder (one task). We propose two novel pre-training tasks on the encoder with the intent to enrich the encoder with local semantic information. The tasks aid in fusing and aligning the multi-modal input and generating efficient representations for the decoder. We show that these pre-training tasks are necessary for effective VDU (see ablation sec.). Furthermore, we demonstrate that a simplified linear visual layer is sufficient to encapsulate visual features, simplifying the architecture relative to previous VDU research (Xu et al. 2020b; Li et al. 2021c; Powalski et al. 2021) which required specific visual encoders (Dosovitskiy et al. 2020; Liu et al. 2021; He et al. 2016). Experimentally we demonstrate that DocFormerv2 achieves state-of-the-art performance on five VDU tasks. In addition, we demonstrate the versatility of DocFormerv2 by utilizing its pre-trained model and fine-tuning it on text-VQA tasks from a completely different domain. Our approach yields superior performance on three distinct text-VQA datasets, surpassing comparable models and, on some datasets, much bigger models like GIT2 (Wang et al. 2022a), PaLI (Chen et al. 2022b) and Flamingo (Alayrac et al. 2022).

Therefore, the primary contributions of this paper are as follows:
• Asymmetrical method of pre-training for VDU: Two novel tasks on the encoder which encourage local multi-modal feature collaboration (Token-to-Line task and Token-to-Grid task) and one on the decoder (see Approach sec.).
• Simplified visual branch: DocFormerv2 is end-to-end trainable and does not rely on a pre-trained object detection network for visual features, simplifying its architecture. On five varied downstream VDU tasks, DocFormerv2 achieves state-of-the-art results (see Experiments sec.).
• We also show DocFormerv2's versatility by fine-tuning it on a totally different domain - text-VQA datasets - without changing the pre-training. DocFormerv2 beats strong baselines and achieves state-of-the-art numbers on three text-VQA datasets amongst similar model sizes. Selectively, on TextVQA it outperforms much larger models like PaLI-3B (+6.8%), PaLI-15B (+1.5%) and Flamingo (Alayrac et al. 2022) (+9.9%; Flamingo has 106x the number of parameters of DocFormerv2) by absolute accuracy (see TextVQA experiments). Furthermore, we conducted comprehensive ablation experiments to demonstrate the advantages of our pre-training tasks, the model's resilience to input noise, and the efficacy of the simplified visual branch.

Related Work

VDU research has attracted considerable attention over the past few years (Wang et al. 2022b; Xu et al. 2020a; Fujinuma et al. 2023; Xu et al. 2020b; Appalaraju et al. 2021; Li et al. 2021c; Powalski et al. 2021; Li et al. 2021b; Huang et al. 2022; Appalaraju et al. 2023; Hong et al. 2020; Gu et al. 2022a; Tang et al. 2023b; Gu et al. 2022b; Lee et al. 2022; Wang, Jin, and Ding 2022; Chen et al. 2022a; Tang et al. 2022; Łukasz Borchmann et al. 2021; Peng et al. 2022; Li et al. 2021a; Tang et al. 2023a). Prominent published research papers in this area are catalogued in Table 1 - the research focus has been lopsided towards encoder-only models. While TILT (Powalski et al. 2021) proposed an encoder-decoder transformer for VDU, they only train it on one pre-training task (masked language modeling) and also use a bulky visual CNN. Our approach, DocFormerv2, simplifies the architecture by not using a separate visual module (CNN or Transformer based) and has multiple unsupervised pre-training tasks. See supplemental for more on prior art.
Figure 3: DocFormerv2 Pre-train Architecture: After pre-training, the two prediction heads (token-to-line and token-to-grid) on the encoder are removed; the rest of the architecture remains the same for downstream tasks. Read the section for more details on Ts and Vs. All components are end-to-end trainable. Best viewed in color.

Approach

Architecture

DocFormerv2 (DFv2) is a multi-modal encoder-decoder transformer architecture (see fig. 3). Three variations of DFv2 are designed - small, base and large (see supplemental material for details). DFv2 takes multi-modal inputs: the image of the document I, text T extracted by an OCR model, and the OCR bounding-box co-ordinates as spatial features S. DFv2 has a unified multi-modal encoder where the multi-modal features fuse and align with the help of novel pre-training tasks (see Fig. 3).

Visual features: DFv2 has a simplified visual branch, contrary to most VDU prior art (fig. 2). DFv2 consumes a flattened image sequence as visual input. Specifically, let v ∈ R^(3×h×w) be the image of a document. A simple V = linear(conv2×2(v)) is used to create an image embedding. The weights are randomly initialized for pre-training. As documents tend to have lots of white-space, the linear down-sampling layer gives the model an opportunity to keep only the relevant visual features. Based on our ablation experiments (see supplemental material), this simple approach gives better results than using expensive image encoders such as Swin or ViT (Liu et al. 2021; Dosovitskiy et al. 2020; Ronneberger, Fischer, and Brox 2015) or bulky object-detection networks like FRCNN variants (Ren et al. 2015) as was used in VDU prior art (Powalski et al. 2021; Appalaraju et al. 2021; Xu et al. 2021). Since transformer layers are permutation-invariant, a learnable 2D-positional encoding Ps is also computed. Finally, Vs = V + Ps.

Language features: Let t be the predicted text extracted via an OCR model for a document image. DFv2 uses a sentencepiece sub-word tokenizer (Kudo and Richardson 2018) to get tokens ttok. A maximum sequence limit s is applied during training and testing, so if the number of OCR tokens is greater than s, the rest are ignored. If the sequence length is less than s, the sequence is padded. The OCR tokens ttok are sent to a learnable embedding layer Wt to create a text embedding T = Wt(ttok).

Spatial features: For each OCR word ti, the OCR model predicts its bounding-box location in the normalized form bi = (x1, y1, x3, y3). This information is encoded using four learnable spatial embedding layers - Wx for encoding a word's horizontal spatial information xi, Wy for the vertical co-ordinate yi, Wh for the word height hi, and Ww for the width wi. The spatial features not only encode the location of a word in the document but also provide cues about a word's font size and thereby its importance in the document (via hi and wi). Specifically, spatial features S = Wx(x1, x3) + Wy(y1, y3) + Wh(y3 − y1) + Ww(x3 − x1). Finally, Ts = T + S.

Other features: Ts and Vs are features of different modalities (fig. 3). As the model has no notion that it is being fed multi-modal input, another learnable embedding Wm is used to provide cues about the multi-modal input. The modality embedding Wm learns nuances of the different modalities, generating an Mv embedding for the visual modality and Mt for text. Finally, V = Vs + Mv and T = Ts + Mt, whereby T and V are concatenated (V ⊙ T) in the sequence dimension to form the input sequence to the DFv2 encoder.

Unsupervised Document Pre-training

In DocFormerv2 we follow the now well-established practice of unsupervised pre-training followed by downstream task fine-tuning. Furthermore, with the intent of eliciting the maximum benefit from unsupervised pre-training, we designed the pre-training tasks as a close proxy for downstream tasks.
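Before turning to the pre-training tasks, the input construction above (S from four spatial embedding layers, Ts = T + S, plus a modality embedding) can be made concrete with a toy pure-Python sketch. Everything here is illustrative, not the paper's implementation: the dimensions, table sizes, and names are assumptions, and Wx(x1, x3) is modeled as a sum of two lookups, which is only one possible reading of the notation.

```python
import random

D = 8  # toy hidden size; the real model uses a much larger dimension
random.seed(0)

def embedding_table(num_ids, dim=D):
    """A toy 'learnable' embedding layer: maps an integer id to a vector."""
    return [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(num_ids)]

def vec_sum(*vecs):
    """Element-wise sum of equal-length vectors."""
    return [sum(components) for components in zip(*vecs)]

NUM_BINS, VOCAB = 100, 1000            # toy quantization bins / vocab (assumptions)
Wx, Wy = embedding_table(NUM_BINS), embedding_table(NUM_BINS)
Wh, Ww = embedding_table(NUM_BINS), embedding_table(NUM_BINS)
Wt = embedding_table(VOCAB)            # text embedding layer
Mt = embedding_table(2)[0]             # modality embedding for text (from Wm)

def text_token_embedding(token_id, box):
    """T_s = T + S with S = Wx(x1,x3) + Wy(y1,y3) + Wh(y3-y1) + Ww(x3-x1);
    the final text input adds the text modality embedding M_t."""
    x1, y1, x3, y3 = box               # coordinates quantized to [0, NUM_BINS)
    T = Wt[token_id]
    S = vec_sum(Wx[x1], Wx[x3], Wy[y1], Wy[y3], Wh[y3 - y1], Ww[x3 - x1])
    return vec_sum(T, S, Mt)
```

In a real implementation these tables would be trainable embedding layers and the result would be a tensor per token, but the additive composition is the same.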
We now describe the two novel pre-training tasks employed on the encoder and the language modeling task on the decoder. All three tasks are performed at the same time, and the final loss is a linear combination of the three losses at each iteration.

Encoder Token-to-Line Task: We share the intuition that for VDU tasks local feature semantic alignment is important. Most of the related information for key-value prediction in a form or VQA is either on the same line or on adjacent lines of a document; e.g., in fig. 4, in order to predict the value for "TOTAL" (box a), the model has to look in the same line (to its right - "$4.32", box d). We teach the model the relative position information between tokens. For implementation, we randomly pick two language tokens and ask the model to predict the number of lines between them. Furthermore, as a document could have an arbitrary number of lines of text, the task is quantized, i.e., there are only three labels: {0, 1, 2}. All token pairs that are more than 2 lines apart are labelled as 2, because distant tokens are not likely related and the model should learn to ignore them. Let a, b, c, d be the token boxes in fig. 4, and let F be the DFv2 encoder head function predicting a label for this task. Then:

F(a, d) = 0; F(a, b) = 1; F(b, c) = 2 (1)

Based on the ablation (Table 8), this task gives a +2.2% benefit on the DocVQA task. The loss for this task is tracked as Ltol.

Figure 4: Token-to-Line. Figure 5: Token-to-Grid: 4x4.

Encoder Token-to-Grid Task: Different semantic information is concentrated in different regions of the document. For example: a) in a financial document, the top block contains the header, the middle block contains information to be filled, and the bottom block typically contains footer elements/instructions; b) page numbers are typically at the top or the bottom; c) in a receipt/invoice, the company name is typically at the top.
The content of a document is presented according to a visual layout and structure that must be taken into account for accurate understanding. Based on this intuition, this task pairs language semantics with location (visual, spatial or both) in a document. Specifically, the document is virtually divided into an m x n grid. Each OCR token is assigned a grid number, and DFv2 is tasked with predicting the grid number of each token. For each OCR token ti, its top-left location (x1, y1) is used to determine its grid number gi. Grids are in raster-scan order, so if a particular token falls on the boundary of multiple grids, the scan order is used to disambiguate. Tokens that fall on the boundary of the normalized image co-ordinates are ignored for prediction. See fig. 5 for a visualization. Specifically, we have:

gi = ⌊x1/∆x⌋ + ⌊y1/∆y⌋ · m,

where ∆x and ∆y are the width and height of each grid cell, respectively, and m is the number of grids in a row. The loss for this task is tracked as Ltog.

Decoder Language Modeling: Since VDU predictions are in the language domain, language understanding forms an important component of DFv2 pre-training. We use the denoising masked language modeling popularized by T5 (Raffel et al. 2019). During pre-training, not only are the input tokens randomly masked, their spatial features (mentioned in §) are also masked. Masking the spatial features S of the masked tokens makes the grid and line predictions hard, because the model does not have 2D-position information for the masked tokens; it has to infer it from the other available context. The loss for this operation is denoted Ldlm.

Final pre-training loss: The final loss is a linear combination of the three pre-training losses, i.e., Lfinal = k·Ltol + l·Ltog + m·Ldlm, where k, l, m are empirically determined.

Downstream Tasks: Once pre-training is done, we remove the token-to-line and token-to-grid linear prediction heads. The rest of the pre-trained model is fine-tuned on the respective downstream training data.
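The pre-training targets described above are simple to compute. The following sketch shows the quantized Token-to-Line label, the raster-scan Token-to-Grid index, and the combined loss; the default weights and grid size here are placeholders (the paper determines k, l, m empirically and uses a 4x4 grid).

```python
import math

def token_to_line_label(line_a, line_b, max_dist=2):
    """Quantized line-distance label for a token pair: one of {0, 1, 2}.
    Pairs more than 2 lines apart are clamped to label 2."""
    return min(abs(line_a - line_b), max_dist)

def token_to_grid_label(x1, y1, m=4, n=4):
    """Raster-scan grid index for a token's top-left corner, with
    coordinates normalized to [0, 1): g = floor(x1/dx) + floor(y1/dy) * m,
    where m is the number of grid cells in a row."""
    dx, dy = 1.0 / m, 1.0 / n
    return math.floor(x1 / dx) + math.floor(y1 / dy) * m

def pretrain_loss(l_tol, l_tog, l_dlm, k=1.0, l=1.0, m=1.0):
    """L_final = k*L_tol + l*L_tog + m*L_dlm (weights chosen empirically;
    the defaults here are placeholders)."""
    return k * l_tol + l * l_tog + m * l_dlm
```

For a 4x4 grid, a token near the top-left corner lands in cell 0 and one near the bottom-right in cell 15, matching the raster-scan numbering of fig. 5.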
Experiments Implementation details: Following prior-art (Appalaraju et al. 2021; Powalski et al. 2021; Biten et al. 2022; Xu et al. 2020a, 2021; Huang et al. 2022) we use the Industrial Document Library (IDL)1 dataset for pre-training. The IDL is a collection of industry documents hosted by UCSF. It hosts millions of documents publicly disclosed from various industries like tobacco, drug, food etc. The data from the website amounts to about 13M documents, translating to about 70M pages of various document images. We further extracted OCR for each document. Data was cleaned and about 6M documents were pruned, the resulting 64M document images and OCR-text (with spatial co-ordinates) is used for unsupervised pre-training. The data distribution for IDL 64M is presented in supplemental section. Downstream experiments: The model is fine-tuned on the provided training set and numbers are reported on the corresponding validation/test set. No dataset specific hyperparameter tuning is done. This is an advantage of our approach and we believe that the numbers may be higher if dataset specific fine-tuning is done. Details about fine-tuning datasets are in the supplemental section. We used Pytorch (Paszke et al. 2019) and the Huggingface library (Thomas et al. 2019). Evaluation Metrics: A dataset specific evaluation metric is adopted. For DocVQA(Mathew et al. 2020), InfoVQA(Mathew et al. 2022), ST-VQA(Biten et al. 2019b), Average Normalized Levenshtein Similarity (ANLS) (Biten et al. 2019a) is used. ANLS measures the similarity between the predicted results and ground truth and ranges from (0,100). For FUNSD(Jaume, Ekenel, and Thiran 2019), CORD(Seunghyun et al. 2019) F1-score is used. For TextVQA (Singh et al. 2019) and OCR-VQA(Mishra et al. 2019) accuracy is used. In all metrics, higher the better. Table VQA WikiTable and TabFact (Chen et al. 2019; Łukasz Borchmann et al. 
2021): These datasets study table understanding and fact verification with semi-structured evidence over tables collected from Wikipedia. Entailed and refuted statements corresponding to a single row or cell were prepared by the authors of TabFact. This task poses challenges due to the complex linguistic and spatial reasoning involved. In Table 2, we can see that DocFormerv2 outperforms prior art by large margins (+1.1% and +4.3%, respectively).

Document VQA

DocVQA (Mathew et al. 2020) and InfographicsVQA (Mathew et al. 2022) are datasets for the document VQA task. DocVQA focuses on VQA for real-world industry documents and requires that the model understand images, texts, tables and forms. InfographicsVQA focuses on VQA for infographics and requires that the model understand plots/graphs, texts, layout and figures. A model needs to reason multi-modally to generate an answer for this data. Please see the supplemental for data statistics and samples.

1https://www.industrydocuments.ucsf.edu/

Table 2: Comparison on Table VQA Datasets: DocFormerv2 outperforms the previous state of the art on the WikiTableQuestions and TabFact datasets. Bold is SOTA and underline indicates the previous SOTA. See supp. for viz. (Citations omitted from this table due to AAAI width and font rules.)

Model WikiTable Acc. (%) TabFact Acc. (%)
methods based on only text / (text + spatial) features:
T5large 33.3 58.9
T5large+U 38.1 76.0
T5large+2D 30.8 58.0
T5large+2D+U 43.3 78.6
methods based on image + text + spatial features:
LayoutLMv3large 45.7 78.1
UDOP 47.2 78.9
DocFormerv2large 48.3 (+1.1%) 83.2 (+4.3%)

Sequence Labeling Task

We study the performance of DocFormerv2 on the semantic entity-labeling task (i.e., grouping tokens which belong to the same class).
We test the model on FUNSD dataset (Jaume, Ekenel, and Thiran 2019), which is a forms dataset containing 199 noisy documents (149 images for train, 50 images for test). There are four classes: question, answer, header, and other. We measure entity-level performance using F1 score (Table 4). The input sequence to Docformerv2 includes individual texts as prompts and all document texts as context, and the decoder sequence contains the entity texts and predicted labels. Docformerv2 achieves 88.89% F1 score (Table 4), and outperforms the existing methods without using entity box priors in pretraining and finetuning (grayed models in the table). Following common practice (Łukasz Borchmann et al. 2021; Powalski et al. 2021; Xu et al. 2020b), we train DocFormerv2 on the combination of the training and validation sets and do evaluation on the test set for each dataset. In addition, we also follow (Powalski et al. 2021; Xu et al. 2020b) to train DocFormerv2 on an extra document VQA dataset with 850k question-answer pairs and then fine-tune on DocVQA/InfographicsVQA for higher accuracy. DocFormerv2 outperforms (Table 3) the previous state of the art for document VQA even without using any extra document VQA pre-training data. After pre-training on the extra data, DocFormerv2 surpasses the previous state of the art by 0.79% on DocVQA and 1.4% on InfographicsVQA, which confirms the effectiveness of our approach. Model DocVQA test ANLS (%) InfoVQA test ANLS (%) methods based on only image: Donutbase 67.5 11.5 Pix2Structlarge 76.6 40.0 methods based on only text / (text + spatial) features: T5large 70.4 36.7 T5large+U 76.3 37.1 T5large+2D 69.8 39.2 T5large+2D+U 81.0 46.1 methods based on image + text + spatial features: LayoutLMv3large 83.4 45.1 UDOP 84.7 47.4 LayoutLMv2† large 86.7 TILT† large 87.05 DocFormerv2large 87.2 DocFormerv2† large 87.84(+0.79%) 48.8(+1.4%) Table 3: Comparison on Document VQA datasets: Our work, DocFormerv2 outperforms the previous state of the art. 
† indicates training with extra document VQA data. Entity Extraction Task We evaluate DocFormerv2 for the entity extraction task on the CORD dataset. CORD (Seunghyun et al. 2019) consists of 1000 receipts (800/100/100 images for train/val/test). It defines 30 fine-grained fields under 4 coarse-grained categories. To extract all entities, in the input sequence, we add a question of “What are entities of <CLASS>?” in front of all text context tokens. The output of the decoder includes all entities which are separated by a separator token. Following the standard evaluation metric for entity extraction, we measure entity-level performance using F1 score. Docformerv2 (Table 5) achieves 97.7% F1 score, and outperforms existing methods. Docformerv2 enables multiple entities decoding in an auto-regressive way which shows that the model is able to learn both intra-entity and inter-entity structures. Note that it is unfair to directly compare Docformerv2 with LayoutLMv3(LaMv3), because LaMv3 uses segment-level layout positions, while the other works use word-level layout positions 2. More importantly, the task studied in Table 5 is entity extraction: predicting words and classes of all entities, against this problem setting if one uses segment-level boxes as inputs. Generalization Experiments - Scene-Text VQA In this section, we show the strength of DocFormerv2 on a different task - Scene-Text VQA. Unlike document understanding which focuses on document images, the Scene-Text VQA task answers questions for natural images with scene 2LaMv3 (Huang et al. 2022) highlighted that using segmentlevel positions may benefit the semantic entity labeling task, so the two types of work are not directly comparable. 
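As a reference for the VQA results reported above, the ANLS metric named under Evaluation Metrics can be sketched as follows. This is a minimal version of Average Normalized Levenshtein Similarity as defined by Biten et al. (2019a): per question, take the maximum over gold answers of 1 minus the normalized edit distance, zeroed below a 0.5 threshold, then average over questions. The lowercasing here is common practice but should be checked against the official evaluator.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def anls(predictions, gold_answers, tau=0.5):
    """Average Normalized Levenshtein Similarity over a list of questions.
    `gold_answers[i]` is the list of acceptable answers for question i."""
    total = 0.0
    for pred, answers in zip(predictions, gold_answers):
        best = 0.0
        for ans in answers:
            nl = levenshtein(pred.lower(), ans.lower()) / max(len(pred), len(ans), 1)
            best = max(best, 1.0 - nl if nl < tau else 0.0)
        total += best
    return total / len(predictions)
```

For example, a prediction one character off from a four-character answer scores 0.75, while anything whose normalized distance reaches the threshold scores 0.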
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 713 Model Precision Recall F1 methods based on only image: Dessurtbase 65.0 methods based on only text / (text + spatial) features: BERTbase 54.69 61.71 60.26 RoBERTabase 63.49 69.75 66.48 UniLMv2base 63.49 69.75 66.48 LayoutLMv1base 76.12 81.55 78.66 BROSbase 80.56 81.88 81.21 BERTlarge 61.13 70.85 65.63 RoBERTalarge 67.80 73.91 70.72 UniLMv2large 67.80 73.91 70.72 LayoutLMv1large 75.36 80.61 77.89 StructuralLMlarge 83.52 86.81 85.14 FormNet 85.21 84.18 84.69 methods based on image + text + spatial features: LayoutLMv1base 76.77 81.95 79.27 LayoutLMv2base 80.29 85.39 82.76 LayoutLMv2large 83.24 85.19 84.20 DocFormerbase 80.76 86.09 83.34 DocFormerlarge 82.29 86.94 84.55 SelfDoc 83.36 UDoc 87.93 StrucTexT✢ 85.68 80.97 83.09 LayoutLMv3base ❅ 77.39 81.65 79.46 LayoutLMv3large ❅ 81.35 83.75 82.53 LayoutLMv3base ❍ 89.55 91.65 90.29 LayoutLMv3large ❍ 92.19 92.10 92.08 UDOP❍ 91.62 DocFormerv2base 89.15 87.6 88.37 DocFormerv2large 89.88 87.92 88.89(+1.0%) Table 4: FUNSD comparison: DocFormerv2 does better than models its size and compares well with even larger models. ✢does not use standard train/test split, and the results are not directly compared with others. ❍use OCR lines (not word box) as 2D position for words, and use entity boxes as 2D position for each word during fine-tuning and test, and thus the results are not directly comparable. ❅ are results by using the word boxes as 2D position for each word as other competitors do. text. We fine-tune our document pre-trained models on three Text-VQA datasets. We emphasize that no image-text pretraining was performed on DocFormerv2, it was merely finetuned on the respective Text-VQA training dataset. Three popular Text-VQA datasets are used - OCR-VQA (Mishra et al. 2019), TextVQA (Singh et al. 2019) and ST-VQA (Biten et al. 
2019b), each with strong baselines from the vision-language community (as is standard practice by TextVQA we mean any scene text VQA dataset while TextVQA refers to a specific dataset). Please see the supplemental for a dataset breakdown. For OCR-VQA, we fine-tune our models on the training set and do evaluation on the validation and test sets. For TextVQA and ST-VQA, following the previous state-of-the-art methods (Biten et al. 2022; Yang et al. 2021), we fine-tune our models on the combination of the TextVQA and ST-VQA training sets and do evaluation on the validaModel Precision Recall F1 methods based on only text / (text + spatial) features: BERTbase 88.33 91.07 89.68 UniLMv2base 89.87 91.98 90.92 SPADE 91.50 LayoutLMv1base 94.37 95.08 94.72 BROSbase 95.58 95.14 95.36 BERTlarge 88.86 91.68 90.25 RoBERTalarge 93.80 UniLMv2large 91.23 92.89 92.05 LayoutLMv1large 94.32 95.54 94.93 FormNet 98.02 96.55 97.28 methods based on image + text + spatial features: LayoutLMv2base 94.53 95.39 94.95 LayoutLMv2large 95.65 96.37 96.01 TILTbase❍ 95.11 TILTlarge❍ 96.33 DocFormerbase 96.52 96.14 96.33 DocFormerlarge 97.25 96.74 96.99 UDoc 96.86 LayoutLMv3base❅ 92.92 94.31 93.61 LayoutLMv3large❅ 96.78 96.78 96.78 LayoutLMv3base❍ 96.56 LayoutLMv3large❍ 97.46 UDOP❍ 97.58 DocFormerv2base 97.51 96.10 96.80 DocFormerv2large 97.71 97.70 97.70(+0.89%) Table 5: CORD dataset comparison: We present entity-level Precision, Recall, F1 on test set. ❍use OCR lines (not word box) as 2D position for words, and use entity boxes as 2D position for each word during fine-tuning and testing, and thus the results are not directly comparable. ❅are results by using the word boxes as 2D position for each word as the other competitors do. tion and test sets of each dataset. Tables 6, 7, 9 show that our large size model outperforms the comparably sized previous state-of-the-art method LaTr (Biten et al. 2022) by +3.4%, +2.4% and +2.2% on the OCR-VQA, TextVQA, and ST-VQA test sets respectively. 
These results show that our method generalizes beyond document understanding tasks.

Analysis: In an unexpected turn of events, on OCR-VQA our model DFv2-large outperforms GIT2, despite the latter being a significantly larger model with 5.1B parameters compared to our 750M, and using a massive 12.9B data corpus for pre-training compared to our 64M. On TextVQA, DocFormerv2 does better than several vision-language models which are much bigger and have been pre-trained on much more data. On the test set, it is +9.9% better than Flamingo (which at 80B has 106x the number of parameters of ours). On the validation set, it is better than PaLI-3B and PaLI-15B (+6.8% and +1.5%, respectively). GIT2 and PaLI-17B do perform better than it (GIT2 also uses 8 VQA datasets to train). DocFormerv2 gets this performance without any natural image-text pre-training. We present this as evidence that DocFormerv2 is a good approach to solving this problem with a much smaller model and much less data.

Table 6: Comparison on OCR-VQA: DocFormerv2 is better than the previous SOTA by +3.4%. Bold indicates best and underline indicates the previous state of the art. GIT2 ✢: uses extra VQA data (aggregation of 8 VQA datasets).

Model Val Acc. (%) Test Acc. (%)
Blk+CNN+W2V 48.3
M4C 63.5 63.9
LaAP 63.8 64.1
LaTrbase 67.5 67.9
GITbase 57.3 57.5
GITlarge 62.4 62.9
GIT 67.8 68.1
GIT2✢ 70.3
DocFormerv2base 69.7 70.3
DocFormerv2large 71.1 71.5 (+3.4%)

Ablation Experiments

Ablation of DFv2 novel pre-training tasks: Table 8 shows the DFv2 ablation on the proposed novel pre-training tasks and multi-modal training. The denoising language modeling task and the spatial features mentioned in the Approach sec. are applied to all architectures. Note, this ablation was performed on DFv2-small with 1M-document pre-training.

Robustness to OCR errors. This study examines the robustness of DocFormerv2 and LayoutLMv2 to OCR errors.
Artificial noise simulating character typos is injected into text from the FUNSD dataset, capped at one error per word. While both models utilize visual features, DocFormerv2’s generative decoder demonstrates substantial resilience, experiencing only a -1.68% performance drop even with 20% OCR errors, compared to a -9.84% decline for the encoderonly LayoutLMv2. This highlights the advantage of our generative decoder approach in handling text noise. See Fig. 6. Figure 6: Induced OCR Error Ablation. F1 score perf eval on FUNSD for varying orders of injected OCR errors. 3Note that both LayoutLMv2 and ours use visual features, and thus it is fair to compare the robustness in multi-modality context. Model Val Acc. (%) Test Acc. (%) M4C 47.8 LaAP 41.0 41.4 SA-M4C 45.4 44.6 SMA 44.6 45.5 SceneGate 42.4 44.0 SC-Net 44.8 45.7 LOGOS 51.5 51.1 TAP + TAG 53.6 53.7 TAP 54.7 54.0 TAP Two-Stage 55.9 55.3 Flamingo-80B★ 57.1 54.1 PreSTU 56.3 LaTr-0.3Bbase 58.0 58.9 LaTr-0.3B† base 59.5 59.6 LaTr-0.85B† large 61.0 61.6 GIT-0.13Bbase❍ 18.8 GIT-0.4Blarge❍ 37.5 GIT-0.7B❍ 59.9 59.8 GIT2-5.1B✢ 68.4 67.3 PaLI-3B❍ 58.8 PaLI-15B❍ 64.1 PaLI-17B❍ 70.5 73.1 DocFormerv2-0.2B† base 61.6 60.0 DocFormerv2-0.75B† large 65.6 64.0(+2.4%) Table 7: Comparison on TextVQA: † indicates the model used the combination of ST-VQA and TextVQA training sets to train the model. GIT2 ✢: extra data used (aggregation of 8 VQA datasets) ★: video-text data. ❍: proprietary image-text data. The Flamingo, GIT2, and PaLI models are much bigger (# parameters ≥3x DocFormerv2large parameters) and use large amounts of external data. DocFormerv2large still outperforms Flamingo (+9.9%), PalI3B (+6.8%) and PalI-15B (+1.5%) models. Model Ablation Datasets DocVQA (ANLS) baseline B 69 B + V 70.5 (+1.5) B + V + L 71.2 (+2.2) B + V + G 71.7 (+2.7) B + V + L + G 73.0 (+4.0) Table 8: DocFormerv2 Pre-training Tasks Ablation: Impact of three pre-training tasks on four downstream tasks over baseline. 
B: baseline, V: only with Visual features §, L: with Token-to-Line prediction pre-training §, G: with Token-to-Grid prediction pre-training §.

Pre-training Impact or Better Approach? To isolate the impact of pre-training data size on DFv2's performance, we conducted an ablation study with both models trained on the same, smaller dataset (11M documents). As shown in Table 10, even with limited data, DFv2 still outperforms LayoutLMv2, indicating the effectiveness of its novel asymmetric pre-training approach. Additionally, DFv2 demonstrates further improvement with more data (64M), solidifying its superiority for VDU tasks.

Model                        Val ANLS (%)   Test ANLS (%)
M4C                          47.2           46.2
LaAP                         49.7           48.5
SA-M4C                       51.2           50.4
SceneGate                    52.5           51.6
LOGOS                        58.1           57.9
TAP                          59.8           59.7
TAP + TAG                    62.0           60.2
PreSTU                       65.5
LaTr-0.3B-base               67.5           66.8
LaTr-0.3B†-base              68.3           68.4
LaTr-0.85B†-large            70.2           69.6
GIT-0.13B-base               20.7
GIT-0.4B-large               44.6
GIT-0.7B                     69.1           69.6
DocFormerv2-0.2B†-base       70.1           68.4
DocFormerv2-0.75B†-large     72.9           71.8 (+2.2%)

Table 9: Comparison on ST-VQA: On ST-VQA DocFormerv2 outperforms comparably sized models like GIT and LaTr by a large margin (+2.2%) in spite of being pre-trained on less data. † indicates the combination of the ST-VQA and TextVQA training sets is used.

Model               #data   FUNSD          CORD
LayoutLMv2-base     11M     82.7           94.9
DocFormerv2-base    11M     86.1 (+3.4%)   96.2 (+1.3%)
DocFormerv2-base    64M     87.9 (+5.2%)   96.8 (+1.9%)
DocFormerv2†-base   64M     88.3 (+5.6%)   96.8 (+1.9%)

Table 10: DocFormerv2 Pre-training Data Ablation: Impact of training with different # of pre-training data on various down-stream tasks. The F1 scores are reported. † indicates the combination of the ST-VQA and TextVQA training sets is used.

Correct grid size for Token-to-Grid pre-training? In §, we presented the novel Token-to-Grid pre-training task. In the pre-training ablation (Table 8), this task was observed to provide benefits.
Here the appropriate virtual grid size is empirically determined. From Fig. 7, a 4x4 grid seems optimal. Smaller or asymmetric grid structures (4x1) seem to cause harm. On the other end, if the grid is too granular (12x12, 8x8), performance seems to suffer as well. All models were pre-trained on DFv2-small with 1M documents from IDL, with Vision and Token-to-Line enabled.

Figure 7: Token-to-Grid Ablation. How different grid sizes used for the Token-to-Grid pre-training task affect model performance on DocVQA. 4x4 seems best and was used for all final pre-training.

More ablations: Please find more ablation experiments in the supplemental⁴, highlighting additional experiments of our approach.

Conclusion

Our work DocFormerv2 highlights the importance of two novel pre-training tasks and the efficacy of enriching encoder representations with local semantic information via pre-training tasks. We perform experiments on eight varied datasets (five on VDU and three on scene-text VQA), achieving state-of-the-art numbers on all datasets. Based on ablations, we also show the various design choices and their impact on downstream performance.

References

Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; Ring, R.; Rutherford, E.; Cabi, S.; Han, T.; Gong, Z.; Samangooei, S.; Monteiro, M.; Menick, J.; Borgeaud, S.; Brock, A.; Nematzadeh, A.; Sharifzadeh, S.; Binkowski, M.; Barreira, R.; Vinyals, O.; Zisserman, A.; and Simonyan, K. 2022. Flamingo: a Visual Language Model for Few-Shot Learning. ArXiv, abs/2204.14198.

Appalaraju, S.; Jasani, B.; Kota, B. U.; Xie, Y.; and Manmatha, R. 2021. Docformer: End-to-end transformer for document understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 993–1003.

Appalaraju, S.; Tang, P.; Dong, Q.; Sankaran, N.; Zhou, Y.; and Manmatha, R. 2023. DocFormerv2: Local Features for Document Understanding - Full Paper and Supplemental. arXiv preprint arXiv:2306.01733.
⁴ https://arxiv.org/abs/2306.01733

Appalaraju, S.; Zhu, Y.; Xie, Y.; and Fehérvári, I. 2020. Towards Good Practices in Self-supervised Representation Learning. Neural Information Processing Systems (NeurIPS Self-Supervision Workshop 2020).

Biten, A. F.; Litman, R.; Xie, Y.; Appalaraju, S.; and Manmatha, R. 2022. Latr: Layout-aware transformer for scene-text vqa. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16548–16558.

Biten, A. F.; Tito, R.; Mafla, A.; Gomez, L.; Rusinol, M.; Mathew, M.; Jawahar, C.; Valveny, E.; and Karatzas, D. 2019a. Icdar 2019 competition on scene text visual question answering. In 2019 International Conference on Document Analysis and Recognition (ICDAR), 1563–1570.

Biten, A. F.; Tito, R.; Mafla, A.; Gomez, L.; Rusinol, M.; Valveny, E.; Jawahar, C.; and Karatzas, D. 2019b. Scene text visual question answering. In Proceedings of the IEEE/CVF international conference on computer vision, 4291–4301.

Chen, J.; Lv, T.; Cui, L.; Zhang, C.; and Wei, F. 2022a. XDoc: Unified Pre-training for Cross-Format Document Understanding. In Conference on Empirical Methods in Natural Language Processing.

Chen, W.; Wang, H.; Chen, J.; Zhang, Y.; Wang, H.; Li, S.; Zhou, X.; and Wang, W. Y. 2019. TabFact: A Large-scale Dataset for Table-based Fact Verification. ArXiv, abs/1909.02164.

Chen, X.; Wang, X.; Changpinyo, S.; Piergiovanni, A. J.; Padlewski, P.; Salz, D. M.; Goodman, S.; Grycner, A.; Mustafa, B.; Beyer, L.; Kolesnikov, A.; Puigcerver, J.; Ding, N.; Rong, K.; Akbari, H.; Mishra, G.; Xue, L.; Thapliyal, A. V.; Bradbury, J.; Kuo, W.; Seyedhosseini, M.; Jia, C.; Ayan, B. K.; Riquelme, C.; Steiner, A.; Angelova, A.; Zhai, X.; Houlsby, N.; and Soricut, R. 2022b. PaLI: A Jointly-Scaled Multilingual Language-Image Model. ArXiv, abs/2209.06794.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations. Fujinuma, Y.; Varia, S.; Sankaran, N.; Appalaraju, S.; Min, B.; and Vyas, Y. 2023. A Multi-Modal Multilingual Benchmark for Document Image Classification. In Bouamor, H.; Pino, J.; and Bali, K., eds., Findings of the Association for Computational Linguistics: EMNLP 2023, 14361–14376. Singapore: Association for Computational Linguistics. Gu, J.; Kuen, J.; Morariu, V. I.; Zhao, H.; Barmpalios, N.; Jain, R.; Nenkova, A.; and Sun, T. 2022a. Unified Pretraining Framework for Document Understanding. ArXiv, abs/2204.10939. Gu, Z.; Meng, C.; Wang, K.; Lan, J.; Wang, W.; Gu, M.; and Zhang, L. 2022b. XYLayoutLM: Towards Layout-Aware Multimodal Networks For Visually-Rich Document Understanding. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4573–4582. Hao, X.; Zhu, Y.; Appalaraju, S.; Zhang, A.; Zhang, W.; Li, B.; and Li, M. 2023. MixGen: A New Multi-Modal Data Augmentation. In IEEE WACV 2023 - Pre train Workshop, volume abs/2206.08358. Harley, A. W.; Ufkes, A.; and Derpanis, K. G. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. 2015 13th International Conference on Document Analysis and Recognition (ICDAR), 991–995. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In CVPR. Ho, C.-H.; Appalaraju, S.; Jasani, B.; Manmatha, R.; and Vasconcelos, N. 2022. YORO-Lightweight End to End Visual Grounding. In European Conference on Computer Vision - ECCV CAMP Workshop. Hong, T.; Kim, D.; Ji, M.; Hwang, W.; Nam, D.; and Park, S. 2020. BROS: A Pre-trained Language Model for Understanding Texts in Document. https://openreview.net/forum?id=punMXQEsPr0. 
Huang, Y.; Lv, T.; Cui, L.; Lu, Y.; and Wei, F. 2022. LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking. arXiv preprint arXiv:2204.08387.

Jaume, G.; Ekenel, H. K.; and Thiran, J.-P. 2019. FUNSD: A Dataset for Form Understanding in Noisy Scanned Documents. 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW), 2: 1–6.

Kim, W.; Son, B.; and Kim, I. 2021. ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision. In ICML.

Kudo, T.; and Richardson, J. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Conference on Empirical Methods in Natural Language Processing.

Lee, C.-Y.; Li, C.-L.; Dozat, T.; Perot, V.; Su, G.; Hua, N.; Ainslie, J.; Wang, R.; Fujii, Y.; and Pfister, T. 2022. FormNet: Structural Encoding beyond Sequential Modeling in Form Document Information Extraction. In Annual Meeting of the Association for Computational Linguistics.

Li, C.; Bi, B.; Yan, M.; Wang, W.; Huang, S.; Huang, F.; and Si, L. 2021a. StructuralLM: Structural Pre-training for Form Understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 6309–6318.

Li, C.; Fehérvári, I.; Zhao, X.; Macêdo, I.; and Appalaraju, S. 2022. SeeTek: Very Large-Scale Open-set Logo Recognition with Text-Aware Metric Learning. 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 587–596.

Li, J.; Xu, Y.; Cui, L.; and Wei, F. 2021b. MarkupLM: Pre-training of Text and Markup Language for Visually Rich Document Understanding. In Annual Meeting of the Association for Computational Linguistics.

Li, P.; Gu, J.; Kuen, J.; Morariu, V. I.; Zhao, H.; Jain, R.; Manjunatha, V.; and Liu, H. 2021c. SelfDoc: Self-Supervised Document Representation Learning.
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5648–5656.

Litman, R.; Anschel, O.; Tsiper, S.; Litman, R.; Mazor, S.; and Manmatha, R. 2020. SCATTER: selective context attentional scene text recognizer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11962–11972.

Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 9992–10002.

Mathew, M.; Bagal, V.; Tito, R.; Karatzas, D.; Valveny, E.; and Jawahar, C. 2022. InfographicVQA. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1697–1706.

Mathew, M.; Karatzas, D.; and Jawahar, C. 2021. Docvqa: A dataset for vqa on document images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2200–2209.

Mathew, M.; Karatzas, D.; Manmatha, R.; and Jawahar, C. V. 2020. DocVQA: A Dataset for VQA on Document Images. arXiv:2007.00398.

Mishra, A.; Shekhar, S.; Singh, A. K.; and Chakraborty, A. 2019. Ocr-vqa: Visual question answering by reading text in images. In 2019 international conference on document analysis and recognition (ICDAR), 947–952. IEEE.

Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. Pytorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703.

Peng, Q.; Pan, Y.; Wang, W.; Luo, B.; Zhang, Z.; Huang, Z.; Hu, T.; Yin, W.; Chen, Y.; Zhang, Y.; et al. 2022. ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding. arXiv preprint arXiv:2210.06155.

Powalski, R.; Borchmann, Ł.; Jurkiewicz, D.; Dwojak, T.; Pietruszka, M.; and Pałka, G. 2021.
Going full-tilt boogie on document understanding with text-image-layout transformer. In International Conference on Document Analysis and Recognition, 732–747. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; Krueger, G.; and Sutskever, I. 2021. Learning Transferable Visual Models From Natural Language Supervision. In International Conference on Machine Learning. Raffel, C.; Shazeer, N. M.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 2019. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. ArXiv, abs/1910.10683. Ren, S.; He, K.; Girshick, R. B.; and Sun, J. 2015. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39: 1137–1149. Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. ArXiv, abs/1505.04597. Seunghyun, P.; Seung, S.; Bado, L.; Junyeop, L.; Jaeheung, S.; Minjoon, S.; and Hwalsuk, L. 2019. CORD: A Consolidated Receipt Dataset for Post-OCR Parsing. Singh, A.; Natarajan, V.; Shah, M.; Jiang, Y.; Chen, X.; Batra, D.; Parikh, D.; and Rohrbach, M. 2019. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8317– 8326. Tang, P.; Appalaraju, S.; Manmatha, R.; Xie, Y.; and Mahadevan, V. 2023a. Multiple-Question Multiple-Answer Text-VQA. arXiv preprint arXiv:2311.08622. Tang, P.; Zhu, P.; Li, T.; Appalaraju, S.; Mahadevan, V.; and Manmatha, R. 2023b. DEED: Dynamic Early Exit on Decoder for Accelerating Encoder-Decoder Transformer Models. arXiv preprint arXiv:2311.08623. Tang, Z.; Yang, Z.; Wang, G.; Fang, Y.; Liu, Y.; Zhu, C.; Zeng, M.; Zhang, C.-Y.; and Bansal, M. 2022. Unifying Vision, Text, and Layout for Universal Document Processing. ArXiv, abs/2212.02623. 
Thomas, W.; Lysandre, D.; Victor, S.; Julien, C.; Clement, D.; Anthony, M.; Pierric, C.; Tim, R.; Rémi, L.; Funtowicz, M.; et al. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.

Wang, J.; Jin, L.; and Ding, K. 2022. LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding. In Annual Meeting of the Association for Computational Linguistics.

Wang, J.; Yang, Z.; Hu, X.; Li, L.; Lin, K.; Gan, Z.; Liu, Z.; Liu, C.; and Wang, L. 2022a. GIT: A Generative Image-to-text Transformer for Vision and Language. arXiv preprint arXiv:2205.14100.

Wang, Z.; Zhou, Y.; Wei, W.; Lee, C.-Y.; and Tata, S. 2022b. A Benchmark for Structured Extractions from Complex Documents. ArXiv, abs/2211.15421.

Xu, Y.; Li, M.; Cui, L.; Huang, S.; Wei, F.; and Zhou, M. 2020a. Layoutlm: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1192–1200.

Xu, Y.; Xu, Y.; Lv, T.; Cui, L.; Wei, F.; Wang, G.; Lu, Y.; Florencio, D.; Zhang, C.; Che, W.; et al. 2020b. LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding. arXiv preprint arXiv:2012.14740.

Xu, Y.; Xu, Y.; Lv, T.; Cui, L.; Wei, F.; Wang, G.; Lu, Y.; Florencio, D.; Zhang, C.; Che, W.; et al. 2021. LayoutLMv2: Multi-modal Pre-training for Visually-rich Document Understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2579–2591.

Yang, Z.; Lu, Y.; Wang, J.; Yin, X.; Florencio, D.; Wang, L.; Zhang, C.; Zhang, L.; and Luo, J. 2021. Tap: Text-aware pre-training for text-vqa and text-caption. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8751–8761.

Łukasz Borchmann; Pietruszka, M.; Stanisławek, T.; Jurkiewicz, D.; Turski, M.
P.; Szyndler, K.; and Gralinski, F. 2021. DUE: End-to-End Document Understanding Benchmark. In NeurIPS Datasets and Benchmarks.
CatmullRom Splines-Based Regression for Image Forgery Localization

Li Zhang1,2, Mingliang Xu2, Dong Li2, Jianming Du1,†, Rujing Wang1,2†*
1 Hefei Institute of Physical Science, Chinese Academy of Sciences, China
2 University of Science and Technology of China, China

Abstract

IFL (Image Forgery Localization) helps secure digital media forensics. However, many methods suffer from false detections (i.e., FPs) and inaccurate boundaries. In this paper, we propose the CatmullRom Splines-based Regression Network (CSR-Net), which makes the first attempt to rethink the IFL task from the perspective of regression to deal with these problems. Specifically, we propose an adaptive CatmullRom spline fitting scheme for coarse localization of the tampered regions. Then, for false positive cases, we develop a novel re-scoring mechanism, which filters out samples that fail to produce responses on both the classification branch and the instance branch. Later on, to further constrain the boundaries, we design a learnable texture extraction module, which refines and enhances the contour representation by decoupling the horizontal and vertical forgery features, thus suppressing FPs. Compared to segmentation-based methods, our method is simple but effective since it requires no post-processing. Extensive experiments show the superiority of CSR-Net over existing state-of-the-art methods, not only on standard natural image datasets but also on social media datasets.

Introduction

Image Forgery Localization (IFL), also known as image tampering detection, refers to the task of detecting the location of forged regions in a suspicious image. With the increasing availability of digital image editing software, image forgery has become a prevalent issue in various fields such as journalism, forensics, and biometrics. However, IFL has not been adequately studied due to the various techniques used to manipulate images, including removal, splicing, and cloning.
Furthermore, forgers may use carefully crafted tools to conceal the tampered regions and make detection more difficult. Therefore, the development of accurate and efficient forgery localization algorithms is crucial to address this issue, ensuring the integrity and credibility of digital images in today's world. Thanks to the rapid development of deep learning in recent years, many excellent methods have been introduced and continue to drive the progress of the field (Wang et al. 2022; Hu et al. 2020).

*† corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: The categorization of existing methods applied to IFL. Please zoom in for better visualization.

However, due to the specific properties of forgery regions, such as large variance in color, texture, and shape, there are still two main challenging issues that have not been addressed satisfactorily in Image Forgery Localization. The first issue is false positives (FPs). False positives refer to test results that indicate the presence of a satisfactory target region when in reality it is not convincing. Traditional segmentation-based methods often suffer from this situation (as shown in Fig. 2). Binarization, an indispensable and decisive strategy used in these methods, is a threshold-sensitive operation that directly determines the number of foreground region blocks. An unreasonable threshold value often leads to the appearance of unexpected regions (i.e., false positive cases) in traditional segmentation methods. However, many methods, while focusing on potentially tampered regions, usually ignore the false alarm rate. This undermines the credibility of detection results when digital content is propagated, for example by journalistic sources, and constrains the development of more convincing forensic conclusions. The second issue is inaccurate boundaries.
Figure 2: The illustration of FPs in traditional segmentation-based methods.

As shown in the results displayed in Fig. 1 (a), traditional segmentation-based methods suffer from inconsistent mask predictions between consecutive decoder layers, which leads to inconsistent optimization goals and weak coupling of feature spaces. On the other hand, the localization effect is also unsatisfactory when a general regression method is directly introduced to handle the task, because the bounding boxes used can only localize the target region in a quadrilateral fashion, while the target region mostly appears as irregular curves (Fig. 1 (b) shows the localization effect of rotation detection (Li et al. 2022), where we use the minimum bounding quadrilateral of the masked region as the Ground Truth). Increasingly elaborate tampered images pose a greater challenge, as most methods do not constrain or explicitly model forged region boundaries well, which can easily lead to the blending of other targets or incompatible backgrounds into the detection results. Recently, some regression-based strategies have made significant advances in false-positive determination in the field of object detection (You et al. 2022; Li and Košecká 2022; Chen et al. 2023). Differing from object detection, IFL is a pixel-level task, which means that directly migrating those approaches brings performance degradation. To this end, some customized methods and processing need to be introduced to bridge these two tasks effectively. Specifically, first, for the mask-labeled GT, we introduce CatmullRom splines to transform it into polygonal frames, thus enabling regression strategies to be applied to pixel-level tasks such as IFL.
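Before any spline can be fitted, the mask-labeled GT has to be turned into an ordered point sequence. The paper does not spell out this conversion here, so the following is only a minimal sketch under our own assumptions: the function name, the centroid-angle ordering, and the fixed point count m are ours, and the angle-based ordering only behaves well for roughly convex regions.

```python
import numpy as np

def mask_to_contour_points(mask, m=16):
    """Convert a binary mask into m ordered boundary points (x, y).

    Boundary pixels are foreground pixels with at least one background
    4-neighbor; they are ordered by angle around the region centroid and
    resampled to m points. A sketch, not the paper's exact procedure.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()          # region centroid
    padded = np.pad(mask, 1)
    # Interior pixels: all four 4-neighbors are also foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask.astype(bool) & ~interior.astype(bool)
    by, bx = np.nonzero(boundary)
    # Order boundary pixels by polar angle around the centroid.
    order = np.argsort(np.arctan2(by - cy, bx - cx))
    by, bx = by[order], bx[order]
    # Resample to m roughly evenly spaced boundary points.
    idx = np.linspace(0, len(by) - 1, m).round().astype(int)
    return np.stack([bx[idx], by[idx]], axis=1)
```

For non-convex tampered regions, a proper contour-tracing routine (e.g. Moore neighborhood tracing) would replace the angle sort, but the resampling step would stay the same.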
Meanwhile, during training and inference, in order to make the polygonal labeling closer to the real label, an adaptive parametric CatmullRom spline method is proposed, which minimizes the similarity gap between the predicted region and the Ground Truth and adapts to the curvature of the target region. Second, to go further and explicitly suppress false positives in the localization results, we propose an effective re-scoring mechanism: through two independent prediction branches, each producing a region classification score and an instance score, we directly reject false positives that do not receive a response in both branches. In addition, to get more accurate boundaries, we further refine the contours of the predicted regions by decoupling horizontal texture features and vertical texture features, modeling the forged region boundaries and reducing the overlap between them and other masks. Our contributions can be summarized as threefold:

• We tailor a CatmullRom Splines-based Regression Network (CSR-Net) to make the first attempt to introduce regression methods into a pixel-level task (referring to IFL in this paper).

• To explicitly suppress false positive samples and to avoid unclear edges, we design two mutually complementary and reinforcing components, i.e., a Comprehensive Re-scoring Algorithm (CRA) to comprehensively evaluate the confidence score of each region as a tampered region, while Vertical Texture-interactive Perception (VTP) is developed to generate more accurate region edges.

• Extensive experiments on multiple public datasets (including natural image datasets and social media datasets) demonstrate the superiority of our method compared to state-of-the-art methods in IFL.

Related Work

Classic Methods in IFL

The prior art in IFL mainly relies on feature extraction and matching techniques, e.g. color filter array (Ferrara et al. 2012), photo-response non-uniformity noise (Chierchia et al.
2014), illumination (Carvalho et al. 2015), JPEG artifacts (Iakovidou et al. 2018) and so on. Despite these achievements, such methods often struggle with complex forgery techniques or when the forged region is well-blended into the background of the image. In recent years, deep learning-based methods have shown great potential in IFL, and many methods have been proposed that promote the progress and development of this field. For instance, Liu et al. (2022) proposed PSCC-Net, which analyzes the image with a two-path (top-down and bottom-up) methodology. A self-adversarial training strategy and a reliable coarse-to-fine network (Zhuo et al. 2022) are designed, utilizing a self-attention mechanism to localize forged regions in forgery images. However, these methods are all conducted from the perspective of segmentation: an unlearnable hyperparameter needs to be predefined for the binarization of different regions, limiting further development of these methods.

Regression-Based Methods

Regression-based methods have been widely used (Savran, Sankur, and Bilge 2012; Xia et al. 2021) in computer vision, particularly in tasks such as object detection (Carion et al. 2020) and localization (Choe et al. 2020). Over the years, various algorithms have been proposed, such as Fast R-CNN (Girshick 2015), YOLO (Redmon and Farhadi 2018) and DiffusionDet (Chen et al. 2022). Some improved algorithms can adapt quickly to more complex scenes, for example solving false-positive samples (You et al. 2022) caused by uneven sample segmentation in 3D scenes through detection frames. In certain scenarios where quadrilateral regions cannot be detected, parametric curves have been introduced to solve the regression problem. This mechanism is achieved through an interpolation spline or an approximation spline function.
For example, gesture recognition uses a customized way to fit points to Bezier curves with constant memory usage, while B-splines are used to detect lane markings and regress their 3D location (Pittner, Condurache, and Janai 2023). In this paper, we show how to tailor a customized CatmullRom-based detection scheme for IFL and find that reasonable parameter values can significantly improve the model fit. Our results demonstrate that a proper balance of the tension factor (τ) can help improve the characterization performance of the CatmullRom splines, emphasizing the importance of flexible parameter adjustment in practical applications.

Method

Overview

Fig. 3 gives an overview of our framework. The input image is represented as X ∈ R^{H×W×3}. First, we use an FPN embedded with ResNet-50 as the backbone network to conduct CatmullRom spline detection. More specifically, our approach utilizes a parameterization method based on CatmullRom splines to fit the target segmentation area adaptively (orange part). Following (Chen et al. 2017), we adopt atrous spatial pyramid pooling (ASPP) together with ResNet-50 to capture long-range contextual information and multi-scale features. This anchor-free convolutional neural network significantly simplifies detection for our task and also allows us to obtain a coarse feature map. Later on, we use a re-scoring mechanism (CRA) to filter out false positive samples among the suspicious regions highlighted on the coarse feature maps (blue part). Finally, we perform texture extraction (by VTP) on the regions from both horizontal and vertical directions simultaneously, in anticipation of obtaining more accurate boundaries (green part). Note that each retained tampered region will be processed by VTP independently.

CatmullRom Splines Detection

Most traditional methods in IFL follow the segmentation-based paradigm (Liu et al. 2022; Wang et al.
2023; Wu, AbdAlmageed, and Natarajan 2019; Li et al. 2023). However, regression-based methods tend to be a more efficient approach when dealing with such mask or polygon-based datasets, e.g. (Liu et al. 2019; Pranav, Zhenggang et al. 2020; Zhang et al. 2022). In general, mainstream regression-based methods require complex processing to fit the instance boundaries, which leads to unreliability and instability in practice. In recent years, spline curves have been used in computer graphics applications to generate curves of various shapes, for example automatic driving lane lines (Ma et al. 2019; Yu and Chen 2017), text detection (Liu et al. 2020; Tang et al. 2022; Nguyen et al. 2021), fault detection (Park et al. 2011; Guo and Wang 2005), etc. Among them, the CatmullRom spline function is a classic interpolating spline, which is suitable for parameterizing tampered regions due to its fitting quality and low inference cost (Chandra 2020; Li 2022). Specifically, CatmullRom splines are a family of cubic interpolating splines formulated such that the tangent at each point P_i is calculated using the previous and next point on the spline. Under the given control points, there are variations of CatmullRom spline functions that can be adapted to any desired shape (Li and Chen 2016; Li, Liu, and Liu 2022). In addition, only integer coefficients are involved in constructing a cubic CatmullRom spline function, which reduces the implementation cost compared to other spline functions. All these properties contribute to faster inference speed and lower computation (FLOPs). Mathematically, the CatmullRom spline is defined as Eq. 1:

c_i(t) = \sum_{j=0}^{3} b_j(t)\, p_{i+j}, \quad i = 0, 1, \ldots, n-3 \qquad (1)

where 0 ≤ t ≤ 1, p_i (i = 0, 1, \ldots, n-3; n ≥ 3) are control points, and b_j(t) is the basis. For example, it can be expressed by Eq.
2 when the highest power of t in the function b_j(t) is 3:

c_i(t) = \frac{1}{2} \begin{bmatrix} 1 & t & t^2 & t^3 \end{bmatrix} \begin{bmatrix} 0 & 2 & 0 & 0 \\ -\tau & 0 & \tau & 0 \\ 2\tau & \tau-6 & -2(\tau-3) & -\tau \\ -\tau & 4-\tau & \tau-4 & \tau \end{bmatrix} \begin{bmatrix} p_i \\ p_{i+1} \\ p_{i+2} \\ p_{i+3} \end{bmatrix} \qquad (2)

In order to reconcile arbitrary shapes of the tampered regions with CatmullRom splines, we thoroughly studied oriented and curved tampered regions from existing datasets as well as authentic images. In CatmullRom splines, τ (the tension factor) is an important parameter used to control the tightness of the spline. A higher value of the tension factor causes the curve to bend more tightly between the control points, thus fitting closer to the given data points during the fitting process. Conversely, a lower value of the tension factor causes the curve to be smoother between the control points. Intuitively, the conventional CatmullRom spline (parameter τ = 1) is a poor fit for the IFL task directly, so we sought the right balance between fitting accuracy and curve smoothness by adjusting τ. Ablation experiments (see the ablation analysis) show that CatmullRom splines can be reliable for this task when τ is set to 16. This also allows the learned control points to be closer to the foreground (tampered) area.

CatmullRom Ground Truth Generation

In IFL, many benchmarks use mask or polygon-based annotations as public datasets (Dong, Wang, and Tan 2013; Hsu and Chang 2006; Alibaba 2021/2022). Given the annotated points \{p_i\}_{i=1}^{n} from the curved boundary, where p_i represents the i-th annotated point, the main goal is to obtain the optimal parameters for the CatmullRom spline c(t) in Eq. 1. To achieve this, we can simply apply the standard least squares method, as shown in Eq. 3:

\begin{bmatrix} b_0(t_0) & \cdots & b_3(t_0) \\ b_0(t_1) & \cdots & b_3(t_1) \\ \vdots & \ddots & \vdots \\ b_0(t_m) & \cdots & b_3(t_m) \end{bmatrix} \begin{bmatrix} c_{x0} & c_{y0} \\ c_{x1} & c_{y1} \\ c_{x2} & c_{y2} \\ c_{x3} & c_{y3} \end{bmatrix} = \begin{bmatrix} P_{x0} & P_{y0} \\ P_{x1} & P_{y1} \\ \vdots & \vdots \\ P_{xm} & P_{ym} \end{bmatrix} \qquad (3)

where m represents the number of annotated points for a curved boundary.
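To make Eq. 2 and Eq. 3 concrete, here is a small numerical sketch: the tension-parameterized basis row, single-segment evaluation per Eq. 1, and a least-squares fit of the four control points per Eq. 3. The function names and the use of `numpy.linalg.lstsq` are our choices, not the paper's implementation.

```python
import numpy as np

def catmullrom_basis(t, tau=1.0):
    """Row vector [b0(t), ..., b3(t)] from the tension matrix of Eq. 2."""
    M = 0.5 * np.array([
        [0.0,      2.0,          0.0,             0.0 ],
        [-tau,     0.0,          tau,             0.0 ],
        [2 * tau,  tau - 6.0,   -2 * (tau - 3.0), -tau],
        [-tau,     4.0 - tau,    tau - 4.0,        tau],
    ])
    return np.array([1.0, t, t**2, t**3]) @ M

def eval_segment(P, t, tau=1.0):
    """Evaluate one segment c_i(t) = sum_j b_j(t) p_{i+j}; P is 4x2."""
    return catmullrom_basis(t, tau) @ P

def fit_control_points(points, ts, tau=1.0):
    """Least-squares fit of the 4 control points (Eq. 3).

    points: (m+1)x2 annotated points; ts[i] is the normalized
    cumulative-arc-length parameter of points[i].
    """
    A = np.stack([catmullrom_basis(t, tau) for t in ts])  # (m+1) x 4
    C, *_ = np.linalg.lstsq(A, points, rcond=None)
    return C  # 4x2 array of [c_x, c_y] control points
```

At τ = 1 this reduces to the conventional Catmull-Rom matrix, interpolating p_{i+1} at t = 0 and p_{i+2} at t = 1; larger τ pulls the curve tighter toward the data, matching the tension discussion above.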
Here, t is calculated as the ratio of the cumulative length to the perimeter of the polyline; p_{ij} refers to the basis terms in Eq. 1, and P_i denotes the new coordinate points after the transformation. According to Eq. 1 and Eq. 3, we convert the original mask annotation into a parameterized CatmullRom spline. An illustration is given in Fig. 4.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7198

Figure 3: Overview of our proposed CSR-Net. The top part is our pipeline, which takes a suspicious image (H×W×3) as input, and the output is the predicted mask (H×W×1, the tampered regions). Formally, an uncertain number of potential regions is obtained after CRA processing, and VTP refines each region independently. The bottom parts are details of each module.

Figure 4: An example of cubic CatmullRom splines. Note that with only two end-points c_1 and c_5, the CatmullRom spline degenerates to a straight line.

Comprehensive Re-scoring Algorithm
The fundamental mechanism behind Mask R-CNN is to treat the classification confidence of the resulting bounding boxes as scores, after which a predetermined threshold filters out the background boxes. However, when a bounding box contains an obviously incompatible region instance, it is accompanied by a large amount of background information; Mask R-CNN often filters out such low-score true positives while retaining some FPs with relatively high confidence. Therefore, we re-assign scores to each region instance. Specifically, the comprehensive score of a region instance is composed of two parts: the classification score (CLS) and the instance score (INS). Mathematically, given the predicted n-class scores CLS = \{ s^{cls}_{ij} \mid j \in [0, \dots, n-1] \} and INS = \{ s^{ins}_{ij} \mid j \in [0, \dots, n-1] \}, the comprehensive score for the i-th proposal is computed via the customized softmax function in Eq. 4:

s_{ij} = \frac{e^{s^{cls}_{ij} + s^{ins}_{ij}}}{\sum_{l=0}^{n-1} e^{s^{cls}_{il} + s^{ins}_{il}}}  (4)

In our work, we adopt n = 2, where the two classes represent tampered (foreground) and authentic (background) regions. Therefore, we only need to calculate the score for the foreground class. CLS is directly obtained by a classification branch similar to Mask R-CNN, and INS is the activation value of the region instance on the global region segmentation map. In detail, each region instance is projected onto the tampered region segmentation map, containing P_i = \{ p^1_i, p^2_i, \dots, p^n_i \}, and the mean of P_i over the region instance area can be formulated as:

s^{ins}_{i1} = \frac{\sum_j p^j_i}{N}  (5)

where P_i is the set of pixel values of the i-th region instance on the region segmentation map. The classification score is organically integrated with the instance score to obtain the comprehensive score, which reduces the FP confidence in practice, because FPs tend to have weaker responses than true regions on the segmentation map. Experimental results in the following sections show that our design is particularly friendly to splicing cases: since splicing cases usually enjoy a stronger response on the segmentation map, a high instance score will compensate for a low classification score.

Vertical Texture-interactive Perception
Traditional edge detection operators (e.g., Sobel, Roberts, Prewitt) help to extract handcrafted features in natural image processing tasks, but their biggest drawback is that they cannot learn dynamically according to the specificity of the task. Inspired by (Holla and Lee 2022), we wrap an edge detection operator into a learnable module coined the Sobel layer, see Fig. 5. Furthermore, for better modeling of tampered area boundaries, we introduce Vertical Texture-interactive Perception (VTP) into our network.
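Looking back at the re-scoring step, Eq. 4 and Eq. 5 can be sketched numerically as follows (a hypothetical sketch with our own variable names; the real scores come from the network's classification branch and segmentation map):

```python
import numpy as np

def instance_score(seg_map, region_mask):
    """Eq. 5: mean activation of a region instance on the segmentation map."""
    return float(seg_map[region_mask].mean())

def comprehensive_score(cls_scores, ins_scores):
    """Eq. 4: softmax over the summed classification and instance scores
    (length-n arrays for one proposal; n = 2 in the paper)."""
    logits = np.asarray(cls_scores, dtype=float) + np.asarray(ins_scores, dtype=float)
    e = np.exp(logits - logits.max())  # numerically stabilized softmax
    return e / e.sum()
```

A splicing proposal with a mediocre classification score but a strong mean response on the segmentation map keeps a high comprehensive foreground score, while FPs with weak responses are pushed down.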
In VTP, a tampered region is represented with a set of contour points; these points, containing strong texture characteristics, can accurately localize tampered regions with arbitrary shapes.

Figure 5: Diagram of the Sobel layer, used in VTP for enhancing edge-related patterns and manipulation edge detection. Features from the i-th block first go through the Sobel Unit (SU) followed by an Edge Residual Unit (ERU). For training and optimization reasons, a residual learning strategy is introduced.

Specifically, there are two core parallel branches in VTP. In the top branch, we introduce a convolutional kernel of size 1 × k sliding over the feature maps to model local texture information in the horizontal direction, which only focuses on texture characteristics within a k-range region. Our pre-experiments show that this flexible design is simple yet effective. Moreover, it is nearly cost-free while maintaining competitive efficiency. Through a similar paradigm, the bottom branch models texture characteristics in the vertical direction through a convolutional kernel of size k × 1. Here k is a hyperparameter that controls the size of the receptive field of texture characteristics; in our experiments, we take k = 3. Finally, two independent sigmoid layers normalize the heatmaps to [0, 1] in both directions. In this way, tampered regions can be detected in two orthogonal directions and represented with contour points in two different heatmaps, each of which only responds to texture characteristics in a certain direction. As false positive predictions can be effectively suppressed by considering the response value in both orthogonal directions, the two heatmaps from VTP are further processed through the Point Re-scoring Algorithm. Concretely, points in different heatmaps are first processed through NMS to achieve a tight representation.
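The NMS step on heatmap points can be sketched as a simple local-maximum filter (an illustrative sketch; the window radius and score threshold are our own choices):

```python
import numpy as np

def heatmap_nms_points(heatmap, radius=1, thresh=0.5):
    """Keep a pixel as a contour-point candidate only if it exceeds the
    threshold and is the maximum within its (2*radius+1)^2 window."""
    H, W = heatmap.shape
    points = []
    for y in range(H):
        for x in range(W):
            v = heatmap[y, x]
            if v < thresh:
                continue
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            if v >= heatmap[y0:y1, x0:x1].max():
                points.append((y, x, float(v)))
    return points
```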
Then, to suppress predictions with strong unidirectional or weakly orthogonal responses, we only select the points with distinct responses in both heatmaps as candidates. Finally, the tampered region can be represented with a polygon made up of these high-quality contour points.

Optimization
As described above, our network is multi-task. Therefore, we calculate the overall loss from the following components:

L = L_{rpn} + \lambda_1 L_{cls} + \lambda_2 L_{mask} + \lambda_3 L_{gts} + \lambda_4 L_{CR}  (6)

where L_{rpn}, L_{cls} and L_{mask} are the standard losses derived from Mask R-CNN. L_{gts} is used to optimize tampered region detection, defined as:

L_{gts} = \frac{1}{N} \sum_i \left( -\log \frac{e^{p_i}}{\sum_j e^{p_j}} \right)  (7)

L_{gts} is a softmax loss, where p is the output prediction of the network. L_{CR} is used to optimize the fit of CatmullRom spline detection, defined as:

L_{CR} = L_{ctr} + L_{bias}  (8)

L_{ctr} and L_{bias} are both FCOS losses (Tian et al. 2019). The former optimizes the distance loss from the center of the CatmullRom control points, while the latter constrains the offset distance of these control points from the center.

Experiment
Experimental Setup
Pre-training Data. We create a sizable image tampering dataset and use it to pre-train our model. This dataset includes three categories: 1) splicing, 2) copy-move, and 3) removal.
Testing Datasets. Following (Wang et al. 2022; Hu et al. 2020), we evaluate our model on CASIA (Dong, Wang, and Tan 2013), Columbia (Hsu and Chang 2006), NIST16 (Guan et al. 2019), and COVER (Wen et al. 2016).
Evaluation Metrics. To quantify localization performance, following previous works (Hu et al. 2020), we use pixel-level Area Under Curve (AUC) and F1 score on manipulation masks. Since binary masks are required to compute F1 scores, we adopt the Equal Error Rate (EER) threshold to binarize them.
Implementation Details. The input images are resized to 512 × 512. In this work, the backbone network is ResNet-50, pre-trained on ImageNet.
Implemented in PyTorch, our model is trained on a GeForce RTX 3090 GPU, using Adam as the optimizer.

Comparison with the SOTA Methods
Following classic methods (Hu et al. 2020; Wang et al. 2022), our model is compared with other state-of-the-art tampering localization methods under two settings: 1) training on the synthetic dataset and evaluating on the full test datasets, and 2) fine-tuning the pre-trained model on the training split of the test datasets and evaluating on their test split. The pre-trained model demonstrates each method's generalizability, and the fine-tuned model demonstrates how well each method performs locally once the domain discrepancy has been significantly reduced.

Pre-trained Model. Tab. 1 shows the localization performance of pre-trained models for different SOTA methods on five datasets under pixel-level AUC.

Figure 6: Visualization of the predicted manipulation mask by different methods. From left to right, we show forged images, predictions of ManTra-Net, SPAN, PSCCNet, TruFor, Ours, and GT masks.

Method        Data   Columbia  Coverage  CASIA  NIST16  IMD20
SPAN           96K       93.6      92.2   79.7    84.0   75.0
TruFor        100K       97.7      85.4   83.3    83.9   81.8
PSCCNet       100K       98.2      84.7   82.9    85.5   80.6
ObjectFormer   62K       95.5      92.8   84.3    87.2   82.1
ManTraNet      64K       82.4      81.9   81.7    79.5   74.8
Ours           60K       96.8      94.3   88.1    88.3   85.4

Table 1: Comparisons of manipulation localization AUC (%) scores of different pre-trained models.

Our CSR-Net achieves the best localization performance on Coverage, CASIA, NIST16, and IMD20, and ranks second on Columbia. In particular, it achieves 94.4% on the copy-move dataset (COVER), whose image forgery regions are indistinguishable from the background. This validates that our model has a superior ability to suppress FPs and generate more accurate edges. Yet, we fail to achieve the best performance on Columbia, with an AUC score 1.4% lower than that of PSCCNet.
We conjecture that the explanation may be that the distribution of their synthesized training data closely resembles that of the Columbia dataset. This is further supported by the results in Tab. 2, which show that CSR-Net performs better than PSCCNet in terms of both AUC and F1 scores. Furthermore, it is worth pointing out that we achieve decent results with less pre-training data.

Fine-tuned Model. The network weights of the pre-trained model are used to initialize the fine-tuned models, which are trained on the training splits of the Coverage, CASIA, and NIST16 datasets, respectively. We evaluate the fine-tuned models of different methods in Tab. 2. In terms of both AUC and F1, our model achieves significant performance gains. This validates that the CRA module effectively suppresses false positive cases and that VTP improves the accuracy of predicted region locations and boundaries.

Methods        Coverage        CASIA          NIST16
               AUC    F1       AUC    F1      AUC    F1
J-LSTM         61.4   -        -      -       76.4   -
H-LSTM         71.2   -        -      -       79.4   -
SPAN           93.7   55.8     83.8   38.2    96.1   58.2
PSCCNet        94.1   72.3     87.5   55.4    99.1   74.2
ObjectFormer   95.7   75.8     88.2   57.9    99.6   82.4
RGB-N          81.7   43.7     79.5   40.8    93.7   72.2
Ours           97.9   78.0     90.4   58.5    99.7   83.5

Table 2: Comparison of manipulation localization results using fine-tuned models.

Taking the results in Tab. 1 and Tab. 2 together, our approach confirms that introducing regression methods into pixel-level tasks is effective, as anticipated in the Introduction.

Ablation Analysis
In this section, we conduct experiments to demonstrate the effectiveness of our proposed CSR-Net. Formally, the CatmullRom Splines-based Regression (CSR) is introduced to better describe the tampered region compared to traditional regression methods. The Comprehensive Re-scoring Algorithm (CRA) aims to choose the expected regions with high classification scores as well as superior instance scores, while Vertical Texture-interactive Perception (VTP) models texture features both horizontally and vertically to refine the target region.
To further evaluate the effectiveness of CSR, CRA, and VTP, we remove them separately and verify the forgery localization performance on the CASIA and NIST16 datasets. Tab. 3 shows the quantitative results.

Figure 7: L2 distance with different values of τ. We show the results in the format "dataset/label"; for example, RIFL21/Mask-level means the average distance of control points in RIFL21 from the mask-level Ground Truth.

The baseline (I) denotes that we only use the traditional regression method (Li et al. 2022). From the ablation experiments, we can infer that the F1 scores decrease by 1.9% on CASIA and 1.7% on NIST16 when VTP is not involved. Without CRA, the AUC scores decrease more than in (IV). Moreover, when CSR is not available, significant performance degradation is observed in (II), i.e., 12.3% in terms of AUC and 11.2% in terms of F1 on CASIA.

Index  Variants   CASIA          NIST16
                  AUC    F1      AUC    F1
I      Baseline   68.5   35.3    75.9   51.2
II     w/o CSR    78.1   47.6    86.1   62.1
III    w/o CRA    86.3   55.5    95.9   78.8
IV     w/o VTP    88.9   56.9    97.8   81.8
V      Ours       90.4   58.8    99.7   83.5

Table 3: Ablation results on the CASIA and NIST16 datasets using different variants of CSR-Net. AUC and F1 scores (%) are reported.

In Fig. 7, we vary the parameter τ in CatmullRom Ground Truth Generation to validate the respective fitting quality on a natural image dataset (CASIA) and a social media dataset (RIFL21). Intuitively, as τ gradually increases, the Euclidean distance between the fitted CatmullRom control points and the mask-level Ground Truth on both datasets gradually decreases, showing a better fit. However, when τ exceeds 16, the Euclidean distance instead tends to expand, implying that the fitting quality begins to decrease. Clearly, τ = 16 is an excellent choice for generating the optimal CatmullRom-based Ground Truth.
Visualization Results
Qualitative results. We provide the predicted forgery masks of different methods in Fig. 6. Since the source code of ObjectFormer (Wang et al. 2022) is unavailable, we cannot show its predictions. Compared with the state-of-the-art methods, our CSR-Net achieves better performance, both in suppressing false positives and in producing more accurate tampered region boundaries. We believe the improvement benefits from CRA and VTP: CRA considers each possible area more comprehensively and identifies the subtle differences between tampered and authentic regions, while VTP models texture boundaries along two orthogonal directions simultaneously to accurately describe the target regions.

Different splines-based regression. There are many types of interpolation functions, classical ones being CatmullRom splines and Bezier curves. The former is an interpolating spline, which passes exactly through a set of known data points via a series of nodes, while the latter is an approximating spline, which only approximates the data points. Datasets for IFL are produced from natural images and social media, and the tampered regions have diverse shapes. Through comparison experiments, we found that CatmullRom splines are more suitable for datasets with diverse curvature (e.g., IFL), while Bezier curve-based methods are sometimes susceptible to interference from other targets. For more details, please refer to Fig. 8.

Figure 8: Visualization of the results by different splines-based regression. From left to right, we show forged images, results of different splines-based regression (the left side shows the confidence score, while the right side is the predicted manipulation mask), and GT masks. Due to space limitations, please zoom in for better visualization.
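The interpolation-versus-approximation distinction can be checked numerically. The sketch below (ours, with illustrative points) evaluates one cubic CatmullRom segment and a cubic Bezier curve on the same four points; only the former passes exactly through the inner points:

```python
import numpy as np
from math import comb

def catmull_rom(t, p, tau=16.0):
    """One cubic CatmullRom segment (Eq. 2) over four points p."""
    M = np.array([[0, 2, 0, 0],
                  [-tau, 0, tau, 0],
                  [2 * tau, tau - 6, -2 * (tau - 3), -tau],
                  [-tau, 4 - tau, tau - 4, tau]], dtype=float)
    return 0.5 * np.array([1.0, t, t ** 2, t ** 3]) @ M @ p

def bezier(t, p):
    """Cubic Bezier curve using the same four points as its control polygon."""
    n = len(p) - 1
    w = np.array([comb(n, k) * (1 - t) ** (n - k) * t ** k for k in range(n + 1)])
    return w @ p

pts = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], dtype=float)
# The CatmullRom segment interpolates its inner control points exactly,
assert np.allclose(catmull_rom(0.0, pts), pts[1])
# while the Bezier curve only approximates them.
assert not np.allclose(bezier(1 / 3, pts), pts[1])
```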
Conclusion
In this paper, we elaborately design a customized CatmullRom Splines-based Regression Network (CSR-Net) for IFL, which is the first attempt to introduce regression methods into this pixel-level task. In detail, in contrast to traditional detection methods that rely on bounding boxes, we first introduce the CatmullRom fitting technique, which adapts contour modeling for control points in the target region, thereby achieving more accurate and efficient localization of tampered regions. Then, to suppress FPs, the Comprehensive Re-scoring Algorithm (CRA) is designed to filter the exact tampered region using the classification score and the instance score. Moreover, we propose a learnable region texture extraction module named Vertical Texture-interactive Perception (VTP) to further refine the edges. Thus, CSR-Net can perceive all tampered regions with nearly no FPs and achieve accurate localization. Extensive experiments show the superiority of CSR-Net over existing state-of-the-art approaches, not only on natural image datasets but also on social media datasets.

Acknowledgements
This work was supported by the National Natural Science Foundation of China under grant (No.32171888), the Dean's Fund of Hefei Institute of Physical Science, Chinese Academy of Sciences (YZJJ2022QN32), the Natural Science Foundation of Anhui Province (No.2208085MC57), and the National Key Research and Development Program of China (2019YFE0125700).

References
Alibaba. 2021/2022. Real-World Image Forgery Localization dataset. https://tianchi.aliyun.com/competition/entrance/531945/introduction?spm=5176.12281957.0.0.1aaf2448THhlg4.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, 213–229. Springer.
Carvalho, T.; Faria, F. A.; Pedrini, H.; Torres, R. d. S.; and Rocha, A. 2015. Illuminant-based transformed spaces for image forensics. IEEE Transactions on Information Forensics and Security, 11(4): 720–733.
Chandra, M. 2020. Hardware implementation of hyperbolic tangent function using catmull-rom spline interpolation. arXiv preprint arXiv:2007.13516.
Chen, L.-C.; Papandreou, G.; Schroff, F.; and Adam, H. 2017. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587.
Chen, S.; Sun, P.; Song, Y.; and Luo, P. 2022. Diffusiondet: Diffusion model for object detection. arXiv preprint arXiv:2211.09788.
Chen, Y.; Yu, Z.; Chen, Y.; Lan, S.; Anandkumar, A.; Jia, J.; and Alvarez, J. 2023. FocalFormer3D: Focusing on Hard Instance for 3D Object Detection. arXiv preprint arXiv:2308.04556.
Chierchia, G.; Poggi, G.; Sansone, C.; and Verdoliva, L. 2014. A Bayesian-MRF approach for PRNU-based image forgery detection. IEEE Transactions on Information Forensics and Security, 9(4): 554–567.
Choe, J.; Oh, S. J.; Lee, S.; Chun, S.; Akata, Z.; and Shim, H. 2020. Evaluating weakly supervised object localization methods right. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3133–3142.
Dong, J.; Wang, W.; and Tan, T. 2013. Casia image tampering detection evaluation database. In 2013 IEEE China Summit and International Conference on Signal and Information Processing, 422–426. IEEE.
Ferrara, P.; Bianchi, T.; De Rosa, A.; and Piva, A. 2012. Image forgery localization via fine-grained analysis of CFA artifacts. IEEE Transactions on Information Forensics and Security, 7(5): 1566–1577.
Girshick, R. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, 1440–1448.
Guan, H.; Kozak, M.; Robertson, E.; Lee, Y.; Yates, A. N.; Delgado, A.; Zhou, D.; Kheyrkhah, T.; Smith, J.; and Fiscus, J. 2019. MFC datasets: Large-scale benchmark datasets for media forensic challenge evaluation.
In 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW), 63–72. IEEE.
Guo, L.; and Wang, H. 2005. Fault detection and diagnosis for general stochastic systems using B-spline expansions and nonlinear filters. IEEE Transactions on Circuits and Systems I: Regular Papers, 52(8): 1644–1652.
Holla, K. S.; and Lee, B. 2022. Convolutional Residual Blocks With Edge Guidance for Image Denoising. In 2022 13th International Conference on Information and Communication Technology Convergence (ICTC), 645–647. IEEE.
Hsu, J.; and Chang, S. 2006. Columbia uncompressed image splicing detection evaluation dataset. Columbia DVMM Research Lab.
Hu, X.; Zhang, Z.; Jiang, Z.; Chaudhuri, S.; Yang, Z.; and Nevatia, R. 2020. SPAN: Spatial pyramid attention network for image manipulation localization. In European conference on computer vision, 312–328. Springer.
Iakovidou, C.; Zampoglou, M.; Papadopoulos, S.; and Kompatsiaris, Y. 2018. Content-aware detection of JPEG grid inconsistencies for intuitive image forensics. Journal of Visual Communication and Image Representation, 54: 155–170.
Li, D.; Zhu, J.; Wang, M.; Liu, J.; Fu, X.; and Zha, Z.-J. 2023. Edge-Aware Regional Message Passing Controller for Image Forgery Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8222–8232.
Li, H., PhD. 2022. Curve Fitting and Interpolation. In Numerical Methods Using Kotlin: For Data Science, Analysis, and Engineering, 169–196. Springer.
Li, J.; and Chen, S. 2016. The cubic α-Catmull-Rom spline. Mathematical and Computational Applications, 21(3): 33.
Li, J.; Liu, C.; and Liu, S. 2022. The quartic Catmull–Rom spline with local adjustability and its shape optimization. Advances in Continuous and Discrete Models, 2022(1): 1–14.
Li, W.; Chen, Y.; Hu, K.; and Zhu, J. 2022. Oriented reppoints for aerial object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1829–1838.
Li, Y.; and Košecká, J. 2022.
Uncertainty aware proposal segmentation for unknown object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 241–250.
Liu, X.; Liu, Y.; Chen, J.; and Liu, X. 2022. PSCC-Net: Progressive spatio-channel correlation network for image manipulation detection and localization. IEEE Transactions on Circuits and Systems for Video Technology.
Liu, Y.; Chen, H.; Shen, C.; He, T.; Jin, L.; and Wang, L. 2020. Abcnet: Real-time scene text spotting with adaptive bezier-curve network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9809–9818.
Liu, Y.; Jin, L.; Zhang, S.; Luo, C.; and Zhang, S. 2019. Curved scene text detection via transverse and longitudinal sequence connection. Pattern Recognition, 90: 337–345.
Ma, L.; Wu, T.; Li, Y.; Li, J.; Chen, Y.; and Chapman, M. 2019. Automated extraction of driving lines from mobile laser scanning point clouds. In Proc. Adv. Cartogr. GISci. ICA, 1–6.
Nguyen, N.; Nguyen, T.; Tran, V.; Tran, M.-T.; Ngo, T. D.; Nguyen, T. H.; and Hoai, M. 2021. Dictionary-guided scene text recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7383–7392.
Park, J.; Kwon, I.-H.; Kim, S.-S.; and Baek, J.-G. 2011. Spline regression based feature extraction for semiconductor process fault detection using support vector machine. Expert Systems with Applications, 38(5): 5711–5718.
Pittner, M.; Condurache, A.; and Janai, J. 2023. 3D-SpLineNet: 3D Traffic Line Detection Using Parametric Spline Representations. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 602–611.
Pranav, M.; Zhenggang, L.; et al. 2020. A day on campus - an anomaly detection dataset for events in a single camera. In Proceedings of the Asian Conference on Computer Vision.
Redmon, J.; and Farhadi, A. 2018. Yolov3: An incremental improvement.
arXiv preprint arXiv:1804.02767.
Savran, A.; Sankur, B.; and Bilge, M. T. 2012. Regression-based intensity estimation of facial action units. Image and Vision Computing, 30(10): 774–784.
Tang, J.; Zhang, W.; Liu, H.; Yang, M.; Jiang, B.; Hu, G.; and Bai, X. 2022. Few could be better than all: Feature sampling and grouping for scene text detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4563–4572.
Tian, Z.; Shen, C.; Chen, H.; and He, T. 2019. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 9627–9636.
Wang, C.; Huang, Z.; Qi, S.; Yu, Y.; Shen, G.; and Zhang, Y. 2023. Shrinking the Semantic Gap: Spatial Pooling of Local Moment Invariants for Copy-Move Forgery Detection. IEEE Transactions on Information Forensics and Security.
Wang, J.; Wu, Z.; Chen, J.; Han, X.; Shrivastava, A.; Lim, S.-N.; and Jiang, Y.-G. 2022. Objectformer for image manipulation detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2364–2373.
Wen, B.; Zhu, Y.; Subramanian, R.; Ng, T.-T.; Shen, X.; and Winkler, S. 2016. COVERAGE—A novel database for copy-move forgery detection. In 2016 IEEE international conference on image processing (ICIP), 161–165. IEEE.
Wu, Y.; AbdAlmageed, W.; and Natarajan, P. 2019. ManTra-Net: Manipulation tracing network for detection and localization of image forgeries with anomalous features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9543–9552.
Xia, W.; Gao, Q.; Wang, Q.; and Gao, X. 2021. Regression-based clustering network via combining prior information. Neurocomputing, 448: 324–332.
You, Y.; Ye, Z.; Lou, Y.; Li, C.; Li, Y.-L.; Ma, L.; Wang, W.; and Lu, C. 2022. Canonical voting: Towards robust oriented bounding box detection in 3d scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1193–1202.
Yu, B.; and Chen, Y. 2017. Driving rhythm method for driving comfort analysis on rural highways. Promet-Traffic&Transportation, 29(4): 371–379.
Zhang, L.; Du, J.; Dong, S.; Wang, F.; Xie, C.; and Wang, R. 2022. AM-ResNet: Low-energy-consumption addition-multiplication hybrid ResNet for pest recognition. Computers and Electronics in Agriculture, 202: 107357.
Zhuo, L.; Tan, S.; Li, B.; and Huang, J. 2022. Self-Adversarial Training incorporating Forgery Attention for Image Forgery Localization. IEEE Transactions on Information Forensics and Security, 17: 819–834.
Deep Semantic Graph Transformer for Multi-View 3D Human Pose Estimation

Lijun Zhang1, 2, Kangkang Zhou1, 2, Feng Lu3, 4, Xiang-Dong Zhou1, 2, Yu Shi1, 2
1Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China
2Chongqing School, University of Chinese Academy of Sciences, Chongqing, China
3Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
4Peng Cheng Laboratory, Shenzhen, China
{zhanglijun, zhouxiangdong, shiyu}@cigit.ac.cn, zhoukangkang21@mails.ucas.ac.cn, lf22@mails.tsinghua.edu.cn

Abstract
Most Graph Convolutional Network-based 3D human pose estimation (HPE) methods address single-view 3D HPE with fixed spatial graphs, and suffer from key problems such as depth ambiguity, insufficient feature representation, or limited receptive fields. To address these issues, we propose a multi-view 3D HPE framework based on a deep semantic graph transformer, which adaptively learns and fuses multi-view significant semantic features of human nodes to improve 3D HPE performance. First, we propose a deep semantic graph transformer encoder to enrich spatial feature information. It deeply mines the position, spatial structure, and skeletal edge knowledge of joints and dynamically learns their correlations. Then, we build a progressive multi-view spatial-temporal feature fusion framework to mitigate joint depth uncertainty. To enhance the pose spatial representation, deep spatial semantic features are interacted and fused across different viewpoints during monocular feature extraction. Furthermore, long-time relevant temporal dependencies are modeled, and spatial-temporal information from all viewpoints is fused to intermediately supervise the depth. Extensive experiments on three 3D HPE benchmarks show that our method achieves state-of-the-art results.
It can effectively enhance pose features, mitigate depth ambiguity in single-view 3D HPE, and improve 3D HPE performance without requiring camera parameters. Codes and models are available at https://github.com/z0911k/SGraFormer.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction
3D human pose estimation (HPE) is a popular research topic in computer vision. It is a crucial tool for analyzing human behavior, since it estimates human pose by predicting the locations of the main human body joints in 3D space. As a result, it is a foundational technology for many human-assisted vision tasks, such as robotics, action recognition, pedestrian re-identification, and virtual/augmented reality. With the advancement of deep learning techniques, 3D HPE methods based on Convolutional Neural Networks (CNNs) have risen to prominence. They are broadly characterized as direct estimation methods (Pavlakos et al. 2017; Luvizon, Tabia, and Picard 2019) and 2D-3D lifting methods (Tekin et al. 2017; Zhou et al. 2019). The latter performs better due to the intermediate supervision of 2D poses and is the current mainstream.
The utilization of multi-view information can mitigate the depth ambiguity problem, yet few graph-based multi-view 3D HPE methods have been developed, hence this paper covers this direction. Most GCN-based 3D HPE approaches (Pavllo et al. 2019; Zhao et al. 2019; Xu and Takano 2021; Liu et al. 2021) only consider the connections between joint points, without taking into account the original position and skeletal edge information of human joints or their effective fusion. Additionally, GCNs have a limited receptive field, resulting in inadequate feature representation. Some works (Cai et al. 2019; Wang et al. 2020; Zeng et al. 2021; Zhang et al. 2022b) incorporated temporal information to enhance the features and alleviate depth uncertainty, but it is difficult for them to build long-time dependencies. The transformer has addressed this issue and has been employed in several 3D HPE methods with decent results (Zheng et al. 2021; Shuai, Wu, and Liu 2022; Zhao et al. 2023; Li et al. 2023). However, these works directly transform and concatenate the coordinates of 2D points into input feature tokens, with no regard for node spatial structural information such as graphs. Only a few methods (Zhao, Wang, and Tian 2022; Ionescu et al. 2023) incorporated graph features into transformers, and they have limited model performance. To address the above problems, we propose a multi-view 3D HPE method based on a deep semantic graph transformer. The network can dynamically learn deep semantic features and their correlations involving the position, spatial structure, and skeletal edge of all human joints.
It progressively fuses significant spatial-temporal information across multiple viewpoints and successfully models the long-time dependencies of relevant frames, with the goal of effectively alleviating the depth ambiguity problem of single-view 3D HPE and improving model performance.

Figure 1: Our semantic graph transformer encoder over the common transformer encoder, which dynamically learns the position, spatial structure, and skeletal edge knowledge of human joints, as well as their correlations.

Many graph-based 3D HPE methods get graph features by manually creating adjacency matrices, which are then fed into CNNs to predict the pose. These graphs mainly focus on the connections between joints while ignoring many significant details like joint locations or edges. The convolutional window has a limited receptive field, making it challenging to model long-time joint dependencies. Also, since its weights are independent of the input, there is no interaction between the graph features, which can be considered static. To deal with these issues, we improve the common transformer and propose a deep semantic graph transformer encoder by introducing several graph features conveying spatial information on joint position features, as shown in Figure 1. To extract deep hidden semantic knowledge, we simultaneously mine the position, spatial structure, and bone edge information associated with human nodes and generate a variety of relevant feature embeddings.
Then, we establish a dynamic communication strategy among these features via a semantic attention mechanism, which fully exploits their correlations and performs effective fusion to enhance feature representation. In order to reduce depth ambiguity and improve model performance, we build a progressive spatial-temporal feature fusion framework across multiple viewpoints. To make the spatial features more expressive, we perform cross-view spatial feature fusion during monocular semantic feature extraction using multi-head attention between features of different views. The mutual supervision and interaction of spatial semantic knowledge from different viewpoints are utilized to enrich the joint features. To supplement the depth information, the spatial and temporal features across multiple viewpoints are progressively fused, and long-time dependencies of relevant frames are dynamically learned and adopted. Extensive experiments demonstrate the efficacy of our method. It significantly mitigates the depth ambiguity problem of single-view 3D HPE and improves the accuracy of 3D pose prediction with the proposed graph features and fusion framework. The main contributions of this paper are:
• We propose a deep semantic graph transformer encoder, which effectively enhances pose feature representation by deeply mining the position, spatial structure, and skeletal edge information of human joints, as well as dynamically learning their correlations.
• We build a progressive multi-view spatial-temporal feature fusion framework. The depth uncertainty of human joints is greatly reduced by performing feature fusion from spatial to temporal and modeling long-time dependencies of relevant images across multiple viewpoints.
• Extensive experiments on three popular 3D HPE benchmarks reveal that our method outperforms several state-of-the-art 3D HPE approaches, significantly mitigates the depth ambiguity problem of single-view 3D HPE, and improves 3D HPE performance.
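As a rough illustration of the encoder idea, the sketch below (ours; the shapes, the additive adjacency bias, and the single-head attention are simplifying assumptions, not the paper's exact design) fuses position and edge embeddings into the joint tokens and lets the skeleton adjacency softly bias the attention scores:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_graph_attention(X, A, E_pos, E_edge, Wq, Wk, Wv):
    """Single-head attention over J joint tokens X (J, d). Position and
    skeletal-edge embeddings (J, d) are added to the tokens, and the
    adjacency A (J, J) is added to the scores as a soft structural bias."""
    H = X + E_pos + E_edge                       # fuse semantic embeddings
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1]) + A  # graph-biased attention
    return softmax(scores) @ V

rng = np.random.default_rng(0)
J, d = 17, 8                                     # e.g. 17 joints, 8-dim tokens
X, E_pos, E_edge = (rng.standard_normal((J, d)) for _ in range(3))
A = (rng.random((J, J)) < 0.2).astype(float)     # toy skeleton adjacency
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = semantic_graph_attention(X, A, E_pos, E_edge, Wq, Wk, Wv)
```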
Related Works

CNN-Based 3D HPE Methods
According to their frameworks, CNN-based 3D HPE methods can be classified into direct estimation and 2D-3D lifting approaches. The direct estimation methods (Luvizon, Picard, and Tabia 2018; Luvizon, Tabia, and Picard 2019; Xiang, Joo, and Sheikh 2019) design an end-to-end network to directly infer the 3D pose from the input image. The 2D-3D lifting methods (Zhou et al. 2019; Cai et al. 2019; Pavllo et al. 2019; Yeh, Hu, and Schwing 2019; Zeng et al. 2021; Liu et al. 2020b) first utilize a 2D pose estimator to obtain the 2D pose, then adopt a 2D-3D lifting network to acquire the 3D pose, which usually performs better due to the intermediate supervision of the 2D pose. Our method follows the 2D-3D lifting line, but alleviates several limitations of CNN-based methods by combining graphs with transformers. 3D HPE can also be divided into single-view and multi-view settings based on the number of camera views. Single-view 3D HPE methods (Luvizon, Tabia, and Picard 2019; Zheng et al. 2021; Li et al. 2022b; Zeng et al. 2021) predict the 3D pose from monocular images, which is an ill-posed problem with depth ambiguity during 2D-3D pose mapping. Multi-view 3D HPE methods (Shuai, Wu, and Liu 2022; He et al. 2020; Ma et al. 2021) have evolved to address this, because knowledge from various views can supplement the missing joint depth, yielding superior results in complex scenes with occlusion or camera motion. Some works (He et al. 2020; Xie, Wang, and Wang 2022; Wang et al. 2021; Iskakov et al. 2019) used epipolar geometry or triangulation to integrate multi-view 2D heatmaps, neglecting considerable joint semantic knowledge and requiring camera parameters to be provided in advance. Some (Bouazizi et al. 2021; Gholami et al. 2022; Kim et al. 2022) fused multi-view features only at the deep network levels, ignoring useful information at the shallow and medium network layers. Some (Iqbal, Molchanov, and Kautz 2020; Zhang et al.
2020) used complex loss functions, making model training challenging. We present a progressive multi-view feature fusion framework from spatial to temporal using a basic L2 loss, with no extrinsic camera parameters required during implementation.

Graph-Based 3D HPE Methods
Current graph-based 3D HPE work mostly employs GCNs to acquire graph features of 2D poses and then predict 3D poses, and is mainly used in single-view 3D HPE (Cai et al. 2019; Zhao et al. 2019; Liu, Zou, and Tang 2020; Liu et al. 2020a; Zou et al. 2020, 2021; Xu and Takano 2021; Liu et al. 2021; Zhang et al. 2022b, 2023a). However, most of these methods (Pavllo et al. 2019; Zhao et al. 2019; Cai et al. 2019; Xu and Takano 2021; Liu et al. 2021; Zhang et al. 2022b, 2023a) only consider certain structural information in the pose graph, ignoring numerous significant signals such as joint locations and bone edges. Some (Cai et al. 2019; Wang et al. 2020; Zeng et al. 2021; Liu et al. 2021; Zhang et al. 2022b, 2023a) use temporal information of related images to help identify the joint depth of the target image, but modeling long-time relationships remains challenging. In contrast to these efforts, we construct a deep semantic graph transformer encoder that dynamically and adaptively learns the location, spatial structure, and skeletal edge properties of human joints. We also create a multi-view information fusion network capable of mining the spatial-temporal dependencies of human nodes in long-time related images.

Transformer-Based 3D HPE Methods
Due to the superior performance of the transformer (Vaswani et al. 2017) in modeling long-range dependencies, 3D HPE work utilizing transformers has increasingly emerged. Currently, its primary application is in single-view 3D HPE (Zheng et al. 2021; Li et al. 2022b,a; Zhao, Wang, and Tian 2022; Zhao et al. 2023; Li et al. 2023; Gong et al. 2023; Shan et al.
2023), and only a few works address the multi-view setting (He et al. 2020; Ma et al. 2021; Shuai, Wu, and Liu 2022; Zhang et al. 2023b; Zhou et al. 2023). The input feature tokens of most of these works are generally converted from 2D joint positions, which ignores much of the spatial structure information of human nodes. A few single-view works (Zhao, Wang, and Tian 2022; Ionescu et al. 2023) combine graphs with transformers, but they only exploit certain structural information and have restricted performance. Our work combines graphs with transformer networks. It first proposes a semantic graph transformer encoder that adaptively learns the position, structure, and edge features of human joints to enhance the node spatial feature representation. A multi-view spatial-temporal feature fusion framework is also developed to address the depth ambiguity issue of single-view 3D HPE and improve model performance.

Method
The framework of the proposed method is illustrated in Figure 2. For the input image sequence I = {I_i}_{i=1}^{V×T} with T frames from V views, we first use an offline 2D pose estimator to detect the 2D pose P_2D ∈ R^{T×J×2} of the human body in each frame, and then feed these 2D poses into the subsequent 2D-3D lifting network to estimate the 3D pose P_3D ∈ R^{T×J×3} of the target image I_i.

Figure 2: The architecture of our method. A deep semantic graph transformer encoder involving the position, spatial structure, and bone edge information of human joints is proposed to enhance spatial feature representation. A progressive multi-view spatial-temporal feature fusion framework is built to mitigate the depth ambiguity of single-view 3D HPE and improve 3D HPE performance.

In our network, we first propose a deep semantic graph transformer encoder to fully extract the position, structure, and skeletal edge features involved in human joints, and utilize the attention mechanism to mine their correlations and dependencies to enhance the representation of spatial features. On this basis, we build a hierarchical multi-view information fusion framework to fully fuse the spatial and temporal features from multiple views, mitigate the depth ambiguity of single-view 3D HPE, and enhance 3D pose prediction accuracy.

Input Feature Embedding
Human body joints mainly involve the location of each joint, the spatial structure formed by all joints, and the bones between connected joints. Most current graph-based 3D HPE work utilizes the adjacency matrices produced by the connections between joints to construct the graph feature, which only depicts part of the structural information of the human joints while disregarding the influence of the positions and bone edges. To enrich the spatial knowledge of pose features, we consider the position, spatial structure, and skeleton edge feature embeddings of human nodes at the same time, and try to dynamically learn and mine their correlations. The details of these features are as follows:

Node position embedding. The transformed features of the 2D joint coordinates obtained by the 2D pose estimator are defined as the node position embedding:

X = ϕ( ∥_{i=1,j=1}^{T,J} {P_ij} ),   (1)

where P_ij = (x_ij, y_ij) and ϕ is a feature conversion function.
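As a concrete illustration, the node position embedding of Eq. (1) can be sketched as follows. The conversion function ϕ is assumed here, for simplicity, to be a single learnable linear layer; the dimensions, weights, and helper names are illustrative, not the paper's actual implementation.

```python
import numpy as np

def node_position_embedding(p2d, W, b):
    """Eq. (1) sketch: embed 2D joint coordinates into node position features.

    p2d : (T, J, 2) array of 2D joint coordinates P_ij = (x_ij, y_ij)
    W   : (2, d) weight of the conversion function phi (assumed linear here)
    b   : (d,) bias of phi
    returns X : (T, J, d) node position embedding
    """
    return p2d @ W + b

# toy setup: 3 frames, 17 joints (as in Human3.6M), small feature dim
T, J, d = 3, 17, 8
rng = np.random.default_rng(0)
X = node_position_embedding(rng.normal(size=(T, J, 2)),
                            rng.normal(size=(2, d)),
                            np.zeros(d))
assert X.shape == (T, J, d)
```

In practice the embedding would be applied per frame inside the encoder; the point here is only the shape contract: every 2D joint coordinate pair becomes a d-dimensional node feature.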
Figure 3: The position, spatial structure, and bone edge feature embeddings of five connected nodes.

The node position embedding describes the initial spatial position information of each node, which is ignored by many GCN-based 3D HPE approaches.

Spatial graph embedding. Based on the connections of human joints, we construct multi-order graph features from global to local to describe the spatial structure of human joints. The global graph denotes the connected relationships between all nodes, whereas the local graph depicts specific special relationships such as similarity. For a 1-order graph adjacency matrix A, if there is a connection between nodes i and j, then its element a_ij = 1, otherwise a_ij = 0. These graph features serve as the node spatial embedding G and characterize the spatial arrangements of the joints, denoted as:

G = ∥_{k=1}^{K} σ( W_k X Ã_k ),   (2)

where Ã = D̂^{-1/2} Â D̂^{-1/2} is the symmetrically normalized version of Â, Â = A + I, A is the interconnecting adjacency matrix, and I denotes self-connections. D̂ is the diagonal degree matrix of Â, W = {w_ij} is a learnable weight matrix, σ denotes a nonlinear activation function, and ∥ is the concatenation of the K kinds of global-to-local graph features.

Edge graph embedding. The bone edge connecting two nodes describes their spatial information as well. We build the edge graph adjacency matrix B based on whether there is a connected bone between two points, and use their distance as the corresponding element of this matrix. This feature is used as the node edge embedding E, depicting the skeletal connection and bone length between two connected joints.
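A minimal sketch of the symmetric normalization and the multi-order spatial graph embedding of Eq. (2). It is written in the standard GCN orientation σ(Ã X W_k); the toy five-joint skeleton, the 2-hop second-order graph, the dimensions, and the ReLU activation are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def normalized_adjacency(A):
    """A_tilde = D_hat^{-1/2} (A + I) D_hat^{-1/2}, as in Eq. (2)."""
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1)                     # degrees of A_hat (all >= 1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def spatial_graph_embedding(X, adjs, Ws):
    """G = concat_k sigma(graph-conv_k(X)): one graph convolution per
    k-order graph, concatenated along the feature axis.

    X : (J, d) node features; adjs : list of (J, J) adjacency matrices;
    Ws : list of (d, d') learnable weights (one per graph order).
    """
    relu = lambda z: np.maximum(z, 0.0)
    parts = [relu(normalized_adjacency(A) @ X @ W) for A, W in zip(adjs, Ws)]
    return np.concatenate(parts, axis=-1)

# toy 5-joint skeleton in the spirit of Figure 3, with K = 2 graph orders
J, d, dk = 5, 4, 3
A1 = np.zeros((J, J))
for i, j in [(0, 1), (1, 2), (1, 3), (3, 4)]:   # bones n1-n2, n2-n3, n2-n4, n4-n5
    A1[i, j] = A1[j, i] = 1.0
A2 = (A1 @ A1 > 0).astype(float) * (1 - np.eye(J))  # 2-hop "local" graph
rng = np.random.default_rng(1)
G = spatial_graph_embedding(rng.normal(size=(J, d)),
                            [A1, A2],
                            [rng.normal(size=(d, dk)) for _ in range(2)])
assert G.shape == (J, 2 * dk)   # K graph orders concatenated
```

Adding I before normalizing keeps each node's own feature in the aggregation, and the symmetric D̂^{-1/2}·D̂^{-1/2} scaling keeps the propagation matrix's spectrum bounded, which is the usual motivation for this form.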
E = σ( W X B̃ ),   (3)

where b_ij is the element of matrix B at position (i, j). If nodes i and j are connected, b_ij = L_ij = ∥P_i − P_j∥_2; otherwise b_ij = 0.

Semantic Graph Transformer Encoder
We improve the common transformer encoder (Vaswani et al. 2017) and propose a semantic graph transformer encoder, as shown in Figure 1. To enhance the pose spatial feature representation, it deeply mines the significant semantic information hidden in the position, spatial, and edge embeddings of all human joints, and dynamically builds an adaptive communication and fusion bridge between them. The attention matrix can be viewed as analogous to a row-normalized adjacency matrix of a directed weighted complete graph. Unlike a static input graph, it aggregates input features dynamically using the attention mechanism. However, there is no direct way to merge the input spatial features when producing the attention matrix. To solve this problem, we incorporate the spatial embedding G into the position embedding X and propose the semantic attention (SA), which is described as:

Q_p = LN(ψ_1(X)), K_p = LN(ψ_2(X)), V_p = LN(ψ_3(X)),   (4)

SA(Q_p, K_p, V_p) = Softmax( Q_p K_p^⊤ / √d + σ(G) ) V_p,   (5)

where ψ is a linear layer and d is the feature dimension. σ(G) serves as a bias term on the position features, indicating the combination of joint spatial and location knowledge. The multi-head semantic attention (MHSA) is also utilized to further enhance the feature, denoted as:

MHSA(X) = ∥_{h=1}^{H} SA(Q_p^h, K_p^h, V_p^h),   (6)

where ∥ is the concatenation of H attention heads. To make the feature more expressive, we further merge the skeletal edge embeddings associated with each pair of connected joints into the position and spatial features. We transform the outputs passing through the MHSA and perform an element-wise product with the layer-normalized and linearly converted edge features. Residual connections are utilized to facilitate network training.
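The graph-biased attention of Eq. (5) can be sketched as below. The (J, J) bias standing in for σ(G) is assumed here to already match the logit shape (the paper does not spell out the projection from the spatial embedding to the logits), and all dimensions are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def semantic_attention(Qp, Kp, Vp, G_bias):
    """Eq. (5) sketch: Softmax(Q_p K_p^T / sqrt(d) + sigma(G)) V_p.

    G_bias plays the role of sigma(G): a (J, J) term added to the
    attention logits so that joint-graph spatial knowledge steers the
    (otherwise purely position-driven) attention pattern.
    """
    d = Qp.shape[-1]
    logits = Qp @ Kp.T / np.sqrt(d) + G_bias
    return softmax(logits) @ Vp

J, d = 17, 8
rng = np.random.default_rng(2)
Qp, Kp, Vp = (rng.normal(size=(J, d)) for _ in range(3))
G_bias = rng.normal(size=(J, J))
out = semantic_attention(Qp, Kp, Vp, G_bias)
assert out.shape == (J, d)
```

Because the bias is added before the softmax, each row of the attention matrix still sums to one, so it keeps the "row-normalized adjacency" interpretation described above while letting the graph prior reweight which joints attend to which.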
Node features from layer l−1 to layer l are updated as:

X^{l′} = MHSA(X^{l−1}) + X^{l−1},   (7)

X^l = X^{l′} + μ_p^l(X^{l′}) + τ_p^l( μ_p^l(X^{l′}) ⊙ LN(ψ_p(E^{l−1})) ),   (8)

E^l = E^{l−1} + τ_e^l( μ_e^l(X^{l′}) ⊙ LN(ψ_e(E^{l−1})) ),   (9)

where ⊙ denotes the element-wise product, and μ and τ are different feature transformation functions in the feedforward network.

Methods Dir. Disc. Eat Greet Phone Photo Pose Purch Sit SitD. Smoke Wait WalkD. Walk WalkT. Avg.
Single-view methods
(Ionescu et al. 2023) 47.9 50.0 47.1 51.3 51.2 59.5 48.7 46.9 56.0 61.9 51.1 48.9 54.3 40.0 42.9 50.5
(Zeng et al. 2021) 43.1 50.4 43.9 45.3 46.1 57.0 46.3 47.6 56.3 61.5 47.7 47.4 53.5 35.4 37.3 47.9
(Geng et al. 2023) 47.8
(Zhao et al. 2023) 45.2
(Liu et al. 2020b) 41.8 44.8 41.1 44.9 47.4 54.1 43.4 42.2 56.2 63.6 45.3 43.5 45.3 31.3 32.2 45.1
(Zheng et al. 2021) 41.5 44.8 39.8 42.5 46.5 51.6 42.1 42.0 53.3 60.7 45.5 43.3 46.1 31.8 32.2 44.3
(Li et al. 2023) 39.1 42.7 38.7 40.3 44.1 50.0 41.4 38.7 53.9 61.6 43.6 40.8 42.5 29.6 30.6 42.5
(Zhang et al. 2022a) 37.6 40.9 37.3 39.7 42.3 49.9 40.1 39.8 51.7 55.0 42.1 39.8 41.0 27.9 27.9 40.9
(Gong et al. 2023) 33.2 36.6 33.0 35.6 37.6 45.1 35.7 35.5 46.4 49.9 37.3 35.6 36.5 24.4 24.1 36.9
(Ci et al. 2023) 31.7 35.4 31.7 32.3 36.4 42.4 32.7 31.5 41.2 52.7 36.5 34.0 36.2 29.5 30.2 35.6
Multi-view methods (camera parameters are given)
(Kadkho. et al. 2021) 39.4 46.9 41.0 42.7 53.6 54.8 41.4 50.0 59.9 78.8 49.8 46.2 51.1 40.5 41.0 49.1
(Luvizon et al. 2022)(+) 31.0 33.0 41.0 34.0 41.0 37.0 37.0 51.0 56.0 43.0 44.0 37.0 33.0 42.0 32.0 39.0
(Bultmann and Behnke 2021) 27.1 29.9 27.0 26.5 31.3 28.9 27.1 29.8 36.5 36.0 30.8 29.3 29.7 27.3 26.3 29.8
(Bartol et al. 2022) 27.5 28.4 29.3 27.5 30.1 28.1 27.9 30.8 32.9 32.5 30.8 29.4 28.5 30.5 30.1 29.1
(He et al. 2020) 25.7 27.7 23.7 24.8 26.9 31.4 24.9 26.5 28.8 31.7 28.2 26.4 23.6 28.3 23.5 26.9
(Qiu et al. 2019) (+) 24.0 26.7 23.2 24.3 24.8 22.8 24.1 28.6 32.1 26.9 31.0 25.6 25.0 28.1 24.4 26.2
(Iskakov et al. 2019) 19.9 20.0 18.9 18.5 20.5 19.4 18.4 22.1 22.5 28.7 21.2 20.8 19.7 22.1 20.2 20.8
Multi-view methods (camera parameters are not given)
(Luvizon et al. 2022)(+) 40.0 36.0 44.0 39.0 44.0 42.0 41.0 66.0 70.0 46.0 49.0 43.0 34.0 46.0 34.0 45.0
(Huang et al. 2020) 26.8 32.0 25.6 52.1 33.3 42.3 25.8 25.9 40.5 76.6 39.1 54.5 35.9 25.1 24.2 37.5
(Iskakov et al. 2019) 27.6 30.3 29.0 29.4 33.1 36.5 27.4 34.8 39.1 54.0 34.4 30.7 36.2 26.2 28.4 33.1
(Remelli et al. 2020) 27.3 32.1 25.0 26.5 29.3 35.4 28.8 31.6 36.4 31.7 31.2 29.9 26.9 33.7 30.4 30.2
(Gordon et al. 2022) 22.0 23.6 24.9 26.7 30.6 35.7 25.1 32.9 29.5 32.5 32.6 26.5 34.7 26.0 27.7 30.2
Ours (CPN, T=27) 26.5 28.3 23.0 25.9 27.2 31.0 25.4 27.2 28.6 33.8 28.6 25.6 30.1 27.1 26.5 27.6
Table 1: Comparisons with state-of-the-art 3D HPE methods on Human3.6M with P1 (mm) using the detected 2D poses. Our results are given when the temporal receptive field is under 27. (+) means using extra data. Best in bold.

Progressive Multi-View Feature Fusion
Single-view 3D HPE suffers from severe depth ambiguity. Since a monocular image cannot determine the depth of human joints, a 2D pose may map to several different 3D poses, making single-view 2D-3D lifting challenging. To address this issue, we take full advantage of the intermediate supervision of information from multiple viewpoints and design a progressive multi-view feature fusion framework from spatial to temporal, which alleviates depth uncertainty and improves 3D pose prediction accuracy.

Cross-view Spatial Fusion (CSF). To improve the spatial feature representation, we perform cross-view spatial feature fusion during the spatial semantic feature extraction of each individual viewpoint. The node and edge features are fused individually to mine richer unique information. Assume the output node features of layer l of views v1 and v2 are X_v1 and X_v2, respectively. We first transform them using linear layers, and then feed them into a general transformer encoder for interaction and fusion. The converted X_v1 serves as the Q and K of the multi-head attention, while the converted X_v2 serves as the V. The fused node feature is generated as:

X′ = MHA( η_1(X_v1), η_2(X_v1), η_3(X_v2) ),   (10)

X″ = X′ + X_v1 + X_v2,   (11)

X_f = MLP( LN(X″) ) + X″,   (12)

where η is a linear layer. X_f is concatenated with X_v1 and X_v2 to generate the final cross-view spatial features X_F, which are fed into the deeper network:

X_F = Concat(X_v1, X_f, X_v2).   (13)

The cross-view fused edge features E_F can be obtained in the same way. This approach not only preserves the distinctive characteristics of each viewpoint, but also fully embeds the fused features across various viewpoints, resulting in richer hidden information extraction.

Multi-view Spatial-Temporal Fusion (MSTF). To further mitigate the depth ambiguity of 3D HPE, we develop a multi-view spatial-temporal fusion module. It deeply integrates the temporal knowledge of related images with the spatial information from multiple viewpoints to supplement the missing depth information of human joints in the target image. For the cross-view fused features X_F and E_F, we first utilize a multi-view cross-channel fusion block (consisting of a batch normalization, a 1x1 convolution layer, and a layer normalization) to better preserve and refine the original spatial information, and convert these features to Y_X and Y_E. These two features are then multiplied and embedded with the frame temporal position encoding E_TPos before being fed into the temporal transformer encoder for further spatial-temporal fusion. Finally, an MLP layer is utilized to regress the final pose features and predict the 3D pose.
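The cross-view spatial fusion of Eqs. (10)-(13) above can be sketched as follows, for a single attention head. The linear layers η are plain matrices here, and the MLP+LN block of Eq. (12) is replaced by a tanh stand-in; these simplifications, and all dimensions, are assumptions for illustration only.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_view_spatial_fusion(Xv1, Xv2, etas):
    """Single-head sketch of Eqs. (10)-(13).

    Q and K come from view v1 while V comes from view v2, so attention
    computed on one view gathers features from the other; the fused
    result is then concatenated with both per-view features.
    """
    e1, e2, e3 = etas                                   # linear layers eta_1..3
    Q, K, V = Xv1 @ e1, Xv1 @ e2, Xv2 @ e3
    X_attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V  # Eq. (10), one head
    X_res = X_attn + Xv1 + Xv2                            # Eq. (11)
    Xf = np.tanh(X_res) + X_res                           # Eq. (12), MLP+LN stand-in
    return np.concatenate([Xv1, Xf, Xv2], axis=-1)        # Eq. (13)

J, d = 17, 8
rng = np.random.default_rng(3)
Xv1, Xv2 = rng.normal(size=(J, d)), rng.normal(size=(J, d))
XF = cross_view_spatial_fusion(Xv1, Xv2,
                               [rng.normal(size=(d, d)) for _ in range(3)])
assert XF.shape == (J, 3 * d)   # view-1 features, fused features, view-2 features
```

The final concatenation is what lets the deeper layers see both the view-specific features and the cross-view fused ones, matching the "preserves the distinctive characteristics of each viewpoint" claim above.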
The spatial-temporal fusion feature Z is defined as:

Y′ = ρ( Y_X ⊗ Y_E ) + E_TPos,   (14)

Y″ = MHA( LN(Y′) ) + Y′,   (15)

Z = MLP( LN(Y″) ) + Y″,   (16)

in which ρ is a feature transformation function and ⊗ is the dot product operation.

Loss Function. We train our model using only the basic Mean Squared Error (MSE) loss function, without any bells and whistles. It minimizes the L2 distance between the estimated human joint points and the corresponding ground-truth joint points:

L = Σ_{i=1}^{T} Σ_{j=1}^{J} ∥ P̂^{3D}_{i,j} − P^{3D}_{i,j} ∥^2,   (17)

where P̂^{3D}_{i,j} and P^{3D}_{i,j} denote the predicted and ground-truth 3D coordinates of the j-th node in the i-th frame, respectively.

Experiments

Datasets and Protocols
Human3.6M. (Ionescu et al. 2013) is the largest and most popular 3D HPE benchmark. It contains 3.6 million 3D human pose images and corresponding annotations, captured by 4 synchronized cameras at 50 Hz with different viewpoints in a controlled indoor environment. The data cover 15 action scenes performed by 11 professional actors, with 17 human joints annotated in each image. Following previous works (Zheng et al. 2021; Li et al. 2022b, 2023), we use subjects S1, S5, S6, S7, and S8 for model training, and S9 and S11 for model testing. Protocol 1 (P1) and Protocol 2 (P2) are used to evaluate our models.

Methods Dir. Disc. Eat Greet Phone Photo Pose Purch Sit SitD. Smoke Wait WalkD. Walk WalkT. Avg.
(Gordon et al. 2022) 22.9
(Shuai, Wu, and Liu 2022) 15.5 17.1 13.7 15.5 14.0 16.2 15.8 16.5 15.8 16.1 14.5 14.5 16.9 14.3 13.7 15.3
Ours 11.7 13.0 10.1 12.1 10.7 13.0 12.1 10.7 10.8 11.9 11.0 11.6 12.8 11.1 12.0 11.7
Table 2: Comparisons with state-of-the-art multi-view 3D HPE methods on Human3.6M with P1 (mm) using the ground-truth 2D poses. Our results are given when the temporal receptive field is under 27. Best in bold.
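As a reference for the evaluation protocols used in the experiments, here is a minimal sketch of Protocol 1 (MPJPE). The array shapes and the toy 30 mm offset are illustrative; Protocol 2 would additionally require a rigid Procrustes alignment (translation, rotation, scale) before computing the same error, which is omitted here.

```python
import numpy as np

def mpjpe(pred, gt):
    """Protocol 1 (P1): mean per-joint position error in millimetres,
    i.e. the mean Euclidean distance between predicted and ground-truth
    3D joints, averaged over all frames and joints.

    pred, gt : (T, J, 3) arrays of 3D joint coordinates in mm.
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()

T, J = 2, 17
gt = np.zeros((T, J, 3))
pred = gt.copy()
pred[..., 0] += 30.0        # shift every joint 30 mm along the x-axis
err = mpjpe(pred, gt)       # every joint is off by exactly 30 mm
assert np.isclose(err, 30.0)
```

Averaging the per-joint Euclidean distances (rather than, say, summing squared errors) is what makes the reported numbers directly interpretable as millimetres of joint error.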
P1 calculates the mean per joint position error (MPJPE) in millimeters, i.e., the mean Euclidean distance between the ground-truth and the predicted joint points. P2 reports the Procrustes-MPJPE (P-MPJPE) error in millimeters, i.e., the MPJPE error between the predicted and ground-truth nodes after rigid alignment in terms of translation, rotation, and scale.

MPI-INF-3DHP. (Mehta et al. 2017) is a large-scale 3D human pose dataset covering both indoor and outdoor scenes. It consists of more than 1.3 million images captured by 14 synchronized cameras from different viewpoints, recording 8 types of activities of 8 participants; 17 nodes are annotated in each image. The four chest-height views of S1-S6 are used for training, and S7 and S8 for testing. Protocol 1, Protocol 2, the Percentage of Correct Keypoints (PCK) with a threshold of 150 mm, and the corresponding Area Under Curve (AUC) are used to evaluate the model.

Ski-Pose PTZ-Camera. (Fasel et al. 2016) is a smaller dataset with challenging in-the-wild images of alpine skiers performing giant slalom runs. It contains images of 6 subjects captured from 6 camera viewpoints. Following the official implementation, we use subjects 1-5 for model training and subject 6 for model testing. Protocol 1 and Protocol 2 are used for model evaluation.

Implementation Details
Our experiments are conducted on the PyTorch platform with 4 GeForce GTX 1080 Ti GPUs. The AMSGrad optimizer is used with a weight decay of 0.1. For model training, the initial learning rate is 0.0002, and the learning rate shrink factor after each epoch is α = 0.98. We set the maximum epoch and batch size to 50 and 1024, respectively. Four-order global-to-local spatial embedding graph features are considered. Four cascaded spatial and temporal transformer encoder layers are used in our framework, respectively. When using the detected 2D pose to obtain the 3D pose, we adopt the Cascaded Pyramid Network (CPN) (Chen et al.
2018) as the 2D pose detector. For all three datasets, our models are trained using only the datasets themselves, without any additional training data.

Comparison With State-of-the-art Methods
Results on Human3.6M. Table 1 shows our comparisons with state-of-the-art (SOTA) single-view and multi-view 3D HPE algorithms on Human3.6M. Our method outperforms all SOTA single-view 3D HPE methods, with MPJPE reduced by 8.0mm (22.5%) compared to (Ci et al. 2023). This finding suggests that the intermediate supervision of multi-view information helps reduce the depth ambiguity of single-view 3D HPE and effectively enhances model performance. Our model surpasses several SOTA multi-view 3D HPE techniques that require camera calibration, but performs somewhat worse than (Qiu et al. 2019) and (Iskakov et al. 2019). This indicates that while our method is competitive, methods using camera calibration still dominate in model performance. However, such methods are difficult to adapt to various scenes because they rely heavily on camera settings. When compared with multi-view 3D HPE methods that do not require camera parameters in advance, we achieve superior results, with MPJPE decreasing by 2.6mm (8.6%) compared to (Gordon et al. 2022). When given the ground-truth 2D pose, as shown in Table 2, our model performance improves further, with MPJPE 15.9mm (57.6%) and 3.6mm (23.5%) lower than Ours (CPN) and (Shuai, Wu, and Liu 2022), respectively. This implies that the 2D pose is essential to 2D-3D lifting and that better 2D poses benefit the model.

Methods Trainset PCK AUC P1(mm) P2(mm)
(Chen et al. 2021) H36M 64.3 31.6
(Luvizon et al. 2022) H36M+ 80.6 42.1 112.1
(Iqbal et al. 2020) H36M+ 80.2 110.8
(Kocabas et al. 2019) 3DHP 77.5 109.0
(Gholami et al. 2022) 3DHP 101.5 76.5
(Wandt et al. 2021) 3DHP 77.0 104.0 70.3
(Kocabas et al. 2019) H36M 76.6 67.5
Ours 3DHP 98.7 90.2 16.9 12.1
Ours H36M 99.9 91.7 10.6 7.6
Table 3: Comparison results on the 3DHP dataset. Best in bold.
It is worth noting that our model was trained without any extra training data, using only the basic L2 loss function. These results demonstrate the effectiveness of our method.

Results on 3DHP. Table 3 compares our method with relevant SOTA approaches on 3DHP. Two alternative scenarios are explored. The first involves both training and testing the model on 3DHP. The other is finetuning the model trained on Human3.6M and testing it on 3DHP. The results show that our method outperforms the others in both scenarios and works better in the second case, with MPJPE and P-MPJPE decreasing by 66.0mm (86.2%) and 59.9mm (88.7%), respectively, in comparison to (Kocabas, Karagoz, and Akbas 2019). Because the Human3.6M dataset is larger and contains more action categories, the second scenario allows the model to learn more about data types, actions, and scenes, bringing better results. These results show that our approach works for both indoor and outdoor datasets.

Methods Trainset P1(mm) P2(mm)
(Chen et al. 2021) H36M 130.2 108.7
(Wandt et al. 2021) Ski 128.1 89.6
(Chen et al. 2021) H36M+ 99.4 74.7
(Rhodin et al. 2018) Ski 85.0
(Gordon et al. 2022) Ski 65.5
Ours Ski 63.2 48.5
Ours H36M 45.3 31.4
Table 4: Comparisons on the Ski-Pose dataset. Best in bold.

position spatial edge P1(mm) P2(mm)
✓ ✓ ✓ 27.6 21.8
✓ ✓ × 28.2 21.8
✓ × ✓ 28.4 21.9
✓ × × 29.4 22.7
Table 5: Impact of various features. Best in bold.

Results on Ski-Pose. Table 4 compares how well our algorithm performs against related methods on Ski-Pose. Scenarios of both training and testing on Ski-Pose, as well as finetuning the model trained on Human3.6M and testing on Ski-Pose, are studied. The results show that our method outperforms the other approaches in both scenarios. In the second scenario, our MPJPE is 20.2mm (30.8%) lower than (Gordon et al. 2022), proving that the model benefits from more training data categories.
In the first scenario, our method is also highly competitive, with MPJPE decreasing by 2.3mm (3.5%) compared to (Gordon et al. 2022). These results show the efficacy of our method in handling challenging in-the-wild data. We present some qualitative results of our method on the three datasets in Figure 5, demonstrating the intuitive effectiveness of our approach in predicting 3D poses. It can be observed that our method performs well even for severely self-occluded poses and challenging complex poses.

Ablation Study
Impact of various features. Table 5 shows how the employed node position, structure, and edge feature embeddings affect the model. When all three features are utilized, the model performs best. The spatial embeddings, which provide the main spatial structural knowledge of human joints, have a greater impact on the model than the edge embeddings, while the edge features also contribute. When we solely use the joint position features, the model performs the worst, proving that the introduction of the related spatial graph features can enrich joint information and improve the feature representation. We also show the attention maps of the three feature embeddings, i.e., position, spatial, and edge, in Figure 4, which demonstrates how the pose information gradually becomes richer and more meaningful as more feature types are embedded, indicating the ability of our model to learn and utilize significant semantic information.

Figure 4: The attention maps of the different feature embeddings (P, P + G, and P + G + E).

CSF MSTF Params(M) FLOPs(G) P1(mm) P2(mm)
✓ ✓ 11.42 0.37 27.6 21.8
× ✓ 11.39 0.35 29.0 23.1
✓ × 6.92 0.12 31.7 24.0
Table 6: Impact of fusion modules. Best in bold.

view number Params(M) FLOPs(G) P1(mm) P2(mm)
1 7.25 0.28 47.8 37.2
2 9.64 0.31 32.0 25.1
3 10.02 0.33 31.1 24.7
4 11.42 0.37 27.6 21.8
Table 7: Impacts of different viewpoints. Best in bold.

Impact of fusion modules.
Table 6 displays the impacts of our proposed feature fusion modules, CSF and MSTF, on 3D HPE performance. The results reveal that removing either fusion module worsens the model, emphasizing the importance of progressive multi-view feature fusion in improving 3D pose prediction. When MSTF is removed, an MLP is used for feature conversion, and the model performs worse than when CSF is removed, indicating that spatial-temporal feature fusion has a greater influence on model improvement than simple spatial feature fusion, and that the temporal knowledge of relevant frames is crucial for reducing depth ambiguity and improving model performance.

Impact of views. The effects of different numbers of input viewpoints on model performance are shown in Table 7. Our model improves steadily as the number of viewpoints increases, and it performs best with 4 viewpoints (since the Human3.6M data contains a maximum of 4 viewpoints, we set the maximum number of viewpoints to 4 here). This suggests that the intermediate supervision of multi-view data can successfully compensate for the missing joint depth in single-view 3D HPE and boost the model. Furthermore, it illustrates that our framework is suitable for multi-view data fusion and capable of accepting an arbitrary number of viewpoints.

Impact of temporal receptive fields (TRFs). We explore the impact of various TRFs on the model in Figure 6. It can be observed that as the TRF grows, the model gradually improves. When the TRF reaches 81, the model is essentially saturated, and further increases do not result in significant performance advancements. This implies that while temporal data benefits the model, utilizing a large TRF necessitates more data to train a better model. When TRF=1, our model already performs well, showing that our method can extract sufficiently rich pose semantic information and somewhat reduce depth ambiguity even in the absence of significant temporal information. As the TRF grows, the model parameters remain almost constant, showing that our method is not sensitive to the TRF and incurs little burden when processing multi-frame data. FLOPs rise slowly with the TRF but remain small even at TRF=243.

Figure 5: The qualitative results on the three datasets (Human3.6M, Ski-Pose, and 3DHP).

Figure 6: Impacts of temporal receptive fields (TRFs). For TRF = 1, 3, 9, 27, 81, 243: FLOPs(G) = 0.02, 0.04, 0.12, 0.37, 1.09, 3.27; Params(M) = 11.35, 11.35, 11.36, 11.38, 11.39, 11.42; P1(mm) = 29.6, 28.2, 28, 27.6, 27.5, 27.3; P2(mm) = 22.5, 22, 21.9, 21.8, 21.4, 21.2.

Computational complexity. Table 8 compares the computational complexity of our model with relevant methods. Even though our TRF is larger than that of (Luvizon, Picard, and Tabia 2022), our model has fewer parameters and yields better performance. When compared with (Gordon et al. 2022) using the same TRF, our model performs better with fewer parameters and FLOPs. These results indicate that our approach strikes a balance between performance and efficiency, demonstrating strong practicality.

Methods TRF Params(M) FLOPs(G) P1(mm)
(Luvizon et al. 2022) 1 23.3 45.0
(Gordon et al. 2022) 27 70.4 8.5 30.2
Ours 27 11.4 0.4 27.6
Table 8: Computation complexity comparison. Best in bold.

Conclusion
In this paper, we developed a deep semantic graph transformer-based multi-view 3D HPE framework, which improved 3D pose prediction performance by adaptively learning and fusing various significant pose semantic features.
First, we developed a deep semantic graph transformer encoder, which dynamically mines the position, spatial structure, and skeletal edge feature embeddings of human joints and their correlations, greatly enhancing the spatial feature representation. Then, we constructed a progressive multi-view spatial-temporal feature fusion framework, successfully merging the spatial-temporal distinguishing and consistent features across multiple viewpoints through various feature fusion modules. Extensive experiments on three 3D HPE benchmarks demonstrated the effectiveness of our approach: it increases the expressiveness of the pose features, and its spatial-temporal feature fusion strategy is highly beneficial in reducing the depth ambiguity of single-view 3D HPE and significantly enhancing 3D HPE performance.

Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant 62106247 and Grant 62371438.

References
Bartol, K.; Bojanić, D.; Petković, T.; and Pribanić, T. 2022. Generalizable Human Pose Triangulation. In CVPR, 11028–11037.
Bouazizi, A.; Wiederer, J.; Kressel, U.; and Belagiannis, V. 2021. Self-Supervised 3D Human Pose Estimation with Multiple-View Geometry. In FG.
Bultmann, S.; and Behnke, S. 2021. Real-Time Multi-View 3D Human Pose Estimation using Semantic Feedback to Smart Edge Sensors. In RSS.
Cai, Y.; Ge, L.; Liu, J.; Cai, J.; Cham, T.; Yuan, J.; and Thalmann, N. 2019. Exploiting spatial-temporal relationships for 3d pose estimation via graph convolutional networks. In ICCV, 2272–2281.
Chen, Y.; Wang, Z.; Peng, Y.; Zhang, Z.; Yu, G.; and Sun, J. 2018. Cascaded pyramid network for multi-person pose estimation. In CVPR, 7103–7112.
Ci, H.; Wu, M.; Zhu, W.; Ma, X.; Dong, H.; Zhong, F.; and Wang, Y. 2023. GFPose: Learning 3D Human Pose Prior With Gradient Fields. In CVPR, 4800–4810.
Fasel, B.; Spörri, J.; Gilgien, M.; Boffi, G.; Chardonnens, J.; Müller, E.; and Aminian, K. 2016. Three-dimensional body and centre of mass kinematics in alpine ski racing using differential gnss and inertial sensors. Remote Sensing, 8(8): 617. Geng, Z.; Wang, C.; Wei, Y.; Liu, Z.; Li, H.; and Hu, H. 2023. Human Pose as Compositional Tokens. In CVPR. Gholami, M.; Rezaei, A.; Rhodin, H.; Ward, R.; and Wang, Z. J. 2022. Self-supervised 3D human pose estimation from video. Neurocomputing, 488: 97–106. Gong, J.; Foo, L. G.; Fan, Z.; Ke, Q.; Rahmani, H.; and Liu, J. 2023. DiffPose: Toward More Reliable 3D Pose Estimation. In CVPR, 13041–13051. Gordon, B.; Raab, S.; Azov, G.; Giryes, R.; and Cohen-Or, D. 2022. Flex: Parameter-free multi-view 3d human motion reconstruction. In ECCV. He, Y.; Yan, R.; Fragkiadaki, K.; and Yu, S.-I. 2020. Epipolar transformers. In CVPR, 7779–7788. Huang, F.; Zeng, A.; Liu, M.; and Lai, Q. 2020. DeepFuse: An IMU-Aware Network for Real-Time 3D Human Pose Estimation from Multi-View Image. In WACV, 429–438. Ionescu, C.; Papava, D.; Olaru, V.; and Sminchisescu, C. 2013. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7): 1325–1339. Ionescu, C.; Papava, D.; Olaru, V.; and Sminchisescu, C. 2023. Pose-Oriented Transformer with Uncertainty-Guided Refinement for 2D-to-3D Human Pose Estimation. AAAI, 37(1): 1296–1304. Iqbal, U.; Molchanov, P.; and Kautz, J. 2020. Weakly-supervised 3d human pose learning via multi-view images in the wild. In CVPR, 5242–5251. Iskakov, K.; Burkov, E.; Lempitsky, V.; and Malkov, Y. 2019. Learnable triangulation of human pose. In ICCV, 7717–7726. Kim, H.-W.; Lee, G.-H.; Oh, M.-S.; and Lee, S.-W. 2022. Cross-View Self-Fusion for Self-Supervised 3D Human Pose Estimation in the Wild. In ACCV, 1385–1402. Kocabas, M.; Karagoz, S.; and Akbas, E. 2019.
Self-supervised learning of 3d human pose using multi-view geometry. In CVPR, 1077–1086. Li, W.; Liu, H.; Ding, R.; Liu, M.; Wang, P.; and Yang, W. 2022a. Exploiting temporal contexts with strided transformer for 3d human pose estimation. IEEE Transactions on Multimedia, 25: 1282–1293. Li, W.; Liu, H.; Tang, H.; and Wang, P. 2023. Multi-Hypothesis Representation Learning for Transformer-Based 3D Human Pose Estimation. Pattern Recognition, 141. Li, W.; Liu, H.; Tang, H.; Wang, P.; and Gool, L. V. 2022b. Mhformer: Multi-hypothesis transformer for 3d human pose estimation. In CVPR, 13147–13156. Liu, J.; Rojas, J.; Li, Y.; Liang, Z.; Guan, Y.; Xi, N.; and Zhu, H. 2021. A Graph Attention Spatio-temporal Convolutional Networks for 3D Human Pose Estimation in Video. In ICRA, 3374–3380. Liu, K.; Ding, R.; Zou, Z.; Wang, L.; and Tang, W. 2020a. A comprehensive study of weight sharing in graph networks for 3D human pose estimation. In ECCV, 318–334. Liu, K.; Zou, Z.; and Tang, W. 2020. Learning Global Pose Features in Graph Convolutional Networks for 3D Human Pose Estimation. In ACCV, 89–105. Liu, R.; Shen, J.; Wang, H.; Chen, C.; Cheung, S.; and Asari, V. 2020b. Attention mechanism exploits temporal contexts: Real-time 3d human pose reconstruction. In CVPR, 5064–5073. Luvizon, D.; Picard, D.; and Tabia, H. 2018. 2D/3D pose estimation and action recognition using multitask deep learning. In CVPR, 5137–5146. Luvizon, D.; Tabia, H.; and Picard, D. 2019. Human pose regression by combining indirect part detection and contextual information. Computers and Graphics, 85: 15–22. Luvizon, D. C.; Picard, D.; and Tabia, H. 2022. Consensus-Based Optimization for 3D Human Pose Estimation in Camera Coordinates. International Journal of Computer Vision, 130: 869–882. Ma, H.; Chen, L.; Kong, D.; Wang, Z.; Liu, X.; Tang, H.; Yan, X.; Xie, Y.; Lin, S.-Y.; and Xie, X. 2021. Transfusion: Cross-view fusion with transformer for 3d human pose estimation. In BMVC.
Mehta, D.; Rhodin, H.; Casas, D.; Fua, P.; Sotnychenko, O.; Xu, W.; and Theobalt, C. 2017. Monocular 3D human pose estimation in the wild using improved CNN supervision. In 3DV. Pavlakos, G.; Zhou, X.; Derpanis, K.; and Daniilidis, K. 2017. Coarse-to-fine volumetric prediction for single-image 3d human pose. In CVPR, 1263–1272. Pavllo, D.; Feichtenhofer, C.; Grangier, D.; and Auli, M. 2019. 3d human pose estimation in video with temporal convolutions and semi-supervised training. In CVPR, 7745–7754. Qiu, H.; Wang, C.; Wang, J.; Wang, N.; and Zeng, W. 2019. Cross View Fusion for 3D Human Pose Estimation. In ICCV, 4342–4351. Remelli, E.; Han, S.; Honari, S.; Fua, P.; and Wang, R. 2020. Lightweight multi-view 3d pose estimation through camera-disentangled representation. In CVPR. Rhodin, H.; Meyer, F.; Spörri, J.; Müller, E.; Constantin, V.; Fua, P.; Katircioglu, I.; and Salzmann, M. 2018. Learning Monocular 3D Human Pose Estimation From Multi-View Images. In CVPR, 8437–8446. Shan, W.; Liu, Z.; Zhang, X.; Wang, Z.; Han, K.; Wang, S.; Ma, S.; and Gao, W. 2023. Diffusion-Based 3D Human Pose Estimation with Multi-Hypothesis Aggregation. arXiv:2303.11579. Shuai, H.; Wu, L.; and Liu, Q. 2022. Adaptive Multi-view and Temporal Fusing Transformer for 3D Human Pose Estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(8): 1–14. Tekin, B.; Marquez-Neila, P.; Salzmann, M.; and Fua, P. 2017. Learning to fuse 2D and 3D image cues for monocular body pose estimation. In ICCV, 3961–3970. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention Is All You Need. In NeurIPS. Wandt, B.; Rudolph, M.; Zell, P.; Rhodin, H.; and Rosenhahn, B. 2021. CanonPose: Self-Supervised Monocular 3D Human Pose Estimation in the Wild. In CVPR, 13294–13304. Wang, C.; Qiu, W.; Qin, W.; and Zeng, W. 2021.
AdaFuse: Adaptive Multiview Fusion for Accurate Human Pose Estimation in the Wild. International Journal of Computer Vision, 129: 703–718. Wang, J.; Yan, S.; Xiong, Y.; and Lin, D. 2020. Motion Guided 3D Pose Estimation from Videos. In ECCV, 764– 780. Xiang, D.; Joo, H.; and Sheikh, Y. 2019. Monocular total capture: Posing face, body, and hands in the wild. In CVPR, 10957–10966. Xie, R.; Wang, C.; and Wang, Y. 2022. Metafuse: A pretrained fusion model for human pose estimation. In CVPR. Xu, T.; and Takano, W. 2021. Graph Stacked Hourglass Networks for 3D Human Pose Estimation. In CVPR, 16105– 16114. Yeh, R.; Hu, Y.; and Schwing, A. 2019. Chirality nets for human pose regression. NeurIPS, 32: 8163–8173. Zeng, A.; Sun, X.; Yang, L.; Zhao, N.; Liu, M.; and Xu, Q. 2021. Learning skeletal graph neural networks for hard 3d pose estimation. In ICCV, 11436–11445. Zhang, J.; Tu, Z.; Yang, J.; Chen, Y.; and Yuan, J. 2022a. MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video. In CVPR, 13232–13242. Zhang, L.; Lu, F.; Zhou, K.; Zhou, X.-D.; and Shi, Y. 2023a. Hierarchical Spatial-temporal Adaptive Graph Fusion for Monocular 3D Human Pose Estimation. IEEE Signal Processing Letters, 1–5. Zhang, L.; Shao, X.; Li, Z.; Zhou, X.-D.; and Shi, Y. 2022b. Spatio-temporal Attention Graph for Monocular 3D Human Pose Estimation. In ICIP, 1231–1235. Zhang, L.; Zhou, K.; Liu, L.; Li, Z.; Zhao, X.; Zhou, X.-D.; and Shi, Y. 2023b. Progressive Multi-view Fusion for 3D Human Pose Estimation. In ICIP, 1600–1604. Zhang, Y.; An, L.; Yu, T.; Li, X.; Li, K.; and Liu, Y. 2020. 4d association graph for realtime multi-person motion capture using multiple video cameras. In CVPR, 1321–1330. Zhao, L.; Peng, X.; Tian, Y.; Kapadia, M.; and Metaxas, D. 2019. Semantic graph convolutional networks for 3D human pose regression. In ICCV, 3425–3435. Zhao, Q.; Zheng, C.; Liu, M.; Wang, P.; and Chen, C. 2023. 
PoseFormerV2: Exploring Frequency Domain for Efficient and Robust 3D Human Pose Estimation. In CVPR. Zhao, W.; Wang, W.; and Tian, Y. 2022. GraFormer: Graph-Oriented Transformer for 3D Pose Estimation. In CVPR, 20438–20447. Zheng, C.; Zhu, S.; Mendieta, M.; Yang, T.; Chen, C.; and Ding, Z. 2021. 3d human pose estimation with spatial and temporal transformers. In CVPR, 11656–11665. Zhou, K.; Han, X.; Jiang, N.; Jia, K.; and Lu, J. 2019. HEMlets pose: Learning part-centric heatmap triplets for accurate 3D human pose estimation. In ICCV, 2344–2353. Zhou, K.; Zhang, L.; Lu, F.; Zhou, X.-D.; and Shi, Y. 2023. Efficient Hierarchical Multi-view Fusion Transformer for 3D Human Pose Estimation. In ACMMM, 7512–7520. Zou, L.; Huang, Z.; Gu, N.; Wang, F.; Yang, Z.; and Wang, G. 2021. GMDN: A lightweight graph-based mixture density network for 3D human pose regression. Computers and Graphics, 95: 115–122. Zou, Z.; Liu, K.; Wang, L.; and Tang, W. 2020. High-order graph convolutional networks for 3D human pose estimation. In BMVC.
Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model

Lingjun Zhang1,2*†, Xinyuan Chen2*, Yaohui Wang2, Yue Lu1‡, Yu Qiao2
1East China Normal University, Shanghai, China
2Shanghai Artificial Intelligence Laboratory, Shanghai, China
51215904033@stu.ecnu.edu.cn, chenxinyuan@pjlab.org.cn, wangyaohui@pjlab.org.cn, ylu@cs.ecnu.edu.cn, yu.qiao@siat.ac.cn

Abstract

Recently, diffusion-based image generation methods have been credited for their remarkable text-to-image generation capabilities, while still facing challenges in accurately generating multilingual scene text images. To tackle this problem, we propose Diff-Text, a training-free scene text generation framework for any language. Our model outputs a photo-realistic image given a text of any language along with a textual description of a scene. The model leverages rendered sketch images as priors, thus arousing the potential multilingual-generation ability of the pre-trained Stable Diffusion. Based on the observation of the influence of the cross-attention map on object placement in generated images, we propose a localized attention constraint in the cross-attention layer to address the unreasonable positioning problem of scene text. Additionally, we introduce contrastive image-level prompts to further refine the position of the textual region and achieve more accurate scene text generation. Experiments demonstrate that our method outperforms the existing method in both the accuracy of text recognition and the naturalness of foreground-background blending. Code: https://github.com/ecnuljzhang/brush-your-text.

Introduction

Minority languages, such as Arabic, Thai, and Kazakh, are not only numerous (an estimated 5,000 to 7,000), but their low-resource nature also impedes the progress of computer vision, particularly in the domain of image generation.
In recent years, with the advancement of diffusion models (Ho, Jain, and Abbeel 2020), significant progress has been made in generating realistic and prompt-aligned images (Rombach et al. 2022; Ramesh et al. 2022; Saharia et al. 2022). However, achieving accurate scene text generation remains challenging due to the fine-grained structure within scene text. Recent efforts utilize diffusion models to overcome the limitations of traditional methods and enhance text rendering quality. For instance, Imagen (Saharia et al. 2022) and DeepFloyd (DeepFloydLab 2023) use the T5 series to generate text better. While these methods are capable of generating structurally accurate scene text, they demand a large amount of training data, which is not feasible for minority languages, and they still lack control over the generated scene text. Some researchers (Wu et al. 2019; Yang, Huang, and Lin 2020; Lee et al. 2021; Krishnan et al. 2023) exploit GAN-based (Goodfellow et al. 2014) scene text editing methods to generate scene text, which is more controllable. However, these methods are confined to generating scene text at the string level and do not possess the capability to generate complete scene compositions. To tackle these challenges, we propose a training-free framework, Diff-Text, a simple yet highly effective approach for multilingual scene text image generation. Our proposed framework inherits the off-the-shelf diffusion model while specializing in text generation through a localized attention constraint method along with positive and negative image-level prompts. Specifically, given a text to be rendered, we first render it into a sketch image and then detect the edge map, which is used as the control input of our model.

*Equal Contribution. †Work done as an intern at Shanghai AI Laboratory. ‡Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
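The preprocessing just described (render the input text to a black-on-white sketch, then extract an edge map as the control input) can be sketched in miniature. The paper uses real font rendering and Canny edge detection; the toy glyph and the 4-neighbour boundary check below are stand-ins for both, so the helper names are illustrative only:

```python
# Toy stand-in for the preprocessing module: the paper renders the input
# text to a black-on-white sketch image and runs Canny edge detection.
# Here the "rendered text" is a small binary grid (1 = ink), and boundary
# pixels are found with a simple 4-neighbour check instead of Canny.

def render_fake_sketch(h=8, w=8):
    """Hypothetical stand-in for rendering the input text with a font."""
    img = [[0] * w for _ in range(h)]
    for r in range(2, 6):          # a 4x4 solid "stroke"
        for c in range(2, 6):
            img[r][c] = 1
    return img

def edge_map(img):
    """Mark ink pixels that touch background (crude Canny substitute)."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if img[r][c] != 1:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < h and 0 <= nc < w) or img[nr][nc] == 0:
                    edges[r][c] = 1
                    break
    return edges

sketch = render_fake_sketch()
edges = edge_map(sketch)
# Only the border of the 4x4 stroke is marked; its interior is not.
```

In the actual pipeline, `render_fake_sketch` would correspond to drawing the input text with a randomly chosen font, and `edge_map` to OpenCV-style Canny detection on the rendered sketch.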
Our model generates a realistic scene image according to the control input and the prompt input, which contains a description of a scene. However, the control inputs are easily treated as grotesque patterns instead of texts on signs or billboards. Recent research (Hertz et al. 2022) suggests that the input prompts exert their influence on the object placement within the generated images via the cross-attention mechanism. Inspired by this observation, we first identify the keywords in the prompt that correspond to the textual region, such as "sign", "notice", and "billboard", and then constrain the cross-attention maps for these keywords to the textual region. Furthermore, we introduce a positive image-level prompt that further refines the placement of the textual region and a negative image-level prompt that enhances the alignment between the generated scene text and the edge image, thereby ensuring greater accuracy in the generated scene text. Experiments demonstrate the effectiveness and robustness of our method.

Related Works

Scene Text Generation automates the creation of scene text images from provided textual content. Notably, SynthText (Gupta, Vedaldi, and Zisserman 2016) is widely used to train scene text recognition models. It employs existing models to analyze images, identifies compatible text regions in semantically coherent areas, and places processed text using a designated font. Furthermore, SynthText3D (Liao et al. 2020) and UnrealText (Long and Yao 2020) generate scene text images from a virtual realm using a 3D graphics engine.

Figure 1: Diff-Text has the ability to generate accurate and realistic scene text images from a given scene text of any language along with a textual description of any scene.
However, these methods directly overlay text onto the background, resulting in artifacts in the text appearance, which leads to a significant disparity between the synthesized and real image distributions. Some methods introduce GANs for realistic image generation. SF-GAN (Zhan, Zhu, and Lu 2019) introduces geometry and appearance synthesizers for realistic scene text generation, but struggles with accurate text placement. Scene text editing methods (Wu et al. 2019; Yang, Huang, and Lin 2020; Roy et al. 2020; Zhang et al. 2021; Lee et al. 2021; Xie et al. 2021; Krishnan et al. 2023; He et al. 2022) attempt to tackle this problem. However, these methods concentrate only on generating the text region rather than the entire image.

Text-to-Image Generation is a promising direction that has seen significant progress in generating realistic and prompt-aligned images (Rombach et al. 2022; Ramesh et al. 2022; Saharia et al. 2022), as well as videos (Singer et al. 2023; Ho et al. 2022; Blattmann et al. 2023; Ge et al. 2023; Wang et al. 2023a; Chen et al. 2023b; Wang et al. 2023b), through the application of diffusion models (Ho, Jain, and Abbeel 2020). GLIDE (Nichol et al. 2022) introduces text conditions into the diffusion process using classifier-free guidance. DALL-E 2 (Ramesh et al. 2022) adopts a diffusion prior module on CLIP text latents and a cascaded diffusion decoder to generate high-resolution images. Imagen (Saharia et al. 2022) emphasizes language understanding and proposes to use a large T5 language model for better semantic representation. Stable Diffusion (Rombach et al. 2022) is an open-sourced model that projects the image into latent space with a VAE and applies the diffusion process to generate feature maps at the latent level. In addition to text conditions, a realm of research explores controlling diffusion models through image-level conditions. Certain image editing methods (Meng et al. 2021; Kawar et al. 2023; Mokady et al.
2023; Brooks, Holynski, and Efros 2023) introduce images to be edited as conditions in the denoising process. Image inpainting (Balaji et al. 2022; Avrahami, Lischinski, and Fried 2022; Lugmayr et al. 2022; Bau et al. 2021) constitutes another type of editing method, aiming to generate coherent missing portions of an image based on a specified region while preserving the remaining areas. Additionally, SDG (Liu et al. 2023) represents an alternative approach involving extra conditions, which injects semantic input using a guidance function to direct the sampling process of an unconditional DDPM. Some methods (Chen et al. 2023a; Ma et al. 2023) utilize textual layouts or masks as conditions for scene text generation. However, these approaches need extensive labeled datasets of scene text for training, which poses a challenge for low-resource languages. Moreover, ControlNet (Zhang, Rao, and Agrawala 2023) and T2I-adapter (Mou et al. 2023) are dedicated to offering a comprehensive solution for controlling the generation process by leveraging auxiliary information like edge maps, color maps, segmentation maps, etc. These methods exhibit remarkable control and yield impressive results in terms of image quality. In this work, we perceive scene text generation as a text-to-image task with supplementary control (scene text) and incorporate the rendered scene text as an image-level condition within the diffusion model.

Figure 2: Our model employs input text (I_text) of any language to serve as the foreground element. The text is subsequently rendered into a sketch image, and its edges are detected to derive an edge image, which acts as an input of the control branch. Concurrently, our model takes in an input prompt (I_prompt) as the description of the background scene. After T denoising iterations, the model generates the final output image (O_image). Localized attention constraint and contrastive image-level prompts are employed in the U-Net block's cross-attention layer to enhance textual region positioning for precise scene text generation.

Figure 3: Details of the proposed localized attention constraint method. The "×" signifies matrix multiplication, while "⊙" denotes element-wise multiplication.

Methods

Overall Framework

We introduce a training-free scene text generation framework named Diff-Text, applicable to any language. Given an input text I_text and a prompt I_prompt, our proposed framework can generate scene text images that encompass: (1) precise textual content of I_text; (2) scenes that align with the provided prompt I_prompt; and (3) seamless integration of textual content with the depicted scenes. The architecture of our framework is presented in Fig. 2 and contains a preprocessing module, a U-Net branch, and a control branch. Initially, the provided input text I_text undergoes preprocessing and is rendered into a sketch image denoted as I_s, depicting black text against a whiteboard backdrop with a randomly chosen font. Subsequently, the Canny edge detection algorithm is applied to derive an edge image denoted as I_e. This image, serving as an image-level condition, is then utilized as input for the control branch. Simultaneously, the provided input prompt I_prompt is processed by the text encoder, serving as a text-level condition. Under the guidance of both image-level and text-level conditions, the U-Net branch predicts the noise z_t at time t and utilizes z_t to reconstruct the output image from Gaussian noise. Due to the independence of the control input and the prompt input for the U-Net network, there is a risk of incorrect fusion between image-level and text-level controls. For instance, the network might mistake the edges of the character "O" as part of a circular pattern.
This issue is particularly prominent in the generation of scene text images for minority languages. To address this concern, we introduce a localized attention constraint method tailored for scene text generation. Simultaneously, to ensure a more rational fusion and enhance the precision of image-level control, we propose a contrastive image-level prompt. The localized attention constraint is utilized to confine the cross-attention maps associated with text-region descriptors from the prompt input, such as "sign" or "billboard". These maps are limited to areas near the text through a pre-processing module that generates random bounding boxes. The contrastive image-level prompt comprises a positive image-level prompt and a negative image-level prompt.

Figure 4: Visualizations of scene text generation in English and Chinese, compared with existing methods. The first three columns represent the generated results of English scene text, while the last three columns depict the generated results of Chinese scene text.

Localized Attention Constraint

Our goal is to place scene text sensibly within scenes, such as on billboards or street signs. To achieve this, we introduce the localized attention constraint method. As shown in Fig. 3, during one forward pass at each timestep, we traverse through all layers of the diffusion model and manipulate the cross-attention map. The cross-attention map is denoted as M_t ∈ R^{HW×d_t}, where HW refers to the width and height of z_t at different scales, and d_t represents the maximum length of tokens. In the framework, the positions of text within I_s are either user-specified or randomly placed, which means obtaining the corresponding text bounding box is straightforward. We use this bounding box to derive a mask image of the text region, which we define as m_bbx ∈ R^{H×W}.
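The bounding-box mask m_bbx and its use on the per-token attention maps can be sketched with plain Python lists standing in for tensors; the λ = 6.0 value comes from the paper's reported settings, while the grid layout and helper names here are illustrative:

```python
# Sketch of the localized attention constraint: for tokens i in I (words
# like "sign" or "billboard"), the cross-attention map M[i] is kept only
# inside the text bounding-box mask m_bbx and amplified by lambda.
# Maps are H x W grids (flattened to HW in the paper's notation).

LAMBDA = 6.0  # value reported in the paper's experimental settings

def bbox_mask(h, w, box):
    """m_bbx: 1 inside the text bounding box (r0, c0, r1, c1), else 0."""
    r0, c0, r1, c1 = box
    return [[1.0 if r0 <= r < r1 and c0 <= c < c1 else 0.0
             for c in range(w)] for r in range(h)]

def constrain(maps, token_ids, mask):
    """Return M* where M*[i] = lambda * M[i] (.) m_bbx for i in token_ids."""
    out = {}
    for i, m in maps.items():
        if i in token_ids:
            out[i] = [[LAMBDA * v * mask[r][c] for c, v in enumerate(row)]
                      for r, row in enumerate(m)]
        else:
            out[i] = m  # maps of other tokens pass through unchanged
    return out

mask = bbox_mask(4, 4, (1, 1, 3, 3))
maps = {0: [[0.5] * 4 for _ in range(4)],   # token 0: e.g. "sign"
        1: [[0.5] * 4 for _ in range(4)]}   # token 1: unrelated word
new = constrain(maps, {0}, mask)
```

In the framework, the token index set would contain the prompt positions of text-region descriptors, and the masked maps would replace the originals inside every cross-attention layer.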
Then, assuming that the indices of tokens corresponding to words that may contain text in the prompt are represented by the set I, we resize m_bbx to HW and compute the new cross-attention map M*_t = {λ × M^i_t ⊙ m_bbx | ∀i ∈ I}. Finally, M*_t is involved in the calculation of z*_{t-1}. After applying the localized attention constraint, we find a sensible and appropriate position to place the scene text. This approach also enhances the natural integration of foreground text with the background, resulting in more realistic scene text generation.

Figure 5: Visualizations of scene text generation in Russian, Thai, and Arabic, compared with existing methods. The first and second columns present the results of Russian scene text, the third and fourth columns depict Thai scene text, and the final two columns illustrate Arabic scene text.

Contrastive Image-level Prompts

The limited availability of images for minority languages within the training dataset of Stable Diffusion frequently results in the misinterpretation of edge images as object outlines. This misinterpretation often leads to the introduction of additional strokes, ultimately resulting in unrecognizable scene text generation. Indeed, the effectiveness of the localized attention constraint method depends on the presence of objects in the generated image that can accommodate the placement of text. In other words, if M^i_t, i ∈ I, approaches 0 and M*_t remains the same as M_t, the localized attention constraint will not yield the desired output. To tackle this issue, we introduce the contrastive image-level prompt. In this regard, we consider the edge image I_e as the foundation of the image-level prompt, which we extend into a positive image-level prompt (PIP) and a negative image-level prompt (NIP).
The edge image for PIP is the original edge image incorporating the depiction of a bounding box, while the sketch image for NIP is purely white. These two conditional inputs, denoted as I′_e and ∅, respectively, serve as the basis for the contrastive image-level prompt. They are then incorporated into the denoising process through the following equation:

z_{t-1} = ε̃(z_t, I_e, I_prompt)
        = ε(z_t, ∅, ∅) + s_cfg (ε(z_t, ∅, I_prompt) − ε(z_t, ∅, ∅)) + s_neg (ε′(z_t, I_e, I_prompt) − ε(z_t, ∅, I_prompt)),
ε′(z_t, I_e, I_prompt) = ε(z_t, I_e, I_prompt) + s_pos (ε(z_t, I′_e, I_prompt) − ε(z_t, I_e, I_prompt)),    (1)

where s_pos and s_neg are used to finely adjust the respective effects of the PIP term and the NIP term on the predictions, which will be discussed in our ablation study (see Fig. 6). PIP provides a subtle hint to the network, compelling it to include objects suitable for placing scene text in the generated image. On the other hand, NIP is used to control the clarity and visibility of the scene text. Through this contrastive image-level prompt, we provide the model with both a negative direction and a positive direction, which enables the model to generate clear and precise scene text while maintaining a rational background.

Experiments

Implementation Details

Experimental Settings. Our model is built with Diffusers. The pre-trained models are "runwayml/stable-diffusion-v1-5" and "lllyasviel/sd-controlnet-canny". During inference, the size of the output images is 512×512. We use one A100 GPU for inference. The localized attention constraint is applied in both the U-Net branch and the control branch. The λ in the localized attention constraint is 6.0. The s_cfg, s_neg and s_pos are respectively 7.5, 2.0 and 0.1.
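The noise combination of Eq. (1), with the scales reported above (s_cfg = 7.5, s_neg = 2.0, s_pos = 0.1), can be sketched with scalars standing in for the predicted noise tensors; `combine` and its argument names are illustrative stand-ins, not the paper's code:

```python
# Sketch of the contrastive image-level prompt combination (Eq. 1).
# Scalars stand in for the noise tensors; each argument is a stand-in
# for one conditioned U-Net call eps(z_t, edge_condition, prompt).
S_CFG, S_NEG, S_POS = 7.5, 2.0, 0.1  # settings reported in the paper

def combine(eps_uncond, eps_prompt, eps_edge, eps_edge_pos):
    """eps_uncond = eps(z_t, None, None); eps_prompt = eps(z_t, None, P);
    eps_edge = eps(z_t, Ie, P); eps_edge_pos = eps(z_t, Ie', P)."""
    # PIP: nudge the edge-conditioned prediction toward the bbox-augmented edges
    eps_prime = eps_edge + S_POS * (eps_edge_pos - eps_edge)
    # classifier-free guidance plus the NIP term
    return (eps_uncond
            + S_CFG * (eps_prompt - eps_uncond)
            + S_NEG * (eps_prime - eps_prompt))

# Sanity check: when every prediction agrees, guidance changes nothing
# and the combination returns that shared prediction.
same = combine(0.3, 0.3, 0.3, 0.3)
```

Raising S_POS pushes the denoising toward including a sign-like surface, while raising S_NEG pushes it away from the prompt-only prediction toward the edge-conditioned one, sharpening the rendered text.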
The wordlist for the localized attention constraint includes "sign", "billboard", and so on. More details are shown in the Appendix at https://arxiv.org/abs/2312.12232.

Language  Metrics        Stable Diffusion  DeepFloyd  TextDiffuser  ControlNet  Ours
Arabic    CLIPScore      0.7961            0.7335     0.8084        0.8067      0.8138
          Accuracy       0.000             0.000      0.000         3.291       33.13
          Edit accuracy  16.65             13.17      11.58         34.80       72.93
Thai      CLIPScore      0.7733            0.7926     0.7873        0.8059      0.8164
          Accuracy       0.000             0.000      0.000         7.160       38.41
          Edit accuracy  10.70             14.51      11.64         36.34       82.97
Russian   CLIPScore      0.7948            0.8201     0.8335        0.8306      0.8632
          Accuracy       0.000             0.000      1.375         9.790       39.29
          Edit accuracy  14.60             26.05      37.72         39.21       80.58
English   CLIPScore      0.7879            0.8658     0.8666        0.7334      0.8649
          Accuracy       0.083             16.67      43.91         12.88       61.03
          Edit accuracy  32.75             66.20      84.84         40.04       89.52
Chinese   CLIPScore      0.8265            0.8347     0.8201        0.8312      0.8351
          Accuracy       0.000             0.000      0.000         5.875       32.40
          Edit accuracy  3.890             6.830      9.598         26.81       68.75
Table 1: Quantitative comparison with existing methods across five languages. The bold numbers represent the best results among all compared methods.

Figure 6: The image-level prompt comprises both positive and negative components, denoted as s_pos and s_neg, respectively. s_pos controls the intensity of "sign" occurrences in the background, while s_neg controls the clarity of the scene text.

Evaluation

Due to the lack of publicly available multilingual benchmarks, we use the multilingual vocabularies in the work of Zhang et al. (Zhang et al. 2021) and Xie et al. (Xie et al. 2023) as the input texts and generate the corresponding input prompts using ChatGPT (Ouyang et al. 2022). We select five languages and filter out words with fewer than five characters. From the remaining set, we randomly choose 3000 words for each language. Ultimately, we generate 15,000 multilingual images for evaluation for each comparative method. We conduct both quantitative and qualitative comparative experiments.
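The edit-accuracy numbers in Table 1 compare OCR-recognized text against the input text. A minimal scorer is sketched below; the 1 − distance / max-length normalization is a common convention and an assumption about the paper's exact formula:

```python
# Minimal normalized edit distance scorer for recognized vs. ground-truth
# text. "Edit accuracy" here is 1 - dist / max(len), a common convention
# (the paper's exact normalization may differ).

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def edit_accuracy(pred, gt):
    """1.0 for an exact match, decreasing with the edit distance."""
    if not pred and not gt:
        return 1.0
    return 1.0 - levenshtein(pred, gt) / max(len(pred), len(gt))
```

For example, `edit_accuracy("abc", "abc")` is 1.0, while a prediction that needs several character edits scores proportionally lower; averaging this score over the 3000 words per language yields a table-style edit-accuracy percentage.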
In the quantitative comparison, we utilize three metrics: CLIP Score (Hessel et al. 2021; Huang et al. 2021; Radford et al. 2021), accuracy, and normalized edit distance (Shi et al. 2017). To ensure equitable capabilities across all languages for OCR tools, we use a multilingual OCR, namely easy-OCR (JadedAI 2020).

Comparison with Existing Methods

In this subsection, we compare our method with existing open-source methods capable of scene text generation, i.e., Stable Diffusion (Rombach et al. 2022), DeepFloyd (DeepFloydLab 2023), TextDiffuser (Chen et al. 2023a) and ControlNet (Zhang, Rao, and Agrawala 2023). DeepFloyd uses two super-resolution modules to generate higher-resolution 1024 × 1024 images, compared with the 512 × 512 images generated by the other methods. We employ the template-to-image mode of the TextDiffuser method and utilize our sketch image as the template image.

Quantitative Comparison. In the quantitative comparison, we selected the following three metrics: (1) CLIPScore is used to measure the similarity between the generated images and the input prompts. (2) Accuracy evaluation employs OCR tools to detect and calculate the recognition accuracy, assessing whether the scene text in the generated images matches the input text. (3) Normalized edit distance is used to compare the similarity between the scene text in the generated images and the input text. We demonstrate the quantitative results compared with existing methods in Table 1. As shown in Table 1, although training-free, our method still achieves a competitive CLIP score and significantly enhances the recognition accuracy of generated images. For each specific language, our method demonstrates an average improvement in accuracy of 25% compared to the existing methods.

Figure 7: Visualization of ablation experiments on the localized attention constraint method.
The heatmaps illustrate the average cross-attention map corresponding to different tokens across all diffusion steps.

Qualitative Comparison. Fig. 4 and Fig. 5 show the comparison between our method and existing methods in generating scene text images for majority and minority languages, respectively. From Fig. 4, it can be observed that for English, which has a significant presence in the training dataset of existing methods, the generated images possess a certain level of recognizability. However, Stable Diffusion and DeepFloyd may exhibit instances of generating multiple or missing characters. TextDiffuser, with the sketch image as an input template, addresses the issue of multiple and missing characters. Nevertheless, due to insufficient strictness in control, TextDiffuser still encounters problems with erroneous character generation. Despite utilizing edge images for strict control, ControlNet still results in the generation of scene text appearing in unreasonable positions or having additional strokes. In contrast, our method can generate clear, precise, and reasonably positioned scene text. For the languages with a smaller presence in the training dataset (Chinese, Arabic, Thai, Russian), Stable Diffusion, DeepFloyd, and TextDiffuser fail to generate recognizable scene text. TextDiffuser may generate some English letters instead of similar characters from other languages. ControlNet still encounters issues of generating text in unreasonable positions, and when dealing with characters resembling special patterns, such as Arabic characters, ControlNet merges the text with the background, rendering the generated text unidentifiable. Our method, on the other hand, successfully generates scene text images for all languages.

Method          CLIP    Accuracy  Edit accuracy
W/o constraint  0.8065  31.42     74.30
W/o PIP         0.7935  27.68     70.92
W/o NIP         0.7718  10.39     50.95
Full model      0.8108  35.48     77.22
Table 2: Quantitative ablation studies on the localized attention constraint and the image-level prompt. "W/o constraint" denotes the exclusion of the localized attention constraint method, "W/o NIP" denotes the exclusion of the negative image-level prompt, and "W/o PIP" denotes the exclusion of the positive image-level prompt. The results indicate that our full model achieves the best generation results.

Ablation Study

To validate the effectiveness of the proposed localized attention constraint and contrastive image-level prompt, we conduct an ablation study. Table 2 presents the quantitative analysis of the ablation experiments. As demonstrated in Table 2, it is evident that the full model achieves the best performance in both the CLIP score and the accuracy of generated characters. In addition, we also conduct qualitative analysis for the ablation study, and the results are presented in Fig. 6 and 7. The seed is fixed at 2345 to generate visualized results. In Fig. 6, we discuss the impact of different parameters for PIP and NIP (i.e., s_pos and s_neg) on the generated images. From Fig. 6, it can be observed that as s_pos increases, the sign in the background becomes more prominent, but an excessively high s_pos can cause the sign to appear too pronounced and flat. On the other hand, as s_neg increases, the scene text in the foreground becomes clearer, but an excessively high s_neg can result in scene text floating in unreasonable positions. Fig. 7 showcases the results with and without our localized attention constraint. It can be observed that when we constrain the cross-attention maps corresponding to "sign" and "logo" to the scene text region, the generated images appear more reasonable and realistic.

Discussion and Conclusion

Currently, the bounding box of the text region is obtained either through user specification or random generation, and the tokens in the prompt that require the localized attention constraint are determined by manually given wordlists.
In future work, it is possible to integrate these two parts with the GPT-4 API for a more rational selection of bounding boxes and wordlists. However, our model still faces challenges in generating small-scale scene text and achieving precise text color control. Moreover, the generated scene text still occasionally includes unintended textual elements. In this paper, we introduce a training-free framework, named Diff-Text, designed for scene text generation in any language. A localized attention constraint method and a contrastive image-level prompt are proposed to enhance the precision, clarity, and coherence of generated scene text images.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7221

Acknowledgements
This work was jointly supported by the National Natural Science Foundation of China under Grant No. 62102150 and No. 62176091, and the National Key Research and Development Program of China under Grant No. 2020AAA0107903.

References
Avrahami, O.; Lischinski, D.; and Fried, O. 2022. Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18208–18218.
Balaji, Y.; Nah, S.; Huang, X.; Vahdat, A.; Song, J.; Kreis, K.; Aittala, M.; Aila, T.; Laine, S.; Catanzaro, B.; et al. 2022. eDiffi: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324.
Bau, D.; Andonian, A.; Cui, A.; Park, Y.; Jahanian, A.; Oliva, A.; and Torralba, A. 2021. Paint by word. arXiv preprint arXiv:2103.10951.
Blattmann, A.; Rombach, R.; Ling, H.; Dockhorn, T.; Kim, S. W.; Fidler, S.; and Kreis, K. 2023. Align your latents: High-resolution video synthesis with latent diffusion models. In Computer Vision and Pattern Recognition.
Brooks, T.; Holynski, A.; and Efros, A. A. 2023. InstructPix2Pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18392–18402.
Chen, J.; Huang, Y.; Lv, T.; Cui, L.; Chen, Q.; and Wei, F. 2023a. TextDiffuser: Diffusion Models as Text Painters. arXiv preprint arXiv:2305.10855.
Chen, X.; Wang, Y.; Zhang, L.; Zhuang, S.; Ma, X.; Yu, J.; Wang, Y.; Lin, D.; Qiao, Y.; and Liu, Z. 2023b. SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction. arXiv preprint arXiv:2310.20700.
DeepFloydLab. 2023. DeepFloyd IF. https://github.com/deep-floyd/IF. Accessed: 2023-1-20.
Ge, S.; Nah, S.; Liu, G.; Poon, T.; Tao, A.; Catanzaro, B.; Jacobs, D.; Huang, J.-B.; Liu, M.-Y.; and Balaji, Y. 2023. Preserve your own correlation: A noise prior for video diffusion models. arXiv preprint arXiv:2305.10474.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Advances in Neural Information Processing Systems, 27.
Gupta, A.; Vedaldi, A.; and Zisserman, A. 2016. Synthetic data for text localisation in natural images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2315–2324.
He, H.; Chen, X.; Wang, C.; Liu, J.; Du, B.; Tao, D.; and Qiao, Y. 2022. Diff-Font: Diffusion Model for Robust One-Shot Font Generation. arXiv preprint arXiv:2212.05895.
Hertz, A.; Mokady, R.; Tenenbaum, J.; Aberman, K.; Pritch, Y.; and Cohen-Or, D. 2022. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626.
Hessel, J.; Holtzman, A.; Forbes, M.; Bras, R. L.; and Choi, Y. 2021. CLIPScore: A Reference-free Evaluation Metric for Image Captioning. In EMNLP.
Ho, J.; Chan, W.; Saharia, C.; Whang, J.; Gao, R.; Gritsenko, A.; Kingma, D. P.; Poole, B.; Norouzi, M.; Fleet, D. J.; et al. 2022. Imagen Video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33: 6840–6851.
Huang, Y.; Xue, H.; Liu, B.; and Lu, Y. 2021. Unifying multimodal transformer for bi-directional image and text generation. In Proceedings of the 29th ACM International Conference on Multimedia, 1138–1147.
JadedAI. 2020. EasyOCR. https://github.com/JaidedAI/EasyOCR. Accessed: 2020-3-14.
Kawar, B.; Zada, S.; Lang, O.; Tov, O.; Chang, H.; Dekel, T.; Mosseri, I.; and Irani, M. 2023. Imagic: Text-based real image editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6007–6017.
Krishnan, P.; Kovvuri, R.; Pang, G.; Vassilev, B.; and Hassner, T. 2023. Textstylebrush: Transfer of text aesthetics from a single example. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Lee, J.; Kim, Y.; Kim, S.; Yim, M.; Shin, S.; Lee, G.; and Park, S. 2021. Rewritenet: Realistic scene text image generation via editing text in real-world image. arXiv preprint arXiv:2107.11041.
Liao, M.; Song, B.; Long, S.; He, M.; Yao, C.; and Bai, X. 2020. SynthText3D: Synthesizing scene text images from 3D virtual worlds. Science China Information Sciences, 63: 1–14.
Liu, X.; Park, D. H.; Azadi, S.; Zhang, G.; Chopikyan, A.; Hu, Y.; Shi, H.; Rohrbach, A.; and Darrell, T. 2023. More control for free! Image synthesis with semantic diffusion guidance. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 289–299.
Long, S.; and Yao, C. 2020. UnrealText: Synthesizing realistic scene text images from the unreal world. arXiv preprint arXiv:2003.10608.
Lugmayr, A.; Danelljan, M.; Romero, A.; Yu, F.; Timofte, R.; and Van Gool, L. 2022. RePaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11461–11471.
Ma, J.; Zhao, M.; Chen, C.; Wang, R.; Niu, D.; Lu, H.; and Lin, X. 2023. GlyphDraw: Learning to Draw Chinese Characters in Image Synthesis Models Coherently. arXiv preprint arXiv:2303.17870.
Meng, C.; He, Y.; Song, Y.; Song, J.; Wu, J.; Zhu, J.-Y.; and Ermon, S. 2021. SDEdit: Guided image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073.
Mokady, R.; Hertz, A.; Aberman, K.; Pritch, Y.; and Cohen-Or, D. 2023. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6038–6047.
Mou, C.; Wang, X.; Xie, L.; Zhang, J.; Qi, Z.; Shan, Y.; and Qie, X. 2023. T2I-Adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453.
Nichol, A. Q.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2022. GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. In International Conference on Machine Learning, 16784–16804. PMLR.
Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR.
Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684–10695.
Roy, P.; Bhattacharya, S.; Ghosh, S.; and Pal, U. 2020. STEFANN: Scene text editor using font adaptive neural network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13228–13237.
Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, K.; Gontijo Lopes, R.; Karagol Ayan, B.; Salimans, T.; et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35: 36479–36494.
Shi, B.; Yao, C.; Liao, M.; Yang, M.; Xu, P.; Cui, L.; Belongie, S.; Lu, S.; and Bai, X. 2017. ICDAR2017 competition on reading Chinese text in the wild (RCTW-17). In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), volume 1, 1429–1434. IEEE.
Singer, U.; Polyak, A.; Hayes, T.; Yin, X.; An, J.; Zhang, S.; Hu, Q.; Yang, H.; Ashual, O.; Gafni, O.; Parikh, D.; Gupta, S.; and Taigman, Y. 2023. Make-A-Video: Text-to-Video Generation without Text-Video Data. In ICLR.
Wang, Y.; Chen, X.; Ma, X.; Zhou, S.; Huang, Z.; Wang, Y.; Yang, C.; He, Y.; Yu, J.; Yang, P.; et al. 2023a. LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models. arXiv preprint arXiv:2309.15103.
Wang, Y.; Ma, X.; Chen, X.; Dantcheva, A.; Dai, B.; and Qiao, Y. 2023b. LEO: Generative Latent Image Animator for Human Video Synthesis. arXiv preprint arXiv:2305.03989.
Wu, L.; Zhang, C.; Liu, J.; Han, J.; Liu, J.; Ding, E.; and Bai, X. 2019. Editing text in the wild. In Proceedings of the 27th ACM International Conference on Multimedia, 1500–1508.
Xie, Y.; Chen, X.; Sun, L.; and Lu, Y. 2021. DG-Font: Deformable Generative Networks for Unsupervised Font Generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5130–5140.
Xie, Y.; Chen, X.; Zhan, H.; and Shivakum, P. 2023. Weakly Supervised Scene Text Generation for Low-resource Languages. arXiv preprint arXiv:2306.14269.
Yang, Q.; Huang, J.; and Lin, W. 2020. SwapText: Image based texts transfer in scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14700–14709.
Zhan, F.; Zhu, H.; and Lu, S. 2019. Spatial fusion GAN for image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3653–3662.
Zhang, L.; Chen, X.; Xie, Y.; and Lu, Y. 2021. Scene Text Transfer for Cross-Language. In International Conference on Image and Graphics, 552–564. Springer.
Zhang, L.; Rao, A.; and Agrawala, M. 2023. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3836–3847.
IRPruneDet: Efficient Infrared Small Target Detection via Wavelet Structure-Regularized Soft Channel Pruning
Mingjin Zhang1, Handi Yang1*, Jie Guo1, Yunsong Li1, Xinbo Gao1, Jing Zhang2*
1Xidian University 2The University of Sydney
mjinzhang@xidian.edu.cn, 22011210777@stu.xidian.edu.cn, jguo@mail.xidian.edu.cn, ysli@mail.xidian.edu.cn, xbgao@mail.xidian.edu.cn, jing.zhang1@sydney.edu.au

Abstract
Infrared Small Target Detection (IRSTD) refers to detecting faint targets in infrared images, which has achieved notable progress with the advent of deep learning. However, the drive for improved detection accuracy has led to larger, intricate models with redundant parameters, causing storage and computation inefficiencies. In this pioneering study, we introduce the concept of utilizing network pruning to enhance the efficiency of IRSTD. Due to the challenge posed by low signal-to-noise ratios and the absence of detailed semantic information in infrared images, directly applying existing pruning techniques yields suboptimal performance. To address this, we propose a novel wavelet structure-regularized soft channel pruning method, giving rise to the efficient IRPruneDet model. Our approach involves representing the weight matrix in the wavelet domain and formulating a wavelet channel pruning strategy. We incorporate wavelet regularization to induce structural sparsity without incurring extra memory usage. Moreover, we design a soft channel reconstruction method that preserves important target information against premature pruning, thereby ensuring an optimal sparse structure while maintaining overall sparsity. Through extensive experiments on two widely-used benchmarks, our IRPruneDet method surpasses established techniques in both model complexity and accuracy. Specifically, when employing U-Net as the baseline network, IRPruneDet achieves a 64.13% reduction in parameters and a 51.19% decrease in FLOPs, while improving IoU from 73.31% to 75.12% and nIoU from 70.92% to 74.30%.
The code is available at https://github.com/hd0013/IRPruneDet.

Introduction
Single frame infrared small target (SIRST) detection plays an irreplaceable role in many practical applications, such as traffic management and maritime rescue (Cuccurullo et al. 2012; Law et al. 2016; Zhang and Tao 2020). When dealing with target detection tasks (Zou et al. 2023) in visible images, challenges arise under conditions of weak illumination and occlusion. In contrast, infrared images excel at capturing target information due to their penetrating infrared thermal radiation. Nevertheless, SIRST comes with stringent criteria (Chapple et al. 1999): target size below 0.15% of the total image, contrast ratio under 15%, and signal-to-noise ratio (SNR) below 1.5. Consequently, overcoming these obstacles involving small targets, noise, clutter, and object interference has sparked significant research interest in infrared small target detection (IRSTD) in recent years.

*Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Comparison between the proposed IRPruneDet and other deep learning-based models on the NUAA-SIRST dataset. The area of the gray circles denotes the number of FLOPs. IRPruneDet achieves the highest IoU while maintaining the lowest parameters and FLOPs.

To cope with the above difficulties in IRSTD, traditional methods (Dai and Wu 2017; Han et al. 2020) usually employ filtering techniques to filter out background interference or image enhancement methods to enhance targets. However, these methods heavily rely on hyper-parameter tuning and exhibit certain limitations when confronted with complex scenes characterized by variations in illumination, complex backgrounds, and target occlusion.
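The filter-based baselines mentioned above can be illustrated with a minimal white top-hat sketch. This is a generic rendition of the classic approach (cf. the top-hat filter of Bai and Zhou 2010), not the paper's method; the structuring-element size and the threshold factor are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def tophat_detect(ir_image, struct_size=5, k=4.0):
    """White top-hat baseline for small-target enhancement.

    Subtracting a grey opening suppresses background structures larger
    than the structuring element, leaving bright residuals at candidate
    small targets. `struct_size` and the threshold factor `k` are
    illustrative choices, not values taken from the paper.
    """
    opened = ndimage.grey_opening(ir_image, size=(struct_size, struct_size))
    tophat = ir_image - opened            # non-negative residual
    thr = tophat.mean() + k * tophat.std()
    return tophat > thr                   # binary detection mask
```

On a smooth background ramp with one bright 2x2 blob, the opening removes the blob, so the residual exceeds the threshold only at the target pixels.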
In light of the rapid advancements in deep learning, an increasing number of deep convolutional neural network (CNN)-based models have demonstrated superior performance in IRSTD. For instance, the pioneering implementation of deep CNNs for IRSTD can be attributed to the miss detection vs. false alarm (MDvsFA) model (Wang, Zhou, and Wang 2019). It employs two generative adversarial networks (GANs) (Goodfellow et al. 2020) to separately reduce MD and FA while requiring a considerable number of computations (see Fig. 1). Dai et al. achieve significant advancements over MDvsFA by replacing the GANs with a U-Net in the asymmetric context modulation (ACM) approach (Dai et al. 2021a). Furthermore, they propose an attentional local contrast network (ALCNet) (Dai et al. 2021b) to effectively combine discriminative and model-driven methods by increasing the network size. By propagating the target features to deeper layers of the network, Zhang et al. (Zhang et al. 2022b) present a feature compensation and cross-level correlation network (FC3-Net), which achieves superior detection performance while having many more parameters and computations than ALCNet and ACM. As the complexity of a model increases, the number of model parameters and computations grows significantly, leading to inefficiencies in storage, memory, and computation. Directly deploying such models on platforms with limited resources is impractical. Therefore, there is an imperative demand to explore a lightweight network architecture for efficient IRSTD. Recently, model compression methods have been proposed to devise lightweight networks for various tasks (Han et al. 2015; Rastegari et al. 2016; Denton et al. 2014; Hinton, Vinyals, and Dean 2015). Among them, structured pruning (He et al. 2019b) has garnered recognition for its ability to achieve practical storage space savings and inference acceleration on general-purpose hardware.
This approach can prune redundant filters in convolutional layers. Nevertheless, the existing pruning methods encounter challenges that hinder their direct applicability to CNN-based IRSTD models. (1) The conventional criteria used to evaluate channel importance are not applicable to the IRSTD task. Currently, pruning methods predominantly rely on criteria that assess the informative importance within channels, assuming that channels with greater magnitude are more vital (Huang et al. 2021). However, due to the low SNR in infrared images, channels containing background noise and clutter often exhibit higher magnitudes. Consequently, relying on conventional criteria may erroneously prune important channels that have low magnitudes but contain crucial information about small targets, leading to the discarding of important channels. (2) During the iterative channel pruning process, certain channels that contain important information may be pruned prematurely and deactivated permanently. In the IRSTD task, the targets typically have small sizes, resulting in a limited number of channels carrying important information. The erroneous pruning of critical channels can lead to a drastic decrease in detection accuracy. In this study, we introduce the concept of utilizing network pruning to enhance the efficiency of IRSTD for the first time. Specifically, we propose a novel wavelet structureregularized soft channel pruning method, resulting in the efficient IRPruneDet model (see Fig. 2). Firstly, we design a wavelet channel pruning (WCP) strategy based on the wavelet-based sparse constraint. Through wavelet analysis of convolutional layers, weight matrices are decomposed into low and high-frequency components. By applying l1norm regularization to these wavelet coefficients, we assess channel importance according to their magnitude. 
To manage memory, we propose a memory-efficient wavelet-based pruning criterion within an energy minimization framework, treating the wavelet transform of weight matrices akin to their differential operators. Additionally, we avoid premature pruning of channels holding crucial SIRST information by implementing a soft channel reconstruction (SCR) method. This involves dynamically retaining the parameters of the convolutional layers with the highest detection accuracy during pruning. For soft channel reconstruction, we combine channel reconstruction with the pruning process, randomly interpolating between recovered pruned channel parameters and those associated with optimal performance. Our method assesses the importance of all channels to obtain the desired network structure while adhering to sparsity constraints. Experiments on two widely-used benchmarks demonstrate that IRPruneDet outperforms existing methods in detection accuracy, while significantly reducing FLOPs and parameters.

In summary, the contribution of this study is three-fold.
(1) We propose an efficient IRPruneDet model for IRSTD. To the best of our knowledge, IRPruneDet is the first attempt to design a lightweight network architecture tailored for the IRSTD task via network pruning. Using U-Net18 as the baseline network, IRPruneDet reduces parameters by 64.13% and FLOPs by 51.19% while improving IoU from 73.31% to 75.12% and nIoU from 70.92% to 74.30% on the NUAA-SIRST dataset.
(2) We design a WCP strategy by representing the weight matrix in the wavelet domain. To encourage structural sparsity without imposing additional memory requirements, we design and incorporate a novel wavelet regularization penalty into the network.
(3) We develop an SCR method for the pruning process. It can mitigate the risk of prematurely and incorrectly pruning channels that carry critical information, thereby ensuring the preservation of important network features throughout the pruning procedure.
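For reference, the conventional magnitude criterion that the introduction argues is ill-suited to low-SNR infrared imagery (the PFEC-style l1-norm rule of Li et al. 2016) can be sketched as follows; the array layout and the pruning ratio are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def l1_channel_importance(weight):
    """PFEC-style magnitude criterion: score each output channel of a
    conv layer by the l1-norm of its filter.
    `weight` has the assumed layout (C_out, C_in, K, K)."""
    return np.abs(weight).sum(axis=(1, 2, 3))

def prune_mask(weight, ratio=0.5):
    """Return a boolean keep-mask retaining the (1 - ratio) fraction of
    channels with the largest l1 scores."""
    scores = l1_channel_importance(weight)
    n_keep = max(1, int(round(len(scores) * (1.0 - ratio))))
    keep = np.argsort(scores)[::-1][:n_keep]
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask
```

The paper's observation is that under this rule, channels carrying small but crucial target responses score low and get discarded, which motivates replacing the raw magnitude with the wavelet-domain criterion of WCP.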
Related Work
Infrared Small Target Detection
The IRSTD algorithms can be categorized into traditional and deep learning-based methods. Traditional techniques focus on extracting distinctive features in infrared (IR) images. These methods encompass filter-based methods like the max-median filter (Deshpande et al. 1999) and top-hat filter (Bai and Zhou 2010), low-rank methods such as weighted strengthened local contrast measure (WSLCM) (Han et al. 2020) and tri-layer local contrast measure (TTLCM) (Chen et al. 2013), along with HVS-based methods such as infrared patch-image (IPI) (Gao et al. 2013), non-convex rank approximation minimization (NARM) (Zhang et al. 2018), and the partial sum of the tensor nuclear norm (PSTNN) (Zhang and Peng 2019). With the advent of deep learning, CNN-based techniques (Zhang et al. 2022a; Li et al. 2022a; McIntosh, Venkataramanan, and Mahalanobis 2020; Zhang et al. 2023) are introduced into the IRSTD task. For instance, MDvsFA (Wang, Zhou, and Wang 2019) eschews the traditional approach of relying on a single goal to jointly reduce MD and FA by decomposing it into two subtasks with two GANs (Goodfellow et al. 2020). To preserve feature information, Dai et al. (Dai et al. 2021a) propose an ACM model with global context feedback and a modulation path using pointwise channel attention to exchange high-level semantics and low-level details. Furthermore, Dai et al. (Dai et al. 2021b) introduce a feature map cyclic shifting scheme and present an ALCNet with increased network size. Zhang et al. (Zhang et al. 2022b) develop an even larger network, FC3-Net. However, these methods enhance small IR target detection by scaling up network size to increase model capacity and extract semantic features. This strategy often leads to increased model size, memory footprint, and computations.

Figure 2: Illustration of the proposed IRPruneDet method. The pruning process of a specific convolutional layer is used to illustrate the dynamic iterative process of IRPruneDet, which includes wavelet channel pruning (WCP), training, soft channel reconstruction (SCR), and final hard channel pruning to obtain a sparse model. WCP assesses channel importance based on the l1-norm of the wavelet decomposition coefficients obtained by convolving the weight matrix with the Haar wavelet.

Neural Network Pruning
Pruning (Han et al. 2015) removes unimportant structures in the network to produce a sparse and efficient model. Channel pruning (Li et al. 2016), a subset of this technique, falls into two categories based on channel status after pruning: hard and soft pruning. Hard pruning permanently deactivates channels identified by specific criteria (Sui et al. 2021; He et al. 2019b; Liu et al. 2017; Wang, Li, and Wang 2021; Tang et al. 2020; He et al. 2021). For instance, Li et al. (Li et al. 2016) introduce pruning filters for efficient ConvNets (PFEC), which calculates channel importance via the l1-norm. HRank (Lin et al. 2020) suggests evaluating channel importance using the rank of convolutional layer weights. In contrast, soft pruning involves dynamic channel pruning without permanent discarding (He et al. 2019a; Guo et al. 2020; Lin et al. 2019; Ding et al. 2019; He et al. 2022). Channels' weights are approximated to 0, permitting their participation in future training and pruning iterations.
For example, soft filter pruning (SFP) (He et al. 2018) generates masks based on channel norms over time, allowing updates in subsequent phases. Operation-aware soft channel pruning (SCP) (Kang and Han 2020) relaxes discrete masks into differentiable forms for joint learning of model parameters and dynamic masks. To the best of our knowledge, no prior research has explored network pruning within the context of IRSTD. Our goal is to bridge this gap by utilizing the idea of network pruning to develop an efficient IRSTD model.

Methodology
Preliminaries
Given an infrared image X_IR, the IRSTD problem based on deep learning can be formulated as:

Y_IR = f_det(X_IR; Θ),  (1)

where f_det is a trainable deep neural network, Θ represents the model parameters, and Y_IR denotes the segmentation mask of targets in the infrared image. Without loss of generality, it is assumed that there exist L layers of parameters, where the l-th convolutional layer can be parameterized by {W^(l) ∈ R^{C_l × C_{l−1} × K × K}, 1 ≤ l ≤ L}. Here, W^(l), C_l, C_{l−1}, and K represent the learnable weight matrix (i.e., model parameters), the number of output channels, the number of input channels, and the kernel size of the l-th convolutional layer, respectively. During the process of channel pruning, we can conceptualize the above model parameters W^(l) as a series of filters F^(l), represented as the set {F^(l)_j ∈ R^{C_{l−1} × K × K}, 1 ≤ j ≤ C_l, 1 ≤ l ≤ L}.

Wavelet Channel Pruning
In the IRSTD task, the detection accuracy is often reduced due to the loss of edge information of the targets. Accordingly, while pruning the CNN-based network, it is essential to preserve channels that contribute more to the edge information, ensuring that the network's performance remains intact despite the reduction in model size. In the image processing domain, wavelet-based analysis (Mallat 1989) has been widely used to decompose an image into wavelet coefficients containing low-frequency and high-frequency information. In the context of IRSTD, our investigation suggests that channels with higher high-frequency coefficients, as transformed by the wavelet framework, possess more edge information of the targets. Furthermore, the l_p-norm of the wavelet decomposition coefficients of a channel, computed for each convolutional layer, can effectively indicate the significance of the channel in terms of containing the edge information of the targets. To this end, we propose to regularize the l1-norm of the wavelet decomposition coefficients, which promotes sparsity by pushing the weights of unimportant channels to zero during training. It can be formulated as follows:

min_F (1/N) Σ_{i=1}^N L(Y_i, f(X_i, F)) + λ Σ_{i=1}^N ‖HF‖_1,  (2)

where L(·) denotes the loss function, f(·), X_i, and Y_i are the prediction, the input, and the ground-truth label, respectively, H represents the two-dimensional discrete wavelet transform (DWT), and λ is a hyper-parameter balancing the two loss terms. However, to apply the wavelet-based coefficient regularization penalty to the network, additional memory is required to store the results of the wavelet transform for each filter F in the network. To address this issue, we propose to approximate the DWT as a differential operator. Specifically, we adopt the two-dimensional Haar tight frame system (Chui 1992). When processing F, particularly F^(l)_j ∈ R^{C_{l−1} × K × K}, we convert it to a 2D shape f_u ∈ R^{M × M} by tiling the K × K filters (i.e., if C_{l−1} = p × p, then M = p · K). The Haar basis is defined as:

S_{0,0} = (1/4)[1, 1; 1, 1],   S_{0,1} = (1/4)[1, −1; 1, −1],
S_{1,0} = (1/4)[1, 1; −1, −1], S_{1,1} = (1/4)[1, −1; −1, 1].

Denoting Ω as a domain in the two-dimensional real space R², there exists a sufficiently smooth u ∈ L²(Ω) (Li et al. 2022b) related to f_u, i.e., (f_u)[i, j] = u(x_i, y_j), s.t.
(x_i, y_j) = (ih, jh), and 0 ≤ i, j ≤ N,  (3)

where h is the reciprocal of N. Consequently, the regularization penalty term for a certain filter F in the network can be expressed as:

‖HF‖_1 = h² Σ_{i,j} ((2/h)² (|(S_{0,1}[−·] ⊗ f_u)[i, j]|² + |(S_{1,0}[−·] ⊗ f_u)[i, j]|²))^{1/2}.  (4)

The Taylor expansion of u at the point (x_i, y_j) gives:

(2/h)(S_{0,1}[−·] ⊗ f_u)[i, j] = (1/2h)(u(x_i, y_j) − u(x_i − h, y_j)) + (1/2h)(u(x_i, y_j − h) − u(x_i − h, y_j − h)),  (5)

(2/h)(S_{1,0}[−·] ⊗ f_u)[i, j] = (1/2h)(u(x_i, y_j) − u(x_i, y_j − h)) + (1/2h)(u(x_i − h, y_j) − u(x_i − h, y_j − h)).  (6)

In the case where h approaches 0, we can derive the following equation from the definition of partial derivatives:

(2/h)(S_{0,1}[−·] ⊗ f_u)[i, j] + (2/h)(S_{1,0}[−·] ⊗ f_u)[i, j] = u_x(x_i, y_j) + u_y(x_i, y_j).  (7)

Based on Eq. (4), we have:

‖HF‖_1 = ∫_Ω √(u_x² + u_y²) dx dy.  (8)

Thus, we can combine the wavelet-based sparse regularization term in Eq. (2) with the gradient of the weight matrix itself, reducing the memory overhead while efficiently learning sparse network structures. Note that the identity ∫_Ω |∇u| dx dy = ∫_Ω √(u_x² + u_y²) dx dy holds due to the sufficient smoothness of u. Hence, we can establish the connection between WCP and the differential operator of weight matrices within the framework of energy functional minimization:

min_F (1/N) Σ_{i=1}^N L(Y_i, f(X_i, F)) + λ Σ_{i=1}^N ‖DF‖_1,  (9)

where D is the first-order differential operator of the channel weight matrix. Since we apply regularization to the channel weight matrix after wavelet decomposition using the gradient of the channel weight matrix itself, no extra memory footprint is needed to store the results of the wavelet transform. Meanwhile, wavelet analysis retains channels with crucial edge information of targets, striking a balance between detection accuracy and model compression efficiency.
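The equivalence behind Eqs. (4)–(8) can be checked numerically: scaled by 2/h, the two high-frequency Haar responses recover the partial derivatives of a smooth u, so the discrete penalty approximates the gradient energy ∫_Ω √(u_x² + u_y²) dx dy. Below is a sketch using sliding 2×2 windows; the exact tiling and stride used in the paper's implementation are an assumption.

```python
import numpy as np

# High-frequency Haar analysis filters from the paper's basis (Eq. 4);
# each responds to first differences along one of the two grid axes.
S01 = 0.25 * np.array([[1.0, -1.0], [1.0, -1.0]])
S10 = 0.25 * np.array([[1.0, 1.0], [-1.0, -1.0]])

def haar_highfreq_energy(f, h):
    """Discrete analogue of ||HF||_1 in Eq. (4): accumulate, over 2x2
    windows of f, the magnitude of the two scaled high-frequency Haar
    responses, weighted by the cell area h^2."""
    n = f.shape[0] - 1
    total = 0.0
    for i in range(n):
        for j in range(n):
            patch = f[i:i + 2, j:j + 2]
            r01 = float((S01 * patch).sum())  # difference along one axis
            r10 = float((S10 * patch).sum())  # difference along the other
            total += h ** 2 * np.hypot(2.0 / h * r01, 2.0 / h * r10)
    return total
```

For the linear field u(x, y) = 3x + 4y sampled on the unit square, the accumulated energy equals √(3² + 4²) = 5, which is exactly ∫_Ω |∇u| dx dy, matching Eq. (8).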
Soft Channel Reconstruction
There is a challenge in existing iterative channel pruning approaches (Guo, Yao, and Chen 2016): valuable, information-rich channels can be prematurely discarded, resulting in reduced accuracy and generalization of the pruned model. This issue gains prominence when pruning channels in IRSTD models, given the small target size, which often implies that only a few crucial channels hold vital information. Prematurely discarding these key channels during pruning substantially diminishes the pruned model's accuracy. While dynamic soft channel pruning methods (He et al. 2018) maintain channel vitality by not entirely discarding pruned channels, they lack the assurance of finding the globally optimal solution, leading to important channels becoming inactive. In the iterative pruning and training process of IRSTD, we observe that the channels retained as active after each pruning iteration can be further trained. These channels offer valuable information until the next pruning round. Unfortunately, due to short intervals between pruning rounds or insufficient activation, these channels are frequently pruned again, causing the iterative process to stagnate. To address this challenge, we propose the SCR method, which dynamically saves the model parameters corresponding to the best detection accuracy throughout the pruning process. Prior to each pruning stage, SCR reconstructs previously pruned channels in a random manner, effectively reactivating them. The channels subjected to SCR can be expressed as follows:

F_SCR = α F_best + (1 − α) F_SC,  (10)

where F_best is the previously best channel parameters and F_SC represents the pruned channel parameters. In SCR, the channel F_SCR after soft channel reconstruction can be regarded as a cosine annealing interpolation between F_best and F_SC.
In this sense, α can be expressed as:

α = (1/2) (1 + cos((1 − Δβ_SCR/π) · (t_epoch π / T) + Δβ_SCR)),  (11)

where T denotes the total number of training iterations for SCR, and Δβ_SCR represents the interpolation control factor of SCR. By changing the value of Δβ_SCR, the initial value of α can be empirically controlled between 0 and 1, and it decays to 0 during the SCR training process. This reflects that in the early stages of SCR, F_best should carry a certain importance in channel recovery. As the model training and pruning progress, the sparse model gradually converges, and the current model parameters play an increasingly important role in the SCR process. Thus, the strong activation effect of SCR on pruned channels is most pronounced in the early stages of pruning, and its effectiveness gradually decreases as the sparse model converges. Furthermore, it should be noted that the SCR strategy is not applied to every pruned channel, as doing so may result in certain channels with low scores being constantly suppressed under the WCP criterion. To promote the diversity of the channels, SCR randomly selects a portion of the pruned channels for channel reconstruction each time, and also applies a cosine decay to the ratio of channels to be reconstructed, gradually reducing it as the model converges. The dynamic channel reconstruction rate in SCR can be represented as:

β = (β_0/2) (1 + cos(t_epoch π / T)),

where β_0 is the initial channel reconstruction rate. The scale of random channel reconstruction under a given global sparsity constraint can be controlled by adjusting β_0, so as to find the optimal sparse model using SCR.

Experiments
Experimental Details
Dataset. We adopt the NUAA-SIRST (Dai et al. 2021a) and IRSTD-1k (Zhang et al. 2022c) datasets for evaluation. NUAA-SIRST consists of 427 infrared images with a total of 480 infrared targets. IRSTD-1k comprises 1,001 infrared images, encompassing various target categories.
For each dataset, we divide the IR images into three disjoint subsets: 50% for training, 30% for validation, and 20% for testing.

Evaluation Metrics. The evaluation metrics can be categorized into two types: detection accuracy-based metrics and model complexity-based metrics. The former includes pixel-level metrics such as Intersection over Union (IoU) and Normalized IoU (nIoU) (Dai et al. 2021a), and object-level metrics such as Probability of Detection (Pd) and False-Alarm Rate (Fa) (Li et al. 2022a). The latter consists of the number of FLOPs and model parameters.

Implementation Details. We resize each IR image in the NUAA-SIRST and IRSTD-1k datasets to 512×512, following common practice (Zhang et al. 2022a; Dai et al. 2021b). For the pruning and training process, we utilize AdaGrad as the optimizer with a learning rate of 0.01.

| Method | NUAA-SIRST IoU↑ | nIoU↑ | Pd↑ | Fa↓ | IRSTD-1k IoU↑ | nIoU↑ | Pd↑ | Fa↓ |
|---|---|---|---|---|---|---|---|---|
| Top-Hat | 1.508 | 3.084 | 79.74 | 16456 | 10.06 | 7.438 | 75.11 | 1432 |
| Max-Median | 6.022 | 25.35 | 84.34 | 774.3 | 6.998 | 3.051 | 65.21 | 59.73 |
| WSLCM | 6.393 | 28.31 | 88.74 | 4462 | 3.452 | 0.678 | 72.44 | 6619 |
| TLLCM | 4.240 | 12.09 | 88.37 | 6243 | 3.311 | 0.784 | 77.39 | 6738 |
| IPI | 1.09 | 50.23 | 87.05 | 30467 | 27.92 | 20.46 | 81.37 | 16.18 |
| NRAM | 13.54 | 18.95 | 60.04 | 25.23 | 15.25 | 9.899 | 70.68 | 16.93 |
| RIPT | 16.79 | 20.65 | 69.76 | 59.33 | 14.11 | 8.093 | 77.55 | 28.31 |
| PSTNN | 30.30 | 33.67 | 72.80 | 48.99 | 24.57 | 17.93 | 71.99 | 35.26 |
| MSLSTIPT | 1.080 | 0.814 | 0.052 | 8.183 | 11.43 | 5.932 | 79.03 | 1524 |
| IRPruneDet | 75.12 | 74.30 | 98.61 | 2.96 | 64.54 | 62.71 | 91.74 | 16.04 |

Table 1: Comparison with traditional methods on NUAA-SIRST and IRSTD-1k in terms of IoU(%), nIoU(%), Pd(%), and Fa(10^−6).

| Method | IoU↑ | nIoU↑ | Pd↑ | Fa↓ | FLOPs↓ | Params↓ |
|---|---|---|---|---|---|---|
| U-Net | 73.31 | 70.92 | 96.15 | 39.87 | 1.922 | 0.5023 |
| +l1-norm | 67.18 | 67.84 | 86.55 | 27.34 | 0.957 | 0.1887 |
| +WCP | 74.25 | 73.59 | 96.37 | 8.94 | 0.938 | 0.1802 |
| +l1-norm+SCR | 73.51 | 72.01 | 93.06 | 16.45 | 0.957 | 0.1887 |
| +WCP+SCR | 75.12 | 74.30 | 98.61 | 2.96 | 0.938 | 0.1802 |

Table 2: Ablation study of IRPruneDet.
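For reference, the pixel-level IoU metric used above can be computed as in the following minimal sketch, operating on binary masks given as plain nested lists (nIoU, which additionally averages over individual targets, is omitted here):

```python
def pixel_iou(pred, gt):
    """Pixel-level Intersection over Union between two binary masks.

    `pred` and `gt` are equal-shaped nested lists of 0/1 values;
    returns |pred AND gt| / |pred OR gt| (0.0 if the union is empty).
    """
    inter = union = 0
    for row_p, row_g in zip(pred, gt):
        for p, g in zip(row_p, row_g):
            inter += 1 if (p and g) else 0
            union += 1 if (p or g) else 0
    return inter / union if union else 0.0
```

For example, a prediction covering two pixels of which one overlaps a one-pixel ground-truth target yields an IoU of 0.5.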
The training process lasts for 500 epochs with a weight decay of 10^−4 and a batch size of 16. By default, we set ∆β_SCR to 0.5π and β_0 to 1. We apply IRPruneDet to a U-Net18 baseline model and compare it with representative CNN-based methods: MDvsFA (Wang, Zhou, and Wang 2019), ACM (Dai et al. 2021a), ALCNet (Dai et al. 2021b), and FC3-Net (Zhang et al. 2022b), and traditional methods: Top-Hat (Bai and Zhou 2010), Max-Median (Deshpande et al. 1999), WSLCM (Han et al. 2020), TLLCM (Chen et al. 2013), IPI (Gao et al. 2013), NRAM (Zhang et al. 2018), RIPT (Dai and Wu 2017), PSTNN (Zhang and Peng 2019), and MSLSTIPT (Sun, Yang, and An 2020).

Ablation Study

To investigate the effectiveness of each component in IRPruneDet, we conduct ablation studies on the NUAA-SIRST dataset; Table 2 shows the results. (1) Impact of WCP. We apply the commonly used l1-norm criterion instead of the WCP strategy to measure the importance of channels. The comparative experiments demonstrate that pruning sparse models with the l1-norm criterion can erroneously prune channels that encode weak but important features, leading to a decrease in IoU and nIoU despite the compressed model size. (2) Impact of SCR. We control the usage of SCR under the same pruning criteria. From the experimental results, it is observed that SCR effectively recovers channels that were erroneously pruned early on. This prevents the erroneous pruning of channels containing important information, thereby enhancing the model's accuracy.

Figure 3: Visual results of different IRSTD methods (columns include the input IR image, Top-Hat, IPI, MDvsFA, ACM, FC3-Net, IRPruneDet, and GT). The boxes in red, yellow, and blue represent correct, missed, and false detections, respectively. Close-up views are shown in the bottom corners.
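The l1-norm baseline criterion referenced in this ablation can be sketched as follows. This is a plain-Python illustration of norm-based channel ranking (the kind of criterion in Li et al. 2016), not the authors' wavelet-based WCP implementation:

```python
def l1_channel_scores(weight):
    """l1-norm importance per output channel of a conv weight tensor,
    given as a nested list of shape [out_ch][in_ch][k][k]."""
    return [sum(abs(w) for in_ch in ch for row in in_ch for w in row)
            for ch in weight]

def channels_to_prune(weight, ratio):
    """Indices of the lowest-scoring output channels under the
    l1-norm criterion, pruning the given fraction of channels."""
    scores = l1_channel_scores(weight)
    n_prune = int(len(scores) * ratio)
    order = sorted(range(len(scores)), key=scores.__getitem__)
    return sorted(order[:n_prune])
```

As the ablation observes, such a criterion discards any channel whose weights are uniformly small, even when that channel encodes a weak but crucial small-target response.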
| U-Net | IoU↑ | nIoU↑ | Pd↑ | Fa↓ | FLOPs↓ | Params↓ |
|---|---|---|---|---|---|---|
| +SFP (l1-norm) | 67.18 | 67.84 | 86.55 | 27.34 | 0.957 | 0.1887 |
| +SFP (l2-norm) | 70.71 | 69.64 | 90.14 | 23.55 | 0.957 | 0.1887 |
| +FPGM | 71.54 | 70.80 | 91.47 | 33.64 | 1.022 | 0.1806 |
| +ASFP (l1-norm) | 70.82 | 70.88 | 90.93 | 26.33 | 0.957 | 0.1887 |
| +ASFP (l2-norm) | 71.86 | 71.90 | 91.83 | 21.42 | 0.957 | 0.1887 |
| +IRPrune | 75.12 | 74.30 | 98.61 | 2.96 | 0.938 | 0.1802 |

Table 3: Comparison with other pruning methods.

Comparisons with Other Pruning Methods

To our knowledge, no lightweight network architecture exists that is specifically designed for the IRSTD task. Thus, we compare IRPruneDet with other representative general pruning methods (He et al. 2018, 2019b,a). We conduct experiments under the constraint of an equal global sparsity level of 50% and evaluate the performance based on both target-level and pixel-level metrics. As shown in Table 3, the results demonstrate that the pruning technique utilized in developing IRPruneDet is more effective for the IRSTD task and outperforms other general pruning methods, validating its ability to compress model size while maintaining detection accuracy.

Quantitative Results

In Table 1 and Table 4, it can be observed that CNN-based IRSTD models outperform traditional algorithms in both pixel-level and object-level detection accuracy. However, CNN-based models have a high computational cost; e.g., the FLOPs and parameters can reach up to 998.6G and 6.896M, respectively, resulting in significant storage and computational overhead. After applying the proposed pruning technique to the baseline model based on U-Net18, we obtain a more efficient sparse network, IRPruneDet. This owes to the WCP strategy for channel pruning and SCR for channel reconstruction, which effectively prevent erroneous channel pruning during the dynamic pruning process. In terms of both detection accuracy and model complexity, our IRPruneDet outperforms all other methods on the NUAA-SIRST dataset, achieving an impressive IoU of 75.12% with only 0.938G FLOPs and 0.1802M parameters.
Visual Results

In Figure 3, we present visual object segmentation results of different IRSTD methods. IRPruneDet achieves superior target shapes and more accurate localization compared to other methods. For example, in the first, second, third, and fifth test images, our method produces masks that are closer to the ground truth than those of other methods. In the first, fourth, and fifth test images, it can be observed that our method achieves accurate target localization and avoids false or missed detections even in complex backgrounds. Besides, IRPruneDet can effectively capture the edge information of small targets in infrared images, even in the presence of complex backgrounds, noise, and clutter interference.

| Method | NUAA-SIRST IoU↑ | nIoU↑ | Pd↑ | Fa↓ | IRSTD-1k IoU↑ | nIoU↑ | Pd↑ | Fa↓ | FLOPs↓ | Params↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| MDvsFA | 60.30 | 58.26 | 89.35 | 56.35 | 49.50 | 47.41 | 82.11 | 80.33 | 998.6 | 3.919 |
| ACM | 72.33 | 71.43 | 96.33 | 9.325 | 60.97 | 58.02 | 90.58 | 21.78 | 2.009 | 0.5198 |
| ALCNet | 74.31 | 73.12 | 97.34 | 20.21 | 62.05 | 59.58 | 92.19 | 31.56 | 2.127 | 0.5404 |
| FC3-Net | 74.75 | 73.81 | 98.13 | 3.21 | 64.98 | 63.59 | 92.93 | 15.73 | 10.57 | 6.896 |
| IRPruneDet | 75.12 | 74.30 | 98.61 | 2.96 | 64.54 | 62.71 | 91.74 | 16.04 | 0.9380 | 0.1802 |

Table 4: Comparison with CNN-based methods on NUAA-SIRST and IRSTD-1k in terms of IoU(%), nIoU(%), Pd(%), Fa(10^−6), FLOPs(10^9), and number of parameters, i.e., Params(10^6).

In Figure 4, we demonstrate the channel selection for the downsampling and upsampling convolutional layers during the pruning process.

Figure 4: Visualization of selected feature maps during pruning. The first conv layer of the downsampling process is shown on the left, and the last conv layer of the upsampling process is shown on the right. Blue boxes denote the channels selected by the norm-based pruning method (left: channel 4; right: channels 4, 5, 12). Red boxes denote the channels selected by our WCP-based pruning method (left: channel 6; right: channels 6, 8, 9). Yellow boxes denote the channels retained by both pruning methods (left: channels 1, 5, 11, 13, 16; right: channels 1, 2, 7, 11, 16). (a) Input IR image. (b) Detection result.

From the feature maps generated by different channels, we can observe that our method discards channels with excessive background noise and missing object edge information, such as channel 4 in the downsampling and upsampling processes. Furthermore, our method not only considers the responses in the channels but also emphasizes compelling target features, such as channels 6, 8, and 9 in the upsampling process. Although these channels may have small responses, they contain critical information about the IR targets.

Conclusion

In this paper, we introduce the idea of network pruning to the IRSTD task and develop a novel and efficient IRPruneDet model. IRPruneDet implements wavelet sparse regularization with the differential operator of the weight matrix, which efficiently discovers effective sparse structures in a pruning process without added memory usage. In addition, during the dynamic pruning process, it incorporates a soft recovery mechanism for pruned channels, preventing the premature discarding of channels containing crucial features. Experiments on two public datasets validate that IRPruneDet significantly cuts FLOPs and parameters while maintaining or even improving detection accuracy. In future work, it is worth investigating the integration of the proposed method into alternative model architectures, such as vision transformers. Additionally, it would be valuable to explore more effective loss functions to enhance the preservation of useful edge information during the pruning process.
Acknowledgments

This work is supported in part by the National Natural Science Foundation of China under Grants 62272363, 62036007, 62061047, 62176195, and U21A20514, the Young Elite Scientists Sponsorship Program by CAST under Grant 2021QNRC001, the Youth Talent Promotion Project of Shaanxi University Science and Technology Association under Grant 20200103, the Special Project on Technological Innovation and Application Development under Grant No. cstc2020jscx-dxwtB0032, the Chongqing Excellent Scientist Project under Grant No. cstc2021ycjh-bgzxm0339, and the Joint Laboratory for Innovation in Onboard Computing and Electronic Technology under Grant 2024KFKT0011.

References

Bai, X.; and Zhou, F. 2010. Analysis of new top-hat transformation and the application for infrared dim small target detection. Pattern Recognition, 43(6): 2145–2156. Chapple, P. B.; Bertilone, D. C.; Caprari, R. S.; Angeli, S.; and Newsam, G. N. 1999. Target detection in infrared and SAR terrain images using a non-Gaussian stochastic model. In Targets and Backgrounds: Characterization and Representation V, volume 3699, 122–132. SPIE. Chen, C. P.; Li, H.; Wei, Y.; Xia, T.; and Tang, Y. Y. 2013. A local contrast method for small infrared target detection. IEEE transactions on geoscience and remote sensing, 52(1): 574–581. Chui, C. K. 1992. An introduction to wavelets, volume 1. Academic press. Cuccurullo, G.; Giordano, L.; Albanese, D.; Cinquanta, L.; and Di Matteo, M. 2012. Infrared thermography assisted control for apples microwave drying. Journal of food engineering, 112(4): 319–325. Dai, Y.; and Wu, Y. 2017. Reweighted infrared patch-tensor model with both nonlocal and local priors for single-frame small target detection. IEEE journal of selected topics in applied earth observations and remote sensing, 10(8): 3752–3767. Dai, Y.; Wu, Y.; Zhou, F.; and Barnard, K. 2021a.
Asymmetric contextual modulation for infrared small target detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 950–959. Dai, Y.; Wu, Y.; Zhou, F.; and Barnard, K. 2021b. Attentional local contrast networks for infrared small target detection. IEEE Transactions on Geoscience and Remote Sensing, 59(11): 9813–9824. Denton, E. L.; Zaremba, W.; Bruna, J.; LeCun, Y.; and Fergus, R. 2014. Exploiting linear structure within convolutional networks for efficient evaluation. Advances in neural information processing systems, 27. Deshpande, S. D.; Er, M. H.; Venkateswarlu, R.; and Chan, P. 1999. Max-mean and max-median filters for detection of small targets. In Signal and Data Processing of Small Targets 1999, volume 3809, 74–83. SPIE. Ding, X.; Ding, G.; Guo, Y.; Han, J.; and Yan, C. 2019. Approximated oracle filter pruning for destructive cnn width optimization. In International Conference on Machine Learning, 1607–1616. PMLR. Gao, C.; Meng, D.; Yang, Y.; Wang, Y.; Zhou, X.; and Hauptmann, A. G. 2013. Infrared patch-image model for small target detection in a single image. IEEE transactions on image processing, 22(12): 4996–5009. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative adversarial networks. Communications of the ACM, 63(11): 139–144. Guo, S.; Wang, Y.; Li, Q.; and Yan, J. 2020. Dmcp: Differentiable markov channel pruning for neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1539–1547. Guo, Y.; Yao, A.; and Chen, Y. 2016. Dynamic network surgery for efficient dnns. Advances in neural information processing systems, 29. Han, J.; Moradi, S.; Faramarzi, I.; Zhang, H.; Zhao, Q.; Zhang, X.; and Li, N. 2020. Infrared small target detection based on the weighted strengthened local contrast measure. IEEE Geoscience and Remote Sensing Letters, 18(9): 1670– 1674. 
Han, S.; Pool, J.; Tran, J.; and Dally, W. 2015. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28. He, H.; Liu, J.; Pan, Z.; Cai, J.; Zhang, J.; Tao, D.; and Zhuang, B. 2021. Pruning self-attentions into convolutional layers in single path. arXiv preprint arXiv:2111.11802. He, Y.; Dong, X.; Kang, G.; Fu, Y.; Yan, C.; and Yang, Y. 2019a. Asymptotic soft filter pruning for deep convolutional neural networks. IEEE transactions on cybernetics, 50(8): 3594–3604. He, Y.; Kang, G.; Dong, X.; Fu, Y.; and Yang, Y. 2018. Soft filter pruning for accelerating deep convolutional neural networks. arXiv preprint arXiv:1808.06866. He, Y.; Liu, P.; Wang, Z.; Hu, Z.; and Yang, Y. 2019b. Filter pruning via geometric median for deep convolutional neural networks acceleration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4340–4349. He, Y.; Liu, P.; Zhu, L.; and Yang, Y. 2022. Filter pruning by switching to neighboring CNNs with good attributes. IEEE Transactions on Neural Networks and Learning Systems. Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Huang, Z.; Shao, W.; Wang, X.; Lin, L.; and Luo, P. 2021. Rethinking the pruning criteria for convolutional neural network. Advances in Neural Information Processing Systems, 34: 16305–16318. Kang, M.; and Han, B. 2020. Operation-aware soft channel pruning using differentiable masks. In International Conference on Machine Learning, 5122–5131. PMLR. Law, W.-C.; Xu, Z.; Yong, K.-T.; Liu, X.; Swihart, M. T.; Seshadri, M.; and Prasad, P. N. 2016. Manganese-doped near-infrared emitting nanocrystals for in vivo biomedical imaging. Optics express, 24(16): 17553–17561. Li, B.; Xiao, C.; Wang, L.; Wang, Y.; Lin, Z.; Li, M.; An, W.; and Guo, Y. 2022a. Dense nested attention network for infrared small target detection. IEEE Transactions on Image Processing. 
Li, H.; Kadav, A.; Durdanovic, I.; Samet, H.; and Graf, H. P. 2016. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710. Li, Z.; Meunier, D.; Mollenhauer, M.; and Gretton, A. 2022b. Optimal rates for regularized conditional mean embedding learning. Advances in Neural Information Processing Systems, 35: 4433–4445. Lin, M.; Ji, R.; Wang, Y.; Zhang, Y.; Zhang, B.; Tian, Y.; and Shao, L. 2020. Hrank: Filter pruning using high-rank feature map. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1529–1538. Lin, S.; Ji, R.; Yan, C.; Zhang, B.; Cao, L.; Ye, Q.; Huang, F.; and Doermann, D. 2019. Towards optimal structured cnn pruning via generative adversarial learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2790–2799. Liu, Z.; Li, J.; Shen, Z.; Huang, G.; Yan, S.; and Zhang, C. 2017. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE international conference on computer vision, 2736–2744. Mallat, S. G. 1989. A theory for multiresolution signal decomposition: the wavelet representation. IEEE transactions on pattern analysis and machine intelligence, 11(7): 674–693. McIntosh, B.; Venkataramanan, S.; and Mahalanobis, A. 2020. Infrared target detection in cluttered environments by maximization of a target to clutter ratio (TCR) metric using a convolutional neural network. IEEE Transactions on Aerospace and Electronic Systems, 57(1): 485–496. Rastegari, M.; Ordonez, V.; Redmon, J.; and Farhadi, A. 2016. Xnor-net: Imagenet classification using binary convolutional neural networks. In Proceedings of the European Conference on Computer Vision, 525–542. Sui, Y.; Yin, M.; Xie, Y.; Phan, H.; Aliari Zonouz, S.; and Yuan, B. 2021. Chip: Channel independence-based pruning for compact neural networks. Advances in Neural Information Processing Systems, 34: 24604–24616.
Sun, Y.; Yang, J.; and An, W. 2020. Infrared dim and small target detection via multiple subspace learning and spatial-temporal patch-tensor model. IEEE Transactions on Geoscience and Remote Sensing, 59(5): 3737–3752. Tang, Y.; Wang, Y.; Xu, Y.; Tao, D.; Xu, C.; Xu, C.; and Xu, C. 2020. Scop: Scientific control for reliable neural network pruning. Advances in Neural Information Processing Systems, 33: 10936–10947. Wang, H.; Zhou, L.; and Wang, L. 2019. Miss detection vs. false alarm: Adversarial learning for small object segmentation in infrared images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8509–8518. Wang, Z.; Li, C.; and Wang, X. 2021. Convolutional neural network pruning with structural redundancy reduction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14913–14922. Zhang, J.; and Tao, D. 2020. Empowering things with intelligence: a survey of the progress, challenges, and opportunities in artificial intelligence of things. IEEE Internet of Things Journal, 8(10): 7789–7817. Zhang, L.; Peng, L.; Zhang, T.; Cao, S.; and Peng, Z. 2018. Infrared small target detection via non-convex rank approximation minimization joint l2,1 norm. Remote Sensing, 10(11): 1821. Zhang, L.; and Peng, Z. 2019. Infrared small target detection based on partial sum of the tensor nuclear norm. Remote Sensing, 11(4): 382. Zhang, M.; Bai, H.; Zhang, J.; Zhang, R.; Wang, C.; Guo, J.; and Gao, X. 2022a. RKformer: Runge-Kutta Transformer with Random-Connection Attention for Infrared Small Target Detection. In Proceedings of the 30th ACM International Conference on Multimedia, 1730–1738. Zhang, M.; Yang, H.; Yue, K.; Zhang, X.; Zhu, Y.; and Li, Y. 2023. Thermodynamics-Inspired Multi-Feature Network for Infrared Small Target Detection. Remote Sensing, 15(19): 4716. Zhang, M.; Yue, K.; Zhang, J.; Li, Y.; and Gao, X. 2022b. Exploring Feature Compensation and Cross-level Correlation for Infrared Small Target Detection.
In Proceedings of the 30th ACM International Conference on Multimedia, 1857–1865. Zhang, M.; Zhang, R.; Yang, Y.; Bai, H.; Zhang, J.; and Guo, J. 2022c. ISNET: Shape matters for infrared small target detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 877–886. Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; and Ye, J. 2023. Object detection in 20 years: A survey. Proceedings of the IEEE.
M2Doc: A Multi-Modal Fusion Approach for Document Layout Analysis

Ning Zhang1,2, Hiuyi Cheng1, Jiayu Chen2, Zongyuan Jiang1, Jun Huang2, Yang Xue1, Lianwen Jin1,3*
1 South China University of Technology
2 Platform of AI (PAI), Alibaba Group
3 SCUT-Zhuhai Institute of Modern Industrial Innovation, Zhuhai, China
johnning2333@gmail.com, {eechenghiuyi, eejiangzongyuan}@mail.scut.edu.cn, {yunji.cjy, huangjun.hj}@alibaba-inc.com, {yxue, eelwjin}@scut.edu.cn

Abstract

Document layout analysis is a crucial step for intelligent document understanding. However, many existing methods primarily focus on the visual aspects and overlook the textual features of documents. Although document pre-trained models utilize multi-modal features during the pre-training phase, they tend to operate as a unimodal pipeline when it comes to layout analysis tasks. Furthermore, current multi-modal methods perform worse than unimodal detectors on complex layout analysis datasets. To address these limitations, we propose an effective and pluggable multi-modal fusion approach named M2Doc, which fuses visual and textual features for better layout detection. M2Doc contains two pluggable multi-modal fusion modules, early-fusion and late-fusion, which align and fuse visual and textual features at the pixel level and block level. Benefiting from the concision and effectiveness of M2Doc, it can be easily applied to various detectors for better layout detection, including two-stage and end-to-end object detectors. Our experimental results demonstrate significant performance improvements in detectors equipped with M2Doc on datasets such as DocLayNet (+11.3 mAP) and M6Doc (+1.9 mAP). Furthermore, through the integration of the DINO detector with M2Doc, we achieve state-of-the-art results on DocLayNet (89.0 mAP), M6Doc (69.9 mAP), and PubLayNet (95.5 mAP). The code will be publicly released at https://github.com/johnning2333/M2Doc.
Introduction

Document layout analysis (DLA) is a fundamental task in document understanding (Namboodiri and Jain 2007), which aims to detect and segment different types of regions and analyze their relationships within a document image. DLA can be divided into two categories: physical and logical layout analysis (Lee et al. 2019). Physical layout analysis focuses on detecting the fundamental blocks within a document, such as text, figures, and tables; representative datasets include PubLayNet (Zhong, Tang, and Jimeno Yepes 2019) and PubMed (Li et al. 2020a). Logical layout analysis requires finer-grained layout detection based on the document's structure and content semantics; representative datasets include PRImA (Antonacopoulos et al. 2009), DocBank (Li et al. 2020b), DocLayNet (Pfitzmann et al. 2022), and M6Doc (Hiuyi et al. 2023).

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: The mAP curves on the DocLayNet test set for our method and previous methods (x-axis: epochs; y-axis: mean Average Precision, mAP%; curves: Cascade Mask R-CNN, SwinDocSegmenter, Ours (Cascade Mask R-CNN), Mask R-CNN, DINO, Ours (DINO), VSR, YOLOv5, and the Human baseline).

Many current DLA models, such as TransDLANet (Hiuyi et al. 2023), SelfDocSeg (Subhajit et al. 2023), and SwinDocSegmenter (Ayan et al. 2023), focus on enhancing generic object detectors to better match layout analysis tasks. However, these models tend to rely solely on visual features while overlooking the textual features of documents. In recent years, self-supervised models such as LayoutLM (Yupan et al. 2022) and StructText (Yu et al. 2023) have demonstrated remarkable progress in a variety of Document AI tasks. These models primarily focus on developing better pre-training tasks to align cross-modal features and enhance models' ability to represent multiple modalities.
Despite incorporating various modality inputs and applying multiple pretext tasks during the pre-training phase, these models are only used to initialize the backbone of generic object detectors when transferred to layout analysis tasks. Essentially, these pipelines treat DLA as an image-centric object detection problem rather than a multi-modal problem.

Figure 2: The overall framework of M2Doc plugged into detectors (components: Backbone, Early Fusion, RPN/Encoder, Late Fusion, RCNN/Decoder; both fusion modules are pluggable).

Currently, numerous multi-modal DLA models are being developed, with VSR (Peng et al. 2021) being a representative example. VSR employs a complex network, incorporating textual inputs at multiple granularities, a two-stream backbone, and Transformer layers for relation modeling. However, VSR exhibits limited effectiveness and occasionally performs worse than unimodal object detectors when applied to complex logical layout analysis. Considering the aforementioned limitations and issues, we have rethought the distinctions between generic object detection and DLA and identified two main ones: (1) DLA scenarios mostly involve text-rich documents, which makes multi-modal methods more appropriate and intuitive; (2) the textual instances in documents exhibit connectivity and logical relationships. For instance, text positioned beneath an image is likely to be a caption, while instances that are contextually connected are likely to belong to the same category. Considering the abundant textual content and logical relationships in the majority of DLA application scenarios, a multi-modal model combining visual and textual features is a promising solution. To this end, we propose an effective and pluggable multi-modal fusion approach named M2Doc, which aims to convert unimodal detectors into multi-modal detectors for DLA tasks.
As illustrated in Fig. 2, it can be easily implemented on both two-stage and end-to-end detectors. Firstly, we obtain textual grid representations using the pre-trained language model BERT (Jacob et al. 2019). As the textual representation is aligned to the visual representation at the pixel level, we use a single backbone to extract both textual and visual features. Specifically, we densely fuse the visual and textual features at each scale using the early-fusion module. Additionally, we use the late-fusion module to explicitly align block-level visual and textual features. By combining these early-fusion and late-fusion modules within the M2Doc approach, we effectively align and fuse visual and textual features at both the pixel and block levels, enabling improved performance in DLA tasks.

To validate the effectiveness of our proposed approach, we conducted extensive experiments on physical and logical layout analysis datasets. Our results demonstrate that Cascade Mask R-CNN (Cai and Vasconcelos 2018) and DINO (Hao et al. 2023) show promising improvements with the use of M2Doc on the DocLayNet dataset, as shown in Fig. 1. Moreover, ablation study results show that various detectors can benefit from M2Doc. The contributions of this paper are summarized as follows:

• To endow existing unimodal detectors with multi-modal capabilities when handling DLA tasks, we propose a pluggable multi-modal fusion approach, M2Doc.
• M2Doc can be easily integrated into existing two-stage and end-to-end detectors. Our experimental results indicate that many detectors benefit from M2Doc, showcasing its versatility and wide applicability.
• Our experimental results demonstrate that DINO with M2Doc outperforms previous models by a large margin and achieves state-of-the-art performance on complex logical layout analysis datasets.

Related Work

This paper analyzes the document layout analysis task from the perspective of the modalities used, covering unimodal and multi-modal models.
Unimodal Document Layout Analysis

Unimodal layout analysis relies on visual features to analyze document layout. Several approaches utilize object detection and instance segmentation methods to detect and segment document regions. PubLayNet (Zhong, Tang, and Jimeno Yepes 2019) directly uses Faster R-CNN (Ren et al. 2015) and Mask R-CNN (He et al. 2017) for paper layout analysis. Lee et al. (Lee et al. 2019) propose trainable multiplication layers combined with U-Net (Ronneberger, Fischer, and Brox 2015). Li et al. (Li, Yin, and Liu 2020) add a domain adaptation module based on Faster R-CNN. TransDLANet (Hiuyi et al. 2023) uses three parameter-shared multi-layer perceptrons (MLPs) on top of ISTR. SwinDocSegmenter (Ayan et al. 2023) utilizes both high-level and low-level features of document images to initialize the queries of DINO. SelfDocSeg (Subhajit et al. 2023) generates pseudo-layouts to pre-train the image encoder before fine-tuning on layout analysis datasets, following BYOL (Grill et al. 2020). Although researchers have attempted to improve the performance of the original models, these approaches are unable to utilize the semantic information of documents. In recent years, many researchers have attempted to introduce multi-modal information from a self-supervised perspective, such as LayoutLM (Yupan et al. 2022), BEiT (Hangbo et al. 2022), DiT (Li et al. 2022), UDoc (Jiuxiang et al. 2021), and StrucText (Yu et al. 2023). The training of these models comprises two phases: (1) the pre-training phase, where models are trained in a self-supervised manner on massive unlabeled data utilizing multi-modal inputs; (2) the fine-tuning phase, in which models are trained with supervision on labeled data to complete downstream tasks using the pre-trained weights.
With sufficient pre-training on enormous amounts of document data, these models perform well when transferred to various downstream Document AI tasks, including image classification, layout analysis, and information extraction. For instance, DiT is trained in a self-supervised manner on 42 million document images using Masked Image Modeling (MIM) as its pre-training objective. However, during the fine-tuning phase, it merely serves as the feature backbone of Cascade R-CNN (Cai and Vasconcelos 2018). Despite the detector benefiting from DiT, the pipeline remains essentially unimodal, without utilizing semantic features associated with the downstream DLA datasets.

Multi-Modal Document Layout Analysis

Multi-modal DLA models focus on utilizing the multi-modal features of text blocks. For instance, MFCN (Yang et al. 2017) uses skip-gram to obtain sentence-level textual features and combines textual and visual features in the decoder. VSR (Peng et al. 2021) combines three granularities, Chargrid (Katti et al. 2018), Wordgrid, and Sentencegrid (Denk and Reisswig 2019), into full-text embedding maps as textual input. It then uses two backbones to extract visual and textual features, which are fused in the multi-scale adaptive aggregation (MSAA) module. Furthermore, VSR emphasizes the importance of relation modeling for layout analysis and uses Transformer layers (Vaswani et al. 2017) to model the relations between text blocks for further feature enhancement. These methods suffer from either shallow modality fusion or complex network structures, which compromises their robustness and effectiveness on complex datasets.

Method

M2Doc is a pluggable approach that can be directly applied to enhance existing document layout analysis detectors, as shown in Fig. 2. The detailed architecture of M2Doc is depicted in Fig. 3 (a).
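The grid-style text embeddings mentioned above (the Chargrid/Wordgrid/Sentencegrid family) can be sketched minimally as follows. The simple box-filling rule and the list-based shapes are illustrative assumptions, with precomputed vectors standing in for the language-model embeddings:

```python
def build_text_grid(height, width, dim, ocr_items):
    """Scatter per-word embeddings into a 2D grid (grid-embedding style).

    ocr_items: list of (embedding, (x1, y1, x2, y2)) pairs; each embedding
    is a list of length `dim`, and the box gives top-left/bottom-right
    pixel coordinates. Pixels covered by no box remain zero vectors.
    """
    grid = [[[0.0] * dim for _ in range(width)] for _ in range(height)]
    for emb, (x1, y1, x2, y2) in ocr_items:
        for y in range(max(0, y1), min(height, y2)):
            for x in range(max(0, x1), min(width, x2)):
                grid[y][x] = list(emb)
    return grid
```

The key property is that the resulting (height, width, dim) tensor is spatially aligned with the image pixel-for-pixel, which is what lets a detector consume text as just another input map.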
The main pipeline of our method consists of four phases: (1) Textual Grid Representation, where a pre-trained BERT converts images into textual grid representations; (2) Feature Extraction, which employs a single backbone to extract both visual and textual features; (3) Early Fusion, where textual and visual features are fused at corresponding scales; (4) Late Fusion, which fuses the visual and textual features of text blocks, assigned based on the Intersection over Union (IoU) between candidate bounding boxes and optical character recognition (OCR) bounding boxes. Finally, the layout bounding boxes and categories are predicted based on the fused features.

Textual Grid Representations

Consider a document image I ∈ R^{H×W×3} with N words, where H and W represent the height and width of the image. We have corresponding labels (w_i, b_i) for i ∈ {1, . . . , N}, where w_i, originating from the OCR results, signifies the i-th word or sentence, and b_i = [(x1_i, y1_i), (x2_i, y2_i)] indicates the coordinates of the top-left and bottom-right corners of the bounding box of the i-th word. To obtain the textual representation of the document, we follow the information extraction method BERTgrid (Denk and Reisswig 2019). We align the OCR results one by one and feed them into the pre-trained BERT to generate sequential text embeddings T_i ∈ R^{d×1}, where d is the feature dimension of the BERT model, with a value of 768, as shown in Eq. (1):

(T_1, . . . , T_N) = BERT(w_1, . . . , w_N). (1)

Finally, we transform the sequential text embeddings T_i into a 2D grid representation G ∈ R^{H×W×d} based on b_i, defined as follows:

G_{x,y} = T_i, if (x, y) ∈ b_i; 0, otherwise. (2)

This grid representation maximally preserves the document's layout and aligns the textual grid representation G with the original image I at the pixel level.

Feature Extraction

We employ a single backbone to extract both textual and visual features, as shown in Fig.
3. In contrast to VSR, we contend that using two distinct backbones is unnecessary, owing to the precise pixel-level alignment between the visual and textual inputs. Consequently, our proposed method has better generality and requires fewer model parameters than VSR. We use convolution layers to align the channel dimensions of the visual and textual inputs before feeding them into the first ResNet block (Kaiming et al. 2016). Subsequently, downsampling operations are performed in four ResNet blocks to obtain features at different scales, each being {1/4, 1/8, 1/16, 1/32} of the original input resolution. We thus obtain the corresponding visual features P_θ and textual features S_θ, where θ ∈ {1, 2, 3, 4}.

Early Fusion

Since end-to-end detectors require the backbone features to generate anchors, and two-stage detectors apply RoIAlign to select features from the backbone, modality fusion is essential before feeding the features into the Region Proposal Network (RPN) (Ren et al. 2015) or the Transformer encoder. We adopt the gate-like mechanism from the referring image segmentation model LAVT (Zhao et al. 2022) to obtain fusion scores that adaptively vary with S_θ:

α_θ = η(S_θ)    (3)

F_θ = LayerNorm(α_θ ⊙ S_θ + P_θ)    (4)

where ⊙ denotes element-wise multiplication and η(·) is a function consisting of two 1×1 convolutional layers, each followed by an activation layer.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7235

(Fig. 3 panels: (a) overall pipeline of M2Doc; (b) late fusion for two-stage detectors; (c) late fusion for end-to-end detectors.)

To calculate the score of S_θ,
we use a 1×1 convolutional layer followed by a ReLU activation layer (Nair and Hinton 2010). We then apply another 1×1 convolutional layer and a Tanh activation layer to restrict the score to the range of (0, 1). We obtain the weighted textual feature by multiplying the score α_θ with S_θ. Finally, we add the weighted textual feature to the visual features P_θ to obtain the fused feature F_θ. Since the textual grid representation G equals 0 at pixels without text, as given in Eq. (2), we normalize the fused features with a LayerNorm layer, which makes their distribution more consistent. After the early fusion phase, well-fused features F_θ are generated.

Figure 3: The pipeline of our proposed method. The modules with green, red, yellow, and blue backgrounds represent the Textual Grid Representation, Feature Extraction, Early Fusion, and Late Fusion, respectively.

Late Fusion

After feeding F_θ into either the RPN or the Transformer encoder, we generate numerous candidate bounding boxes, and we then fuse features based on them. Specifically, we fuse the visual features P_θ with the block-level textual features assigned to each candidate bounding box. This fusion yields more accurate predictions of bounding box locations and classifications. Since we have candidate bounding boxes and the sequential text embeddings T_i from Eq. (1), we can assign each candidate bounding box its own block-level textual features through an IoU matching operation. Accounting for the distinct networks employed to generate candidate bounding boxes, we discuss end-to-end and two-stage detectors separately.

End-to-End Detectors

We use DINO (Hao et al.
2023) as the representative end-to-end detector for illustration. As shown in Fig. 3 (c), DINO first flattens the fused features with corresponding positional encodings and feeds them into the Transformer encoder layers to enhance the feature representation. The output of the Transformer encoder serves as the keys and values of each layer of the Transformer decoder. As for the queries of the decoder layers, DINO splits them into two parts: positional queries and content queries. The positional queries explicitly indicate the positions of candidate bounding boxes, while the content queries are learnable embeddings that represent the features of candidate bounding boxes. In each decoder layer, DINO gradually refines the positions and categories of the candidate bounding boxes. To enhance the multi-modal feature representations of the content queries, we calculate IoU_{i,j} between the predicted candidate boxes r_j, j ∈ {1, . . . , K}, and the OCR bounding boxes b_i, i ∈ {1, . . . , N}:

IoU_{i,j} = |r_j ∩ b_i| / |r_j ∪ b_i|    (5)

When IoU_{i,j} is greater than a threshold, the word bounding box b_i is considered to lie inside the candidate box r_j. We use the indicator vector J_j ∈ {0, 1}^{N×1}, whose i-th entry is 1 if IoU_{i,j} > IoU_threshold and 0 otherwise, to represent which of the N words the candidate bounding box r_j contains. The block-level textual feature E_j ∈ R^{c×1} is then constructed as

E_j = Γ(T · J_j)    (6)

where T = (T_1, . . . , T_N) ∈ R^{d×N} is the textual feature matrix obtained in Eq. (1), and Γ denotes the MLP layers that map the textual embedding dimension d to the Transformer decoder channel dimension c. We add the block-level textual features E_j to the content queries Query_j to obtain multi-modal content queries:

Query_j = Query_j + λ_1 E_j    (7)

where λ_1 is an adjustable hyper-parameter. We then use the new content queries as input to the decoder layers to obtain finer predictions.
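The matching and query-fusion steps of Eqs. (5)-(7) can be sketched in a few lines. The following is an illustrative numpy version, not the authors' implementation: the MLP Γ is replaced by the identity for brevity (so d must equal c here), and the concrete threshold value is an assumption.

```python
import numpy as np

def iou(r, b):
    """Eq. (5): intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(r[0], b[0]), max(r[1], b[1])
    ix2, iy2 = min(r[2], b[2]), min(r[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((r[2] - r[0]) * (r[3] - r[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def fuse_content_queries(T, ocr_boxes, cand_boxes, queries, lam=1.0, thr=0.5):
    """Eqs. (5)-(7): add block-level textual features to content queries.

    T: (N, d) sequential text embeddings; ocr_boxes: N word boxes b_i;
    cand_boxes: K candidate boxes r_j; queries: (K, c) content queries.
    Gamma (the d -> c MLP) is taken as the identity in this sketch.
    """
    T = np.asarray(T, dtype=float)
    out = np.array(queries, dtype=float)
    for j, r in enumerate(cand_boxes):
        # J_j: binary inclusion vector over the N OCR word boxes
        J = np.array([1.0 if iou(r, b) > thr else 0.0 for b in ocr_boxes])
        E_j = T.T @ J                # Eq. (6): sum embeddings of included words
        out[j] = out[j] + lam * E_j  # Eq. (7): multi-modal content query
    return out
```

In the real model, Γ maps the 768-dimensional BERT embeddings to the decoder channel dimension before the addition; only the IoU test and the masked sum are shown faithfully here.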
Method | Model | Caption | Footnote | Formula | List-item | Page-footer | Page-header | Picture | Section-header | Table | Text | Title | mAP
Human [1] | – | 89 | 91 | 85 | 88 | 94 | 89 | 71 | 84 | 81 | 86 | 72 | 83
Faster R-CNN [1] | R101 | 70.1 | 73.7 | 63.5 | 81.0 | 58.9 | 72.0 | 72.0 | 68.4 | 82.2 | 85.4 | 79.9 | 73.4
Mask R-CNN [1] | R101 | 71.5 | 71.8 | 63.4 | 80.8 | 59.3 | 70.0 | 72.7 | 69.3 | 82.9 | 85.8 | 80.4 | 73.5
YOLOv5 [1] | v5x6 | 77.7 | 77.2 | 66.2 | 86.2 | 61.1 | 67.9 | 77.1 | 74.6 | 86.3 | 88.1 | 82.7 | 76.8
Cascade Mask R-CNN | R101 | 73.2 | 75.3 | 66.9 | 83.9 | 61.7 | 71.3 | 75.0 | 70.1 | 85.9 | 87.1 | 81.5 | 75.6
TransDLANet [3] | R101 | 68.2 | 74.7 | 61.6 | 81.0 | 54.8 | 68.2 | 68.5 | 69.8 | 82.4 | 83.8 | 81.8 | 72.3
SwinDocSegmenter [4] | Swin | 83.6 | 64.8 | 62.3 | 82.3 | 65.1 | 66.4 | 84.7 | 66.5 | 87.4 | 88.2 | 63.3 | 76.9
DINO [5] | R101 | 71.8 | 78.8 | 72.7 | 85.6 | 63.0 | 76.6 | 74.1 | 72.1 | 87.3 | 87.6 | 85.1 | 77.7
VSR [6] | R101 | 72.6 | 72.1 | 73.8 | 86.2 | 81.8 | 81.3 | 63.1 | 82.5 | 79.4 | 88.4 | 80.7 | 78.4
Ours (Cascade Mask R-CNN) | R101 | 86.0 | 83.6 | 87.1 | 92.8 | 86.7 | 85.6 | 76.3 | 89.1 | 86.4 | 92.7 | 87.8 | 86.7
Ours (DINO) | R101 | 85.3 | 86.7 | 89.8 | 93.6 | 90.3 | 91.0 | 78.4 | 90.7 | 87.4 | 93.9 | 91.3 | 89.0

Table 1: Performance comparisons on DocLayNet. Bold indicates SOTA and underline indicates the second best. ([1] Pfitzmann et al. 2022; [2] He et al. 2017; [3] Hiuyi et al. 2023; [4] Ayan et al. 2023; [5] Hao et al. 2023; [6] Peng et al. 2021)

Our experimentation has shown that a summation fusion in the late fusion phase yields better results than the gate mechanism used in the early fusion phase. This difference can be attributed to the fact that the textual features in the early fusion phase are extracted by the backbone, while the textual features in the late fusion are provided directly by the pre-trained language model.

Two-Stage Detectors

In contrast to end-to-end detectors, which employ a Transformer encoder-decoder and learnable queries to generate candidate bounding boxes, two-stage detectors use the RPN to generate candidate bounding boxes called Regions of Interest (ROIs), as shown in Fig. 3 (b).
After obtaining the ROIs, two-stage detectors extract the features using RoIAlign and feed them into the R-CNN network for further regression on the offsets of the ROIs. The multi-modal ROI feature is obtained as follows:

RF_i = RF_i + λ_2 E_j    (8)

where RF_i is the feature of the i-th ROI and λ_2 is a hyper-parameter controlling the trade-off between the two modalities. E_j is the corresponding block-level textual feature, obtained by the IoU matching and block-level textual feature transformation of Eq. (5) and Eq. (6). Finally, we send the multi-modal ROI features to the R-CNN head for better categorisation and more precise regression.

Experimental Results

Datasets

We evaluate the effectiveness of our method on three layout analysis datasets: PubLayNet (Zhong, Tang, and Jimeno Yepes 2019), DocLayNet (Pfitzmann et al. 2022), and M6Doc (Hiuyi et al. 2023).

PubLayNet PubLayNet is a widely used dataset that contains 360,000 document images. As all images in PubLayNet originate from PDF documents, word-level OCR annotations can be extracted with PDFMiner (Shinyama 2015). PubLayNet is a physical layout analysis dataset that focuses on scientific articles and only classifies the basic units of document images into 5 categories: Text, Title, List, Table, and Figure.

DocLayNet DocLayNet is a recently released logical layout analysis dataset that focuses on complex, challenging, and diverse layouts. It contains 80,863 manually annotated pages with sentence-level OCR annotations. DocLayNet mainly includes 6 scenarios: Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents, and Government. It distinguishes eleven layout categories: Caption, Footnote, Formula, List-item, Page-footer, Page-header, Picture, Section-header, Table, Text, and Title.
M6Doc M6Doc is a newly released logical layout analysis dataset that includes 9,080 document images from 7 scenarios: Scientific articles, Textbooks, Books, Test papers, Magazines, Newspapers, and Notes. M6Doc contains PDF documents as well as scanned and photographed documents, and we obtain sentence-level OCR annotations using an OCR engine. M6Doc is the first dataset to consider both the commonality and the specificity of documents; it distinguishes 74 categories, including QR code, advertisement, figure, algorithm, etc.

Evaluation Metric and Implementation Details

To measure the performance of document layout analysis models, we use the mean Average Precision (mAP) @ IoU [0.50:0.95:0.05] metric, which is commonly used in the object detection task. In the main experiments, we employ DINO (Hao et al. 2023) and Cascade Mask R-CNN (Cai and Vasconcelos 2018) as representative end-to-end and two-stage detectors, respectively. We use ResNet-101 (Kaiming et al. 2016) with FPN (Tsung-Yi et al. 2017) to extract features. Considering that both DocLayNet and M6Doc contain non-English texts, we use BERT-Base-Multilingual as the language model and load the pre-trained weights provided by HuggingFace1. We also load the pre-trained weights of the DINO and Cascade Mask R-CNN detectors from the COCO 2017 dataset (Tsung-Yi et al. 2014) for initialisation.
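The metric above, mAP @ IoU [0.50:0.95:0.05], is the per-class AP averaged over the ten IoU thresholds from 0.50 to 0.95 in steps of 0.05; a minimal sketch, assuming the per-threshold AP values have already been computed by a standard evaluator:

```python
import numpy as np

def coco_style_map(ap_per_class_per_iou):
    """Average AP over classes and over the 10 IoU thresholds.

    ap_per_class_per_iou: (C, 10) array; one row per class, one column
    per IoU threshold 0.50, 0.55, ..., 0.95.
    """
    ap = np.asarray(ap_per_class_per_iou, dtype=float)
    assert ap.shape[1] == 10  # thresholds 0.50:0.95:0.05
    return float(ap.mean())
```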
1 https://huggingface.co/bert-base-multilingual-cased

Method | Model | AP50 | AP75 | Recall | mAP
SOLOv2 [1] | R101 | 67.5 | 51.4 | 61.5 | 46.8
Faster R-CNN [1] | R101 | 67.8 | 57.2 | 57.2 | 49.0
Mask R-CNN [1] | R101 | 58.4 | 46.2 | 50.8 | 40.1
Cascade Mask R-CNN [1] | R101 | 70.5 | 62.9 | 62.1 | 54.4
HTC [1] | R101 | 74.3 | 67.2 | 68.1 | 58.2
SCNet [1] | R101 | 73.5 | 65.1 | 67.3 | 56.1
Deformable DETR [1] | R101 | 76.8 | 63.4 | 75.2 | 57.2
QueryInst [1] | R101 | 67.1 | 58.1 | 71.0 | 51.0
ISTR [1] | R101 | 80.8 | 70.8 | 73.2 | 62.7
TransDLANet [1] | R101 | 82.7 | 72.7 | 74.9 | 64.5
VSR [2] | R101 | 76.2 | 68.8 | 66.4 | 59.9
DINO [3] | R101 | 84.6 | 76.7 | 82.9 | 68.0
Ours (Cascade Mask R-CNN) | R101 | 78.0 | 70.7 | 67.9 | 61.8
Ours (DINO) | R101 | 86.7 | 79.4 | 82.5 | 69.9

Table 2: Performance comparisons on M6Doc. ([1] Hiuyi et al. 2023; [2] Peng et al. 2021; [3] Hao et al. 2023)

For the Cascade Mask R-CNN, we use 10 anchors [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2, 5, 10] to adapt to different input scales. For DINO, we use DINO-4Scale and set the number of queries to 900, following the default settings of DINO. We train our models based on MMDetection (Chen et al. 2019). We adopt the same settings for the models trained on the M6Doc and DocLayNet datasets: Cascade Mask R-CNN uses the SGD optimiser with an initial learning rate of 2e-2 for 36 epochs, with the learning rate decaying to 2e-3 at the 27th epoch and to 2e-4 at the 33rd epoch; DINO uses the AdamW optimiser (Ilya and Frank 2019) with an initial learning rate of 1e-4 for 36 epochs, with the learning rate decaying to 3.3e-5 at the 27th epoch and to 1e-5 at the 33rd epoch. For the PubLayNet dataset, both Cascade Mask R-CNN and DINO are trained for 6 epochs with the same initial learning rates as on DocLayNet, and both learning rates are divided by 10 at the 5th epoch.

Results and Discussion

The performance of all methods on DocLayNet is summarized in Table 1. The first row represents the human performance baseline provided by DocLayNet.
Notably, our model is the first to significantly outperform the human baseline. Our multi-modal models achieve significant improvements over their unimodal counterparts: DINO is improved by 11.3% mAP (from 77.7% to 89.0%), and Cascade Mask R-CNN obtains an 11.1% mAP gain (from 75.6% to 86.7%). It is worth noting that VSR is the only multi-modal model in Table 1 apart from our method, and it still performs better than the other, unimodal detectors. Notably, VSR and our method both demonstrate tremendous improvements in certain categories (Page-header, Page-footer, Section-header) compared to other detectors. Instances in these categories contain semantically distinct elements, so detectors achieve better classification accuracy by integrating textual features; these categories therefore benefit most from multi-modal modeling. Additionally, it is interesting to note that the only category with a worse mAP than the unimodal detectors is Figure, where most instances do not contain text; this category thus does not gain mAP improvements from multi-modal models.

Method | Model | Text | Title | List | Table | Figure | mAP
Faster R-CNN [1] | X101 | 91.0 | 82.6 | 88.3 | 95.4 | 93.7 | 90.2
Mask R-CNN [1] | X101 | 91.6 | 84.0 | 88.6 | 96.0 | 94.9 | 91.0
Cascade Mask R-CNN | R101 | 93.9 | 88.4 | 94.7 | 97.6 | 96.9 | 94.2
UDoc† [3] | R50 | 93.9 | 88.5 | 93.7 | 97.3 | 96.4 | 93.9
DiT† [4] | ViT | 94.4 | 88.9 | 94.8 | 97.6 | 96.9 | 94.5
LayoutLMv3† [5] | ViT | 94.5 | 90.6 | 95.5 | 97.9 | 97.0 | 95.1
StrucTexTv2† [6] | ViT | – | – | – | – | – | 95.5
SwinDocSegmenter [7] | Swin | 94.6 | 87.2 | 93.0 | 97.9 | 97.3 | 93.7
TransDLANet [8] | R101 | 94.3 | 89.2 | 95.2 | 97.2 | 96.6 | 94.5
VSR [9] | X101 | 96.7 | 93.1 | 94.7 | 97.4 | 96.4 | 95.7
DINO [10] | R101 | 94.8 | 89.4 | 97.0 | 98.3 | 97.6 | 95.4
Ours (Cascade Mask R-CNN) | R101 | 94.3 | 88.7 | 95.2 | 97.3 | 96.7 | 94.5
Ours (DINO) | R101 | 95.6 | 89.7 | 96.6 | 98.1 | 97.3 | 95.5

Table 3: Performance comparisons on PubLayNet. ("†" denotes pre-trained methods; [1] Zhong, Tang, and Jimeno Yepes 2019; [2] He et al. 2017; [3] Jiuxiang et al. 2021; [4] Li et al. 2022; [5] Yupan et al. 2022; [6] Yu et al. 2023; [7] Ayan et al. 2023; [8] Hiuyi et al. 2023; [9] Peng et al. 2021; [10] Hao et al. 2023)

As shown in Table 2, traditional two-stage detectors have a recall around 60%, and their mAP is below 60%. In contrast, the recall of end-to-end detectors is around 70%, and their mAP is mostly above 60%. Notably, DINO's recall can reach 82.9% because of its large number of queries, so its mAP is close to 70%. The high correlation between the mAP and Recall metrics is mainly due to several difficulties in the M6Doc dataset, including great variation in input image scale, a complex distribution of data scenarios, and the need to distinguish 74 categories for each instance. These difficulties lead to a low recall of the model in some scenarios, which in turn limits the detection performance. Although our method does not solve the problem of low recall on M6Doc, we can still improve the model's performance at the original recall level and achieve a state-of-the-art result of 69.9% mAP. On the PubLayNet dataset, as presented in Table 3, we also compare pre-trained models, including LayoutLMv3 (Yupan et al. 2022), DiT (Li et al. 2022), UDoc (Jiuxiang et al. 2021), and StrucTexTv2 (Yu et al. 2023). Although they utilize well-pre-trained ViT (Dosovitskiy et al. 2020) backbones, VSR with ResNeXt-101 (Saining et al. 2017) as the backbone achieves the best performance on the PubLayNet dataset. Our proposed method also achieves a comparable result of 95.5% mAP. However, we observed that our method does not significantly outperform DINO itself. We speculate that this is because PubLayNet is a simple physical layout analysis dataset, which only distinguishes five basic categories unrelated to semantic information for scientific articles. Therefore, a good enough unimodal detector such as DINO can perform well on this dataset.
Furthermore, we find that the mAP gain for the Table and Figure categories was not as significant as for other categories after using multi-modal modeling(M2Doc or VSR), The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7238 Method Early Late AP50 AP75 Recall mAP DINO ✔ ✔ 86.7 79.4 82.5 69.9 ✔ 85.4 77.4 82.9 68.5 ✔ 85.3 77.2 82.8 68.4 84.6 76.7 82.9 68.0 Cascade ✔ ✔ 78.0 70.7 67.9 61.8 ✔ 76.0 69.5 67.4 60.5 Mask R-CNN ✔ 76.3 69.3 67.3 60.9 74.9 68.6 65.7 59.7 Table 4: Main ablation results on M6Doc test set Module Strategies AP50 AP75 Recall mAP EarlyConcate 86.5 78.9 82.4 69.4 Sum 86.2 78.8 82.1 69.1 fusion Gate 86.7 79.4 82.5 69.9 LateConcate 85.6 78.1 82.6 69.2 Sum 86.7 79.4 82.5 69.9 fusion Gate 85.5 76.9 82.3 68.5 Table 5: Ablation results on M6Doc test set using DINO with different fusion strategies in two modules. and even exhibited a decline. Such intriguing experimental phenomenon was observed in Tabel 3 and 5. We attribute this phenomenon to the presence of text content within these categories, which can potentially degrade the detection quality of detectors. For instance, the determination of boundaries becomes challenging for multi-modal detectors when dealing with pictures containing overlaid texts. Ablation and Effectiveness Analysis To validate the effectiveness of our proposed two level modality fusion strategy, we conducted ablation studies on the M6Doc test set, and the results are presented in Table 4. To determine whether the early-fusion module indeed improves detector performance, we compared the results with and without the early-fusion module using DINO and Cascade Mask R-CNN detectors, respectively. As shown in Table 4, the removal of the early-fusion led to a 0.5% absolute mAP drop in DINO and a 0.8% drop in Cascade Mask RCNN. The removal also resulted in a drop of approximately 0.8% in AP50 and AP75 for DINO and a 1.0% drop for Cascade Mask R-CNN. These results demonstrate the benefits of the early-fusion. 
Similarly, we verified the effectiveness of the late-fusion following the same process, and we can see the mAP decrease with the removal of the late fusion, which demonstrates the benefits of the late fusion.And when we use both fusion modules, Cascade Mask R-CNN and DINO can get 2.1% and 1.9 % mAP gain respectively. These results indicate that either early-fusion or late-fusion module is beneficial to both detectors and provides a relatively large boost when used together due to the different fusion levels. Table 5 presents a comparison of the performance of early-fusion and late-fusion using three different fusion strategies. The results indicate that the best result is achieved by utilizing the gate mechanism in Eq.(4) for the earlyfusion module and the summation for the late-fusion module. We think this may be attributed to the unique textual feaMethod M2Doc AP50 AP75 Recall mAP DINO 84.6 76.7 82.9 68.0 ✔ 86.7 79.4 82.5 69.9 ∆ +2.1 +2.7 -0.4 +1.9 Cascade 74.9 68.6 65.7 59.7 ✔ 78.0 70.7 67.9 61.8 Mask R-CNN ∆ +3.1 +2.1 +2.2 +2.1 Mask R-CNN 73.2 64.7 63.7 55.9 ✔ 77.5 69.2 66.1 58.8 ∆ +4.3 +4.5 +2.4 +2.9 Faster R-CNN 72.3 64.6 62.6 55.3 ✔ 77.3 68.6 65.1 57.9 ∆ +5.0 +4.0 +2.5 +2.6 Deformable 81.4 70.1 73.8 62.3 ✔ 83.7 72.1 75.1 63.9 DETR ∆ +2.3 +2.0 +1.3 +1.6 Table 6: Comparison between detectors before and after plugging M2Doc on M6Doc test set. Due to the different experimental setting, the baseline results we reproduce are higher than the results provided by M6Doc. ture distributions in two module, as previously mentioned. Pluggablity of M2Doc To further validate the pluggablity of M2Doc, we also combine it with other detectors. As shown in Table 6, we conduct experiments on M6Doc dataset using Mask R-CNN (He et al. 2017), Faster R-CNN (Ren et al. 2015), and Deformable DETR (Xizhou et al. 2021) besides DINO and Cascade Mask R-CNN. 
The experimental settings of Mask R-CNN and Faster R-CNN largely follow those of Cascade Mask R-CNN described above, and Deformable DETR uses its default settings. As shown in Table 6, with M2Doc, all detectors obtain significant improvements across all metrics. These results demonstrate the excellent generality and robustness of M2Doc.

Conclusion

In this paper, we propose an effective and pluggable multi-modal fusion approach, M2Doc, for document layout analysis. M2Doc aims to endow existing unimodal detectors with multi-modal capabilities for DLA tasks. We have demonstrated the broad applicability of M2Doc by implementing it on top of both two-stage and end-to-end detectors. Extensive experiments on three benchmark datasets, DocLayNet, M6Doc, and PubLayNet, validate that M2Doc significantly boosts the performance over baseline unimodal detectors. While promising progress has been made, some limitations persist, such as marginal gains on simple datasets where unimodal methods suffice. Future work can explore adaptive fusion techniques and incorporate structural and semantic relationships between document entities. Nonetheless, we believe M2Doc provides an important step towards developing more unified multi-modal models for advanced document layout understanding.

Acknowledgements

This research is supported in part by NSFC (Grant No.: 61936003) and the Alibaba DAMO Innovative Research Foundation. We thank the support from the Alibaba-South China University of Technology Joint Graduate Education Program.

References

Antonacopoulos, A.; Bridson, D.; Papadopoulos, C.; and Pletschacher, S. 2009. A Realistic Dataset for Performance Evaluation of Document Layout Analysis. In ICDAR, 296–300.
Ayan, B.; Sanket, B.; Josep, L.; and Umapada, P. 2023. SwinDocSegmenter: An End-to-End Unified Domain Adaptive Transformer for Document Instance Segmentation. In ICDAR.
Cai, Z.; and Vasconcelos, N.
2018. Cascade R-CNN: Delving Into High Quality Object Detection. In CVPR, 6154–6162.
Chen, K.; Wang, J.; Pang, J.; Cao, Y.; Xiong, Y.; Li, X.; Sun, S.; Feng, W.; Liu, Z.; Xu, J.; et al. 2019. MMDetection: Open MMLab Detection Toolbox and Benchmark. arXiv preprint arXiv:1906.07155.
Denk, T. I.; and Reisswig, C. 2019. BERTgrid: Contextualized Embedding for 2D Document Representation and Understanding. In Document Intelligence Workshop at NeurIPS.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR.
Grill, J.-B.; Strub, F.; Altché, F.; Tallec, C.; Richemond, P.; Buchatskaya, E.; Doersch, C.; Avila Pires, B.; Guo, Z.; Gheshlaghi Azar, M.; et al. 2020. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. In NeurIPS, 21271–21284.
Hangbo, B.; Li, D.; Songhao, P.; and Furu, W. 2022. BEiT: BERT Pre-Training of Image Transformers. In ICLR.
Hao, Z.; Feng, L.; Shilong, L.; Lei, Z.; Hang, S.; Jun, Z.; Lionel, M. N.; and Heung-Yeung, S. 2023. DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection. In ICLR.
He, K.; Gkioxari, G.; Dollar, P.; and Girshick, R. 2017. Mask R-CNN. In ICCV, 2961–2969.
Hiuyi, C.; Peirong, Z.; Sihang, W.; Jiaxin, Z.; Qiyuan, Z.; Zecheng, X.; Jing, L.; Kai, D.; and Lianwen, J. 2023. M6Doc: A Large-Scale Multi-Format, Multi-Type, Multi-Layout, Multi-Language, Multi-Annotation Category Dataset for Modern Document Layout Analysis. In CVPR, 15138–15147.
Ilya, L.; and Frank, H. 2019. Decoupled Weight Decay Regularization. In ICLR.
Jacob, D.; Ming-Wei, C.; Kenton, L.; and Kristina, T. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL, 4171–4186.
Jiuxiang, G.; Jason, K.; Morariu, V. I.; Handong, Z.; Nikolaos, B.; Rajiv, J.; Ani, N.; and Tong, S. 2021.
Unified Pretraining Framework for Document Understanding. In NeurIPS, 39–50.
Kaiming, H.; Xiangyu, Z.; Shaoqing, R.; and Jian, S. 2016. Deep Residual Learning for Image Recognition. In CVPR, 770–778.
Katti, A. R.; Reisswig, C.; Guder, C.; Brarda, S.; Bickel, S.; Hohne, J.; and Faddoul, J. B. 2018. Chargrid: Towards Understanding 2D Documents. In EMNLP, 4459–4469.
Lee, J.; Hayashi, H.; Ohyama, W.; and Uchida, S. 2019. Page Segmentation using a Convolutional Neural Network with Trainable Co-Occurrence Features. In ICDAR, 1023–1028.
Li, J.; Xu, Y.; Lv, T.; Cui, L.; Zhang, C.; and Wei, F. 2022. DiT: Self-supervised Pre-training for Document Image Transformer. In ACM Multimedia, 3530–3539.
Li, K.; Wigington, C.; Tensmeyer, C.; Zhao, H.; Barmpalios, N.; Morariu, V. I.; Manjunatha, V.; Sun, T.; and Fu, Y. 2020a. Cross-Domain Document Object Detection: Benchmark Suite and Method. In CVPR, 12915–12924.
Li, M.; Xu, Y.; Cui, L.; Huang, S.; Wei, F.; Li, Z.; and Zhou, M. 2020b. DocBank: A Benchmark Dataset for Document Layout Analysis. In ICCL, 949–960.
Li, X.-H.; Yin, F.; and Liu, C.-L. 2020. Page segmentation using convolutional neural network and graphical model. In Document Analysis Systems: 14th IAPR International Workshop, DAS 2020, 231–245. Springer.
Nair, V.; and Hinton, G. E. 2010. Rectified linear units improve restricted boltzmann machines. In ICML, 807–814.
Namboodiri, A. M.; and Jain, A. K. 2007. Document Structure and Layout Analysis. Springer London.
Peng, Z.; Can, L.; Liang, Q.; Zhanzhan, C.; Shiliang, P.; Yi, N.; and Fei, W. 2021. VSR: A Unified Framework for Document Layout Analysis combining Vision, Semantics and Relations. In ICDAR, 115–130.
Pfitzmann, B.; Auer, C.; Dolfi, M.; Nassar, A. S.; and Staar, P. W. J. 2022. DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis. In ACM SIGKDD, 3743–3751.
Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
In NeurIPS, 91–99.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI, 234–241. Springer.
Saining, X.; Ross, G.; Piotr, D.; Zhuowen, T.; and Kaiming, H. 2017. Aggregated Residual Transformations for Deep Neural Networks. In CVPR, 1492–1500.
Shinyama, Y. 2015. PDFMiner: Python PDF Parser and Analyzer. Retrieved on.
Subhajit, M.; Sanket, B.; Siladittya, M.; Ayan, B.; Josep, L.; Saumik, B.; and Umapada, P. 2023. SelfDocSeg: A Self-Supervised Vision-Based Approach towards Document Segmentation. In ICDAR.
Tsung-Yi, L.; Michael, M.; Serge, B.; James, H.; Pietro, P.; Deva, R.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. In ECCV, 740–755.
Tsung-Yi, L.; Piotr, D.; Ross, G.; Kaiming, H.; Bharath, H.; and Serge, B. 2017. Feature Pyramid Networks for Object Detection. In CVPR, 2117–2125.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is All You Need. In NeurIPS, 6000–6010.
Xizhou, Z.; Weijie, S.; Lewei, L.; Bin, L.; Xiaogang, W.; and Jifeng, D. 2021. Deformable DETR: Deformable Transformers for End-to-End Object Detection. In ICLR.
Yang, X.; Yumer, E.; Asente, P.; Kraley, M.; Kifer, D.; and Lee Giles, C. 2017. Learning to extract semantic structure from documents using multimodal fully convolutional neural networks. In CVPR, 5315–5324.
Yu, Y.; Yulin, L.; Chengquan, Z.; Xiaoqiang, Z.; Zengyuan, G.; Xiameng, Q.; Kun, Y.; Junyu, H.; Errui, D.; and Jingdong, W. 2023. StrucTexTv2: Masked Visual-Textual Prediction for Document Image Pre-training. In ICLR.
Yupan, H.; Tengchao, L.; Lei, C.; Yutong, L.; and Furu, W. 2022. LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking. In ACM Multimedia, 4083–4091.
Zhao, Y.; Jiaqi, W.; Yansong, T.; Kai, C.; Hengshuang, Z.; and Torr, P. H. 2022.
LAVT: Language-Aware Vision Transformer for Referring Image Segmentation. In CVPR, 18155–18165.
Zhong, X.; Tang, J.; and Jimeno Yepes, A. 2019. PubLayNet: Largest Dataset Ever for Document Layout Analysis. In ICDAR, 1015–1022.
Multi-View People Detection in Large Scenes via Supervised View-Wise Contribution Weighting Qi Zhang1, Yunfei Gong1, Daijie Chen2,1, Antoni B. Chan3, Hui Huang1* 1College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China 2Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, China 3Department of Computer Science, City University of Hong Kong, Hong Kong SAR, China qi.zhang.opt@gmail.com, gongyunfei2021@email.szu.edu.cn, chendaijie2022@email.szu.edu.cn, abchan@cityu.edu.hk, hhzhiyan@gmail.com Abstract Recent deep learning-based multi-view people detection (MVD) methods have shown promising results on existing datasets. However, current methods are mainly trained and evaluated on small, single scenes with a limited number of multi-view frames and fixed camera views. As a result, these methods may not be practical for detecting people in larger, more complex scenes with severe occlusions and camera calibration errors. This paper focuses on improving multi-view people detection by developing a supervised view-wise contribution weighting approach that better fuses multi-camera information under large scenes. Besides, a large synthetic dataset is adopted to enhance the model’s generalization ability and enable more practical evaluation and comparison. The model’s performance on new testing scenes is further improved with a simple domain adaptation technique. Experimental results demonstrate the effectiveness of our approach in achieving promising cross-scene multi-view people detection performance. Introduction Multi-view people detection (MVD) has been studied to detect people’s locations on the ground of the scenes via synchronized and calibrated multi-cameras, which could be used for many different applications, such as public safety, autonomous driving, etc. 
Recent multi-view people detection methods are mainly based on deep learning: they train convolutional neural networks (CNNs) with synchronized multi-view images as input and a ground-plane occupancy map as output, and they have achieved promising results on existing datasets such as Wildtrack (Chavdarova et al. 2018) and MultiviewX (Hou, Zheng, and Gould 2020).

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: The scene area comparison of CVCS, CityStreet, Wildtrack, and MultiviewX. The scene size of the latter two datasets is much smaller than that of the first two.

However, current DNN-based multi-view people detection methods are trained and evaluated only on single small scenes (see Figure 1) with limited numbers of frames and fixed camera views. These datasets are collected on small scenes, with only hundreds of frames for training and testing and several fixed camera views (7 in Wildtrack and 6 in MultiviewX). In summary, the weaknesses of current methods are threefold: 1) the methods are evaluated on small scenes (about 20m × 20m), while real-world scenes can be much larger, with more severe occlusions and camera calibration errors; 2) the methods are evaluated on datasets containing limited frames and fixed camera views (e.g., 360 frames for training and 40 for testing, and 7 views in the Wildtrack dataset), which cannot validate and compare different methods thoroughly; 3) the methods cannot generalize well to other scenes, since they are trained on the same single scenes and potentially overfit the specific camera arrangement, making them unsuited to novel scenes and camera layouts. These settings in current multi-view detection methods should be adjusted to better validate and compare different multi-view detection methods.
In this paper, we focus on the multi-view people detection task in large scenes (e.g., CVCS and CityStreet, see Figure 1) with more occlusions and camera calibration errors, as well as on the model's generalization ability to novel unseen scenes in testing. We propose the supervised view-wise contribution weighting method to fuse multi-camera information on the scene ground plane based on each view's prediction in the ground-plane space.

Figure 2: The pipeline of the proposed view-wise contribution weighting method, which consists of 4 stages: single-view feature extraction and projection, projected single-view decoding, supervised view-wise contribution weighted fusion, and multi-view feature decoding. First, camera view features are extracted by the shared feature extraction net and then projected to the ground plane. Second, each view's projected feature F_i is fed into a decoder to predict the view's people location map V_i on the ground, with loss ℓ_v, whose ground truth is obtained from the scene ground truth V_s^gt. Third, each view's people location map prediction V_i is fed into a subnet C and then weighted across all camera views to obtain weight maps W_i for multi-view fusion, and the predicted weight maps W_i are used to fuse the multi-view features F_i by weighted summation. Finally, the fused multi-view feature F is decoded to predict the whole scene's people location map, with loss ℓ_s.

As shown in Figure 2, the proposed supervised view-wise contribution weighting MVD model consists of 4 stages: single-view feature extraction and projection, projected single-view decoding, supervised view-wise contribution weighted fusion, and multi-view feature decoding. First, the features of each view are extracted by a shared subnet, to handle possibly different numbers of camera views, and then projected to the ground plane. The projected single-view decoding subnet predicts, in a supervised way, each view's people location map on the ground plane for the people contained in that view, which can be regarded as the contribution of the current view to the final result. These predictions are then fed into a subnet and weighted across all camera views to obtain weight maps for multi-view fusion in the next step, and the predicted weight maps are used to fuse the multi-view features by weighted summation. Finally, the fused multi-view features are decoded to predict the whole scene's people location map. Besides, in the experiments, instead of evaluating multi-view people detection methods on small multi-view datasets, we adopt 2 large multi-view datasets, CityStreet (Zhang and Chan 2019) and CVCS (Zhang, Lin, and Chan 2021), for a more challenging and thorough method comparison and validation. Furthermore, a simple domain adaptation technique is adopted to further improve the model's cross-scene performance on testing scenes. In summary, the main contributions of our paper are as follows.
• To our knowledge, this is the first study on the large-scene multi-view people detection task with better generalization ability to novel unseen testing scenes with different camera layouts.
• We propose a new multi-view people detection method, which uses supervised view-wise contribution weighting for better multi-view feature fusion.
• The proposed method's cross-scene multi-view people detection performance is promising compared to previous methods trained on the same single scenes, extending multi-view people detection to more practical scenarios.

Related Work

Multi-View People Detection

Traditional methods. Multi-view people detection has been studied for dealing with heavy occlusions in crowded scenes. Usually, information from synchronized and calibrated multi-camera views is combined to provide predictions for the whole scene. Early detection methods try to detect each person in the images by extracting hand-crafted features (Viola and Jones 2004; Sabzmeydani and Mori 2007; Wu and Nevatia 2007) and then training a classifier (Joachims 1998; Viola, Jones, and Snow 2005; Gall et al. 2011) on the extracted features. Fleuret et al. (2007) proposed the Probabilistic Occupancy Map (POM) to indicate the probability of people appearing on each grid cell of the scene ground. Traditional methods rely on hand-crafted features and background subtraction preprocessing, which limit their performance and application scenarios.

Deep learning methods. With the development of deep learning, recent learning-based MVD methods have achieved great progress. Chavdarova and Fleuret (2017) proposed to use CNNs for feature extraction and to concatenate multi-view features to predict the occupancy map. However, the features from different camera views are not aligned before fusion in this model, resulting in limited performance. Hou, Zheng, and Gould (2020) used camera calibrations to perform a perspective transformation to the ground for feature fusion and achieved state-of-the-art performance. Later, Song et al. (2021) further improved the performance by using multi-height projection with an attention-based soft selection module for fusing the different height projections. Hou and Zheng (2021) adopted the deformable transformer framework (Zhu et al.
2020) and proposed a multi-head self-attention based multi-view fusion method. Qiu et al. (2022) proposed a data augmentation method that generates random 3D cylinder occlusions on the ground plane to relieve model overfitting. Overall, the existing multi-view people detection methods are trained and evaluated on single small scenes with only hundreds of multi-view frames and several fixed camera views, as in Wildtrack (Chavdarova et al. 2018) and MultiviewX (Hou, Zheng, and Gould 2020). This is not suitable for thoroughly validating and comparing different multi-view people detection methods, let alone for generalizing to novel scenes with different camera layouts or to other, more practical real-world application scenarios. Qiu et al. (2022) noticed the issue and tried to solve it through data augmentation, but still evaluated only on small scenes. Besides, in contrast to SHOT (Song et al. 2021) or MVDeTr (Hou and Zheng 2021), which use self-attention weights, the proposed method estimates the view fusion weights in a supervised way without extra labeling effort, resulting in more stable performance.

Other Multi-View Vision Tasks

Multi-view counting. Multi-camera views can be combined to further improve single-image counting (Cheng et al. 2019a; Huang et al. 2020; Zhang et al. 2022; Cheng et al. 2022, 2019b) performance for large scenes. Similar to multi-view people detection, traditional multi-view counting methods also rely on hand-crafted features and background subtraction techniques (Viola and Jones 2004; Sabzmeydani and Mori 2007; Chan and Vasconcelos 2012; Chen et al. 2012; Paragios and Ramesh 2001; Marana et al. 1998; Lempitsky and Zisserman 2010; Pham et al. 2015; Wang and Zou 2016; Xu and Qiu 2016). These traditional methods' performance is limited by their weak feature representation power and by the foreground/background extraction results.
To deal with the issues of traditional methods, deep learning methods have been explored in this area. Zhang and Chan (2019, 2022b) proposed the first end-to-end DNN-based framework for multi-view crowd counting and a large city-scene multi-view vision dataset, CityStreet. Zhang and Chan (2020, 2022a) proposed to solve the problem in 3D space with 3D feature fusion and 3D density map supervision. Zhang, Lin, and Chan (2021) proposed a large synthetic multi-view dataset, CVCS, to handle the cross-view cross-scene setting, and the method is applied to novel scenes with domain transferring steps. Zheng, Li, and Mu (2021) improved the late fusion model (Zhang and Chan 2019) by introducing the correlation between each pair of views. Zhai et al. (2022) proposed a graph-based multi-view learning model for multi-view counting. Multi-view counting methods mainly focus on predicting crowd density maps on the ground and the people count, but with relatively weak localization ability.

Multi-camera tracking. Multi-camera tracking tracks objects across multiple cameras to deal with occlusions or lighting variations (Iguernaissi et al. 2019). The existing methods can be categorized into centralized methods (overlapped) (Chavdarova et al. 2018; Fleuret et al. 2007; Xu et al. 2016; You and Jiang 2020) and distributed methods (non-overlapped) (Patino and Ferryman 2014; Taj and Cavallaro 2011; Yang et al. 2022). Here, we mainly review centralized methods with overlapping camera views. Centralized methods consist of 3 steps: camera-view people detection/feature extraction, data fusion, and tracking. You and Jiang (2020) followed these steps and proposed a real-time 3D multi-camera tracking method that fuses 2D people location predictions on the ground plane and then tracks each person from the fused ground-plane maps. Nguyen et al. (2022) proposed to match the multi-camera trajectories by solving a global lifted multicut problem.
In summary, model generalization ability has been explored in other multi-view vision tasks, for example by using large synthetic datasets in training. But in the area of multi-view people detection, methods are only evaluated on the same single scenes due to limited data, which reduces the models' generalization potential in real-world application scenarios. And no method has tried estimating the view weights for fusion under the guidance of single-view ground-plane ground truth, which requires no extra labels.

Method

In this section, we describe the proposed supervised view-wise contribution weighting multi-view detection method, which consists of 4 stages (see Figure 2): single-view feature extraction and projection, projected single-view decoding, supervised view-wise contribution weighted fusion, and multi-view feature decoding. We first introduce the whole model's subnets and modules, including the details of the proposed supervised view-wise contribution weighting module. Finally, we describe how we generalize the trained model to novel scenes.

Single-View Feature Extraction and Projection

We choose ResNet (He et al. 2016)/VGG (Simonyan and Zisserman 2014) as the feature extraction backbone of the multi-view people detection model. To handle variable numbers of camera views in the training and testing scenes, the feature extraction subnet is shared across all input camera views. After feature extraction, each view's features are projected to the scene ground plane for further processing via a projection layer that uses the camera calibrations, based on the spatial transformer network (Jaderberg et al. 2015). The projection layer implemented in our model works with variable camera parameters instead of a fixed set, to handle changes in the number of camera views across different scenes.
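The projection step above can be illustrated with a minimal numpy sketch. This is not the paper's implementation (which uses a spatial-transformer-style layer driven by the camera calibration): the ground-to-image homography input `H_g2i` and the nearest-neighbour sampling are simplifying assumptions for illustration, and the returned boolean mask plays the role of the field-of-view mask M_i used later in the method.

```python
import numpy as np

def project_to_ground(feat, H_g2i, ground_hw):
    """Inverse-warp an image-plane feature map onto the ground plane.

    feat:      (H, W) image-plane feature map.
    H_g2i:     3x3 homography mapping ground-plane coords (x, y, 1) to image
               pixel coords (u, v, 1); in the paper this comes from the camera
               calibration, here it is simply an input (an assumption).
    ground_hw: (gh, gw) size of the ground-plane map.
    """
    gh, gw = ground_hw
    ys, xs = np.mgrid[0:gh, 0:gw]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(gh * gw)])   # homogeneous coords
    uvw = H_g2i @ pts
    u = uvw[0] / uvw[2]                                          # image column
    v = uvw[1] / uvw[2]                                          # image row
    out = np.zeros((gh, gw))
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    inside = (0 <= ui) & (ui < feat.shape[1]) & (0 <= vi) & (vi < feat.shape[0])
    out.ravel()[inside] = feat[vi[inside], ui[inside]]           # nearest-neighbour sample
    return out, inside.reshape(gh, gw)                           # warped feature + FoV mask
```

Because the homography is an explicit input, the same function handles different camera parameters per view, matching the requirement that the projection layer work with variable calibrations across scenes.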
Projected Single-View Decoding

We use a subnet to obtain each view's people location prediction on the ground plane from the projected single-camera-view features; it is shared across all input camera views to handle a possibly variable number of views. The supervision for training this decoding subnet is the scene location map restricted to the people that can be seen within the corresponding camera view. Since the decoding result only contains people visible in the field of view of each camera (as shown in Figure 3), the prediction can be used as the confidence of the view in the corresponding regions of the final result. So, we use the single-view ground-plane prediction results to fuse the multi-camera information in the next step. Besides, the projected single-view decoding module also provides an extra constraint on the training of the feature extraction module: the features extracted from the multi-view images should remain effective for single-view decoding after projection.

Figure 3: 'View GT' is the ground truth for each view in projected single-view decoding, i.e., the people occupancy map on the ground that can be seen by the corresponding view, and 'Scene GT' stands for the ground truth of the whole scene of CityStreet. The lines in 'View GT' indicate the field-of-view region of the camera view.

The projected single-view decoding loss ℓ_v is calculated as follows. Denote by n the number of camera views, by i = 0, 1, ..., n-1 the index of each view, and by V_i and V_i^gt the prediction and ground truth for each view, respectively:

\ell_v = \frac{1}{n} \sum_i \| V_i - V_i^{gt} \|_2^2 = \frac{1}{n} \sum_i \| V_i - V_s^{gt} \otimes M_i \|_2^2.   (1)

V_i^{gt} = V_s^{gt} ⊗ M_i means each view's ground truth in the projected single view is the scene-level ground truth V_s^gt multiplied by the view's field-of-view mask M_i on the ground.
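Equation (1) amounts to a masked per-view squared error against the scene-level ground truth. A minimal numpy sketch, assuming the per-view maps are 2D arrays of equal shape (the list-based interface is an illustrative assumption, not the paper's code):

```python
import numpy as np

def single_view_loss(view_preds, scene_gt, masks):
    """Eq. (1): l_v = (1/n) * sum_i || V_i - V_s_gt ⊗ M_i ||_2^2.

    view_preds: list of per-view ground-plane predictions V_i, each (H, W)
    scene_gt:   scene-level ground-truth occupancy map V_s_gt, (H, W)
    masks:      per-view field-of-view masks M_i, (H, W), 1 inside the view
    """
    n = len(view_preds)
    return sum(np.sum((v - scene_gt * m) ** 2)       # squared L2 per view
               for v, m in zip(view_preds, masks)) / n
```

Masking the scene ground truth with M_i means a view is never penalized for people outside its field of view, which is what lets the per-view prediction act as a confidence signal in the fusion step.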
Supervised View-Wise Contribution Weighted Fusion

We propose the supervised view-wise contribution weighted fusion approach for fusing multi-camera information. First, each view's scene ground-plane prediction result V_i is fed into the shared subnet C to predict a weight map Ŵ_i for that camera view. Then, the weight maps {Ŵ_i} for all views are normalized so that, at each pixel of the scene ground-plane map, the weights over all camera views sum to 1; the result is denoted W_i. In addition, the regions that cannot be seen by a camera view are assigned weight 0 under that view, and these regions are excluded from the normalization. Therefore, the view-wise field-of-view mask M_i is multiplied with each camera view's initial weight map Ŵ_i before normalization. The view-wise contribution weight maps are calculated as follows:

\hat{W}_i = C(V_i), \qquad W_i = \frac{\hat{W}_i \otimes M_i}{\sum_i \hat{W}_i \otimes M_i + \sigma},   (2)

where σ is a small value to avoid a zero denominator when a region pixel cannot be seen by any input view.

Figure 4: The domain adaptation approach used in our method for generalizing to novel new scenes.

After that, each camera view's projected features F_i are multiplied with the view-wise contribution weight maps W_i and summed to obtain the scene-level feature representation F = Σ_i F_i ⊗ W_i. To the best of our knowledge, this is the first work in the field that uses the supervised view-wise contribution on the scene ground-plane map as a weighting method for fusing multi-camera view information, which provides more guidance about the people contained in each view. Compared to other weighted methods, SHOT (Song et al.
2021) or MVDeTr (Hou and Zheng 2021), the proposed method is more stable across different datasets (see the experiment section for details).

Multi-View Feature Decoding

After obtaining the fused feature representation F for the multi-camera views, F is fed into a decoder to predict the scene-level people occupancy map V_s on the ground. Note that this decoder is different from the one used for projected single-view decoding, because they target different functions: one decodes each camera view's features, the other the whole scene's feature representation. The MSE loss is also used in multi-view feature decoding, denoted ℓ_s = mse(V_s, V_s^gt). Together with the projected single-view decoding loss ℓ_v, the model's loss can be summarised as ℓ = ℓ_s + λℓ_v, where λ adjusts the relative importance of the two decoding losses during training.

Generalization to New Scenes

Our proposed supervised view-wise contribution weighting method is trained on the large synthetic multi-view people dataset CVCS (Zhang, Lin, and Chan 2021), and can be applied to new scenes with promising results after slight finetuning. To further reduce the large domain gap between the training scenes and new testing scenes, we also use a domain adaptation method (see Figure 4) to improve the performance after finetuning the trained model on the new scenes with limited labeled data. In particular, we add a discriminator to the trained model to reduce the gap between training-scene features and testing-scene features. In the finetuning stage, we first train the model using 5% of the new scene's training set images, and then both the synthetic training images and the new-scene testing images are fed into the proposed model.
Finally, both kinds of features are classified by the discriminator. The loss in the finetuning includes the new-scene multi-view detection loss, the synthetic multi-view detection loss, and the discriminator classification loss. In the experiments, the model's cross-scene multi-view detection performance is promising compared to previous methods trained on the same single scenes, which extends multi-view people detection to more general application scenarios.

Table 1: Comparison of multi-view people datasets. '/' separates training and testing statistics.

Dataset     Frames    Scenes  Resolution  Counts  Views   Area (m)
CVCS        200k/80k  23/8    1920×1080   90-180  60-120  90×80
CityStreet  300/200   1       2704×1520   70-150  3       58×72
Wildtrack   360/40    1       1920×1080   20      7       12×36
MultiviewX  360/40    1       1920×1080   40      6       16×25

Experiments and Results

In this section, we first introduce the datasets used in the experiments and then present the experiment settings, including the comparison methods, the implementation details, and the evaluation metrics. Finally, we show and compare the experiment results, including the multi-view people detection performance on various datasets and the ablation study on the proposed view-wise contribution weighting module.

Datasets

We use 4 datasets for multi-view people detection: CVCS (Zhang, Lin, and Chan 2021), CityStreet (Zhang and Chan 2019), Wildtrack (Chavdarova et al. 2018) and MultiviewX (Hou, Zheng, and Gould 2020), among which the latter 2 are relatively small in scene size (see the dataset comparison in Table 1). CVCS is a synthetic multi-view people dataset containing 31 scenes, of which 23 are for training and the remaining 8 for testing. The scene size varies from about 10m×20m to 90m×80m. Each scene contains 100 multi-view frames. The ground-plane map resolution is 900×800, where each grid cell stands for 0.1m in the real world.
In training, 5 views are randomly selected 5 times per frame of each scene in each iteration, and the same number of views is randomly selected 21 times in evaluation. CityStreet is a real-world city-scene dataset collected around the intersection of a crowded street. The scene size is around 58m×72m, and the ground-plane map resolution is 320×384. Wildtrack is a real-world dataset recorded on the square of a university campus. The ground-plane map resolution is 120×360, where each grid cell stands for 0.1m in the real world. MultiviewX is a synthetic dataset for multi-view people detection. The ground-plane map resolution is 250×160, where each grid cell also stands for 0.1m in the real world. Compared to Wildtrack and MultiviewX, CVCS and CityStreet contain more scenes, more camera views and more images, and are thus more suitable for validating multi-view people detection in more practical environments. Therefore, unlike other methods, we mainly evaluate on the larger datasets CVCS and CityStreet.

Figure 5: The result visualization of the method: camera view input, single-view prediction, view weight map, and the corresponding ground-truth and prediction results.

Experiment Settings

Comparison methods. We compare the proposed view-wise contribution weighting method with several state-of-the-art multi-view people detection methods: MVDet (ECCV 2020) (Hou, Zheng, and Gould 2020), SHOT (ICCV 2021) (Song et al. 2021), MVDeTr (ACM MM 2021) (Hou and Zheng 2021), and 3DROM (ECCV 2022) (Qiu et al. 2022). We run these four latest multi-view people detection methods on the large multi-view people datasets CVCS and CityStreet, using the code released by the corresponding paper authors. We also compare with other methods, such as RCNN (Xu et al. 2016), POM-CNN (Fleuret et al. 2007), DeepMCD (Chavdarova and Fleuret 2017), DeepOcc.
(Baqué, Fleuret, and Fua 2017), and Volumetric (Iskakov et al. 2019), on Wildtrack and MultiviewX.

Implementation details. The proposed model is based on a ResNet/VGG backbone. The layer settings of the feature extraction subnet and of the decoders for projected single-view decoding and multi-view decoding can be found in the supplemental. For the view-wise contribution weighted fusion, the single-view predictions are fed into a 4-layer subnet: [3×3×1×256, 3×3×256×256, 3×3×256×128, 3×3×128×1]. The map classification threshold is 0.4 for all datasets, and the distance threshold is 1m (5 pixels) on CVCS, 2m (20 pixels) on CityStreet, and 0.5m (5 pixels) on MultiviewX and Wildtrack. For model training, a 3-stage schedule is used: first, the 2D counting task is trained as pretraining for the feature extraction subnet; then, the projected single-view decoding subnet is trained after loading the pretrained feature extraction subnet; finally, the projected single-view decoding subnet and the multi-view decoding subnet are trained together, with loss term weight λ = 1. We follow other training settings as in MVDet.

Evaluation metrics. We use 5 metrics to evaluate and compare the multi-view people detection methods: Multiple Object Detection Accuracy (MODA), Multiple Object Detection Precision (MODP), Precision, Recall and F1 score. We first compute true positives (TP), false positives (FP), and false negatives (FN). MODA = 1 − (FP + FN)/(TP + FN) measures detection accuracy. MODP = (Σ(1 − d[d < t]/t))/TP measures the precision of detection, where d is the distance from a detected person point to its ground truth and t is the distance threshold. Precision = TP/(FP + TP), Recall = TP/(TP + FN), and F1 score = 2·Precision·Recall/(Precision + Recall), where the F1 score balances Precision and Recall for detection performance evaluation. Additionally, the rank (Rank) and the average rank (Avg. Rank) of each method's performance on CVCS and CityStreet are also presented to compare the methods' overall performance.

Table 2: Comparison of the multi-view people detection performance on the larger datasets CVCS and CityStreet using 5 metrics. The distance threshold is 1m on CVCS (5 pixels on the ground-plane map) and 2m on CityStreet (20 pixels on the ground-plane map). See results with other distance thresholds in the supplemental. Overall, all previous methods perform worse on the 2 large datasets than on Wildtrack and MultiviewX (see Table 6). The proposed method ranks best among all methods according to the average rank on the 2 datasets.

                          CVCS                                    CityStreet
Method    MODA  MODP  Prec.  Recall  F1    Rank    MODA  MODP  Prec.  Recall  F1    Rank    Avg. Rank
MVDet     36.6  71.0  79.4   49.4    60.9  4       44.6  65.7  79.8   59.8    68.4  5       4.5
SHOT      45.0  77.4  83.6   55.9    67.0  2       53.5  72.4  91.0   59.4    71.8  4       3
MVDeTr    39.8  84.1  95.3   44.9    61.0  3       58.3  74.1  92.8   63.2    75.2  3       3
3DROM     33.9  73.9  79.5   42.2    55.1  5       60.0  70.1  82.5   76.2    79.2  1       3
Ours      46.2  78.4  81.2   59.1    68.4  1       55.0  70.0  81.4   71.2    76.0  2       1.5

Table 3: The ablation study on whether the proposed supervised view-wise contribution weighted fusion is used or not (with/without) on the CVCS dataset.

Backbone  Method   MODA  MODP  Prec.  Recall  F1
ResNet    With     46.2  78.4  81.2   59.1    68.4
ResNet    Without  36.6  71.0  79.4   49.4    60.9
VGG       With     39.9  71.9  85.7   47.9    61.5
VGG       Without  38.1  77.1  86.3   45.3    59.4

Experiment Results

We show the performance on CVCS and CityStreet in Table 2.
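For illustration, the five evaluation metrics can be computed from point detections as in the following numpy sketch. The paper does not specify its matching procedure, so greedy nearest-neighbour matching within the distance threshold t is an assumption of this sketch, not the authors' exact protocol:

```python
import numpy as np

def detection_metrics(pred_pts, gt_pts, t):
    """MODA/MODP/Precision/Recall/F1 for point detections on the ground plane.

    pred_pts, gt_pts: lists of (x, y) points; t: distance threshold.
    Greedy nearest-neighbour matching within t (an assumption of this sketch).
    """
    unmatched = list(gt_pts)
    tp, modp_sum = 0, 0.0
    for p in pred_pts:
        if not unmatched:
            break
        d = [np.hypot(p[0] - g[0], p[1] - g[1]) for g in unmatched]
        j = int(np.argmin(d))
        if d[j] < t:                        # matched within threshold -> TP
            tp += 1
            modp_sum += 1.0 - d[j] / t      # MODP credit for this match
            unmatched.pop(j)
    fp = len(pred_pts) - tp
    fn = len(gt_pts) - tp
    gt_n = tp + fn                          # number of ground-truth people
    moda = 1.0 - (fp + fn) / gt_n if gt_n else 1.0
    modp = modp_sum / tp if tp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / gt_n if gt_n else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"MODA": moda, "MODP": modp, "precision": precision,
            "recall": recall, "f1": f1}
```

A detection counts as a true positive only if it falls within t of an unmatched ground-truth point, mirroring the per-dataset distance thresholds listed in the implementation details.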
Overall, compared to the results on Wildtrack and MultiviewX (see Table 6), the performance on the large scenes CVCS and CityStreet is much lower. On CVCS, our proposed method achieves the best performance among all compared methods. The proposed method shares the same backbone model with MVDet (Hou, Zheng, and Gould 2020), but our overall performance is better than MVDet's, which shows the effectiveness of the proposed method. SHOT uses an extra multi-height projection and works well when the dataset's calibration errors are relatively small, as in CVCS; it performs much worse on CityStreet, whose larger calibration errors cause extra difficulties for the multi-height fusion. 3DROM is better than our method on CityStreet because, as a data augmentation method, it deals better with the lack of data; however, 3DROM works badly on CVCS, which is already a very large dataset containing various camera and scene variations. On CityStreet, the proposed method (using VGG as the backbone) also achieves the second-best performance according to the F1 score, better than SHOT (Song et al. 2021), MVDeTr (Hou and Zheng 2021) and MVDet (Hou, Zheng, and Gould 2020).

Table 4: The ablation study on whether the view-wise contribution weighted fusion is supervised or unsupervised.

Dataset     Method        MODA  MODP  Prec.  Recall  F1
CVCS        Supervised    46.2  78.4  81.2   59.1    68.4
CVCS        Unsupervised  45.8  73.6  86.7   54.1    66.6
CityStreet  Supervised    55.0  70.0  81.4   71.2    76.0
CityStreet  Unsupervised  49.5  67.1  78.3   68.5    73.1

Table 5: The ablation study on the variable testing camera number (3, 5, 7, 9) of the proposed method on the CVCS dataset, which is trained on 5 camera views.

Views  MODA  MODP  Precision  Recall  F1 score
3      37.1  73.4  70.7       62.1    66.1
5      46.2  78.4  81.2       59.1    68.4
7      50.5  76.6  90.1       56.8    69.7
9      50.3  78.3  92.5       54.7    68.8

In addition, MVDeTr utilizes deformable transformer modules, which are relatively easy to learn on small datasets.
However, on large datasets like CVCS, with a high number of camera views that keep changing during training, it has difficulty stabilizing the weight learning process, which limits its detection performance. Overall, the proposed supervised view-wise contribution weighting method achieves the best average rank (Avg. Rank) among all methods. The reason is that the view-wise ground-plane supervision provides more clues about the people locations in each view, and thus the multi-view fusion is more stable and performs better than other methods. We also show the visualization results on the CVCS dataset in Figure 5, where the first 3 rows are the multi-view inputs, the proposed method's single-view predictions, and the view weight maps, indicating accurate ground locations of people.

Ablation Study

With/without the supervised view-wise contribution weighted fusion. The first ablation study is on the effectiveness of the proposed supervised view-wise contribution weighted fusion. As shown in Table 3, no matter which backbone is used, the model with the supervised view-wise contribution weighted fusion achieves better overall performance than the model without it, which demonstrates the proposed approach's effectiveness.

Table 6: Comparison of the multi-view people detection performance on Wildtrack and MultiviewX using 5 metrics. All comparison methods train and test on Wildtrack or MultiviewX (single scene), while ours is trained on CVCS and finetuned on Wildtrack or MultiviewX with limited labeled data ('Ours (ft)') or additionally with the domain adaptation technique ('Ours (ft+da)').

                                                      Wildtrack                       MultiviewX
Method                                    MODA  MODP  Prec.  Recall  F1      MODA  MODP  Prec.  Recall  F1
RCNN (Xu et al. 2016)                     11.3  18.4  68     43      52.7    18.7  46.4  63.5   43.9    51.9
POM-CNN (Fleuret et al. 2007)             23.2  30.5  75     55      63.5    -     -     -      -       -
DeepMCD (Chavdarova and Fleuret 2017)     67.8  64.2  85     82      83.5    70.0  73.0  85.7   83.3    84.5
DeepOcc. (Baqué, Fleuret, and Fua 2017)   74.1  53.8  95     80      86.9    75.2  54.7  97.8   80.2    88.1
Volumetric (Iskakov et al. 2019)          88.6  73.8  95.3   93.2    94.2    84.2  80.3  97.5   86.4    91.6
MVDet (Hou, Zheng, and Gould 2020)        88.2  75.7  94.7   93.6    94.1    83.9  79.6  96.8   86.7    91.5
SHOT (Song et al. 2021)                   90.2  76.5  96.1   94.0    95.0    88.3  82.0  96.6   91.5    94.0
MVDeTr (Hou and Zheng 2021)               91.5  82.1  97.4   94.0    95.7    93.7  91.3  99.5   94.2    97.8
3DROM (Qiu et al. 2022)                   93.5  75.9  97.2   96.2    96.7    95.0  84.9  99.0   96.1    97.5
Ours (ft)                                 73.9  72.4  86.8   87.2    87.0    81.1  77.2  95.0   85.6    90.1
Ours (ft+da)                              78.9  73.6  88.7   90.4    89.5    83.8  76.5  97.1   86.4    91.4

Supervised/unsupervised view-wise contribution weighted fusion. The second ablation study is on whether the view-wise contribution weighted fusion module is supervised or unsupervised. As shown in Table 4, on both the CVCS and CityStreet datasets, the supervised view-wise contribution weighted fusion achieves better results than the unsupervised one. The reason is that the supervised variant provides extra guidance for each view, which benefits the multi-view fusion results. Note that the supervision for each view is obtained from the scene-level ground truth, so no extra labeling effort is required.

Variable camera number. The next ablation study is on a variable camera number in the testing stage. Generalizing the model to novel scenes requires that it can be applied to a variable number of camera-view inputs, because real testing scenes may contain different numbers of camera views. The proposed method is trained on 5-camera-view inputs on the CVCS dataset (Zhang, Lin, and Chan 2021) and tested on variable camera-view-number inputs, namely 3, 5, 7, and 9 views. Note that the ground truth for each testing setting is the people captured by the corresponding camera views.
As shown in Table 5, as the camera view number increases from 3 to 9, the MODA, MODP, and Precision metrics generally increase, while the Recall metric decreases. The reason is that with more camera views, the model can detect more TP cases with higher accuracy; but more camera views also mean more people need to be detected (the ground-truth people count increases), which causes more FN cases and thus lowers Recall. Overall, though, the F1 score is stable (it decreases slightly), which shows the model is relatively stable across changes in the camera view number.

Generalization to new scenes. We show the cross-scene performance of the proposed method, trained on the large dataset CVCS, on Wildtrack and MultiviewX in Table 6. We first finetune the trained model on Wildtrack and MultiviewX using 5% of the training set images from the new scenes ('Ours (ft)'), then use a domain adaptation approach (Tzeng et al. 2017) to reduce the domain gap between source and target scenes and further improve the performance ('Ours (ft+da)'). From Table 6, 'Ours (ft)' already outperforms 4 comparison methods that use 100% of the training set data and test on the same single scene: RCNN (Xu et al. 2016), POM-CNN (Fleuret et al. 2007), DeepMCD (Chavdarova and Fleuret 2017), and DeepOcc. (Baqué, Fleuret, and Fua 2017). With the domain adaptation approach, the target-source domain gap is reduced and the cross-scene performance is further improved on both datasets. On MultiviewX (with a larger crowd count than Wildtrack), 'Ours (ft+da)' achieves performance close to the state-of-the-art methods MVDet (Hou, Zheng, and Gould 2020) and Volumetric (Iskakov et al. 2019).
Compared to the remaining methods, the proposed method's cross-scene performance with unsupervised domain adaptation ('Ours (ft+da)') is relatively worse; but considering that our method uses only 5% of the target-scene labels and achieves performance very close to state-of-the-art methods that use 100% of the training set data and test on the same single scene, the proposed method's result is still promising.

Discussion and Conclusion

In this paper, we present a novel supervised view-wise contribution weighting approach for multi-view people detection in large scenes. We evaluate its performance on large multi-view datasets, a departure from the typical approach of using small single-scene datasets. We have demonstrated that our proposed method performs better on larger and more complicated scenes, and achieves promising cross-scene multi-view people detection performance compared with existing state-of-the-art techniques trained on single scenes. To our knowledge, this is the first study on the large-scene multi-view people detection task. Our proposed method extends the applicability of multi-view people detection to more practical scenarios, making it a valuable tool for various applications in the fields of computer vision, surveillance, and security.

Limitations: The adopted domain transfer method is simple and is heavily limited by the image style transfer. A stronger domain transfer module could be our future work.

Ethical Statement

We use four datasets, CVCS, CityStreet, MultiviewX, and Wildtrack, in the experiments, among which CVCS and MultiviewX are synthetic datasets and the other 2 are public real-scene datasets.
Acknowledgements This work was supported in part by NSFC (62202312, 62161146005, U21B2023, U2001206), DEGP Innovation Team (2022KCXTD025), CityU Strategic Research Grant (7005665), and Shenzhen Science and Technology Program (KQTD20210811090044003, RCJC20200714114435012, JCYJ20210324120213036). References Baqué, P.; Fleuret, F.; and Fua, P. 2017. Deep occlusion reasoning for multi-camera multi-target detection. In Proceedings of the IEEE International Conference on Computer Vision, 271–279. Chan, A. B.; and Vasconcelos, N. 2012. Counting people with low-level features and Bayesian regression. IEEE Transactions on Image Processing, 21(4): 2160–2177. Chavdarova, T.; Baqué, P.; Bouquet, S.; Maksai, A.; Jose, C.; Bagautdinov, T.; Lettry, L.; Fua, P.; Van Gool, L.; and Fleuret, F. 2018. WILDTRACK: A Multi-camera HD Dataset for Dense Unscripted Pedestrian Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5030–5039. Chavdarova, T.; and Fleuret, F. 2017. Deep multi-camera people detection. In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), 848–853. IEEE. Chen, K.; Chen, L. C.; Gong, S.; and Xiang, T. 2012. Feature mining for localised crowd counting. In BMVC. Cheng, Z.-Q.; Dai, Q.; Li, H.; Song, J.; Wu, X.; and Hauptmann, A. G. 2022. Rethinking spatial invariance of convolutional networks for object counting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19638–19648. Cheng, Z.-Q.; Li, J.-X.; Dai, Q.; Wu, X.; and Hauptmann, A. G. 2019a. Learning spatial awareness to improve crowd counting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6152–6161. Cheng, Z.-Q.; Li, J.-X.; Dai, Q.; Wu, X.; He, J.-Y.; and Hauptmann, A. G. 2019b. Improving the Learning of Multicolumn Convolutional Neural Network for Crowd Counting. In Proceedings of the 27th ACM International Conference on Multimedia, 1897–1906.
Fleuret, F.; Berclaz, J.; Lengagne, R.; and Fua, P. 2007. Multicamera people tracking with a probabilistic occupancy map. IEEE transactions on pattern analysis and machine intelligence, 30(2): 267–282. Gall, J.; Yao, A.; Razavi, N.; Van Gool, L.; and Lempitsky, V. 2011. Hough forests for object detection, tracking, and action recognition. IEEE transactions on pattern analysis and machine intelligence, 33(11): 2188–2202. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Hou, Y.; and Zheng, L. 2021. Multiview detection with shadow transformer (and view-coherent data augmentation). In Proceedings of the 29th ACM International Conference on Multimedia, 1673–1682. Hou, Y.; Zheng, L.; and Gould, S. 2020. Multiview detection with feature perspective transformation. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VII 16, 1–18. Springer. Huang, S.; Li, X.; Cheng, Z.-Q.; Zhang, Z.; and Hauptmann, A. 2020. Stacked pooling for boosting scale invariance of crowd counting. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2578–2582. IEEE. Iguernaissi, R.; Merad, D.; Aziz, K.; and Drap, P. 2019. People tracking in multi-camera systems: a review. Multimedia Tools and Applications, 78: 10773–10793. Iskakov, K.; Burkov, E.; Lempitsky, V.; and Malkov, Y. 2019. Learnable Triangulation of Human Pose. In ICCV. Jaderberg, M.; Simonyan, K.; Zisserman, A.; et al. 2015. Spatial transformer networks. In Advances in neural information processing systems, 2017–2025. Joachims, T. 1998. Text categorization with support vector machines: Learning with many relevant features. In European conference on machine learning, 137–142. Springer. Lempitsky, V.; and Zisserman, A. 2010. Learning to count objects in images. 
In Advances in Neural Information Processing Systems, 1324–1332. Marana, A.; Costa, L. d. F.; Lotufo, R.; and Velastin, S. 1998. On the efficacy of texture analysis for crowd monitoring. In International Symposium on Computer Graphics, Image Processing, and Vision, 354–361. IEEE. Nguyen, D. M.; Henschel, R.; Rosenhahn, B.; Sonntag, D.; and Swoboda, P. 2022. LMGP: Lifted Multicut Meets Geometry Projections for Multi-Camera Multi-Object Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8866–8875. Paragios, N.; and Ramesh, V. 2001. A MRF-based approach for real-time subway monitoring. In Computer Vision and Pattern Recognition, volume 1. IEEE. Patino, L.; and Ferryman, J. 2014. Multicamera trajectory analysis for semantic behaviour characterisation. In 2014 11th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 369–374. IEEE. Pham, V.-Q.; Kozakaya, T.; Yamaguchi, O.; and Okada, R. 2015. Count forest: Co-voting uncertain number of targets using random forest for crowd density estimation. In Proceedings of the IEEE International Conference on Computer Vision, 3253–3261. Qiu, R.; Xu, M.; Yan, Y.; Smith, J. S.; and Yang, X. 2022. 3D Random Occlusion and Multi-layer Projection for Deep Multi-camera Pedestrian Localization. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part X, 695–710. Springer. Sabzmeydani, P.; and Mori, G. 2007. Detecting pedestrians by learning shapelet features. In IEEE Conference on Computer Vision and Pattern Recognition, 1–8. IEEE. Simonyan, K.; and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Song, L.; Wu, J.; Yang, M.; Zhang, Q.; Li, Y.; and Yuan, J. 2021. Stacked homography transformations for multi-view pedestrian detection.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6049–6057. Taj, M.; and Cavallaro, A. 2011. Distributed and decentralized multicamera tracking. IEEE Signal Processing Magazine, 28(3): 46–58. Tzeng, E.; Hoffman, J.; Saenko, K.; and Darrell, T. 2017. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7167–7176. Viola, P.; and Jones, M. J. 2004. Robust real-time face detection. International journal of computer vision, 57(2): 137– 154. Viola, P.; Jones, M. J.; and Snow, D. 2005. Detecting pedestrians using patterns of motion and appearance. International Journal of Computer Vision, 63(2): 153–161. Wang, Y.; and Zou, Y. 2016. Fast visual object counting via example-based density estimation. In IEEE International Conference on Image Processing (ICIP), 3653–3657. IEEE. Wu, B.; and Nevatia, R. 2007. Detection and tracking of multiple, partially occluded humans by bayesian combination of edgelet based part detectors. International Journal of Computer Vision, 75(2): 247–266. Xu, B.; and Qiu, G. 2016. Crowd density estimation based on rich features and random projection forest. In IEEE Winter Conference on Applications of Computer Vision (WACV), 1–8. IEEE. Xu, Y.; Liu, X.; Liu, Y.; and Zhu, S. C. 2016. Multi-view People Tracking via Hierarchical Trajectory Composition. In Computer Vision and Pattern Recognition, 4256–4265. Yang, S.; Ding, F.; Li, P.; and Hu, S. 2022. Distributed multicamera multi-target association for real-time tracking. Scientific Reports, 12(1): 11052. You, Q.; and Jiang, H. 2020. Real-time 3d deep multicamera tracking. arXiv preprint arXiv:2003.11753. Zhai, Q.; Yang, F.; Li, X.; Xie, G.-S.; Cheng, H.; and Liu, Z. 2022. Co-Communication Graph Convolutional Network for Multi-View Crowd Counting. IEEE Transactions on Multimedia. Zhang, J.; Cheng, Z.-Q.; Wu, X.; Li, W.; and Qiao, J.-J. 2022. Crossnet: Boosting crowd counting with localization. 
In Proceedings of the 30th ACM International Conference on Multimedia, 6436–6444. Zhang, Q.; and Chan, A. B. 2019. Wide-Area Crowd Counting via Ground-Plane Density Maps and Multi-View Fusion CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 8297–8306. Zhang, Q.; and Chan, A. B. 2020. 3D Crowd Counting via Multi-View Fusion with 3D Gaussian Kernels. In AAAI Conference on Artificial Intelligence. Zhang, Q.; and Chan, A. B. 2022a. 3D Crowd Counting via Geometric Attention-Guided Multi-view Fusion. International Journal of Computer Vision, 130(12): 3123–3139. Zhang, Q.; and Chan, A. B. 2022b. Wide-area crowd counting: Multi-view fusion networks for counting in large scenes. International Journal of Computer Vision, 130(8): 1938–1960. Zhang, Q.; Lin, W.; and Chan, A. B. 2021. Cross-View Cross-Scene Multi-View Crowd Counting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 557–567. Zheng, L.; Li, Y.; and Mu, Y. 2021. Learning Factorized Cross-View Fusion for Multi-View Crowd Counting. In 2021 IEEE International Conference on Multimedia and Expo (ICME), 1–6. IEEE. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159.
Aligning Geometric Spatial Layout in Cross-View Geo-Localization via Feature Recombination Qingwang Zhang, Yingying Zhu* College of Computer Science and Software Engineering, Shenzhen University, China zhangqingwang2022@email.szu.edu.cn, zhuyy@szu.edu.cn Abstract Cross-view geo-localization holds significant potential for various applications, but drastic differences in viewpoints and visual appearances between cross-view images make this task extremely challenging. Recent works have made notable progress in cross-view geo-localization. However, existing methods either ignore the correspondence between geometric spatial layout in cross-view images or require high costs or strict constraints to achieve such alignment. In response to these challenges, we propose a Feature Recombination Module (FRM) that explicitly establishes the geometric spatial layout correspondences between two views. Unlike existing methods, FRM aligns geometric spatial layout by directly recombining features, avoiding image preprocessing, and introducing no additional computational and parameter costs. This effectively reduces ambiguities caused by geometric misalignments between ground-level and aerial-level images. Furthermore, it is not sensitive to frameworks and applies to both CNN-based and Transformer-based architectures. Additionally, as part of the training procedure, we also introduce a novel weighted (B+1)-tuple loss (WBL) as optimization objective. Compared to the widely used weighted soft margin ranking loss, this innovative loss enhances convergence speed and final performance. Based on the two core components (FRM and WBL), we develop an end-to-end network architecture (FRGeo) to address these limitations from a different perspective. 
Extensive experiments show that our proposed FRGeo not only achieves state-of-the-art performance on cross-view geo-localization benchmarks, including CVUSA, CVACT, and VIGOR, but is also significantly superior or competitive in terms of computational complexity and trainable parameters. Our project homepage is at https://zqwlearning.github.io/FRGeo.
Introduction
The goal of cross-view geo-localization is to determine the geographical location of a ground image (known as a query image) from geo-tagged aerial images (known as reference images) without relying on GPS or other positioning devices. Existing methods for cross-view geo-localization commonly frame the problem as a retrieval task. In practical deployment, the task involves retrieving the reference image that is most similar to the query image and utilizing its location label as the predictive result. This task offers an alternative means for geo-localization in real scenarios, and is particularly crucial in environments where GPS signals are obstructed or perturbed by noise. The potential applications of this task are extensive, encompassing areas such as autonomous driving (Häne et al. 2017; Kim and Walter 2017), robotic navigation (McManus et al. 2014), and 3D reconstruction (Middelberg et al. 2014). Despite the enticing potential for application, the task of cross-view matching presents substantial challenges due to the dramatic changes in viewpoints and visual appearances between ground and aerial images.
*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: Performance comparison on CVUSA R@1. Bubble size indicates the number of trainable parameters. Ours♣ indicates the integration of FRM and WBL into the TransGeo 1-stage, which is a pure Transformer-based method. Ours (FRGeo) achieves the highest R@1 while enjoying a significantly smaller number of trainable parameters and GFLOPs.
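The retrieval formulation described above amounts to nearest-neighbor search in a shared embedding space: the query's ground feature is compared against every geo-tagged aerial feature, and the closest one supplies the location. A minimal sketch with random placeholder features (not real embeddings):

```python
import numpy as np

# Minimal sketch of cross-view retrieval as nearest-neighbor search:
# the predicted location is that of the aerial reference whose feature
# is closest (in L2 distance) to the query's ground feature.
# Features here are random placeholders, not real embeddings.

def retrieve(f_g_query, f_a_refs):
    """Return the index of the closest aerial reference feature."""
    dists = np.linalg.norm(f_a_refs - f_g_query, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
refs = rng.normal(size=(5, 8))               # 5 geo-tagged aerial features
query = refs[3] + 0.01 * rng.normal(size=8)  # a query near reference 3
pred = retrieve(query, refs)
```

The difficulty that the rest of the paper addresses is learning the two feature extractors so that matched ground-aerial pairs actually end up this close.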
Consequently, it is paramount to understand and correspond both image content (appearances and semantics) and geometric spatial layout across views. Considering that the correspondence of geometric spatial layout can be implicitly modeled by models autonomously or guided by external priors, existing methods (Shi et al. 2019, 2020) for aligning the geometric spatial layout between different views achieve alignment by warping aerial images to match ground images. This helps reduce ambiguities caused by geometric misalignments between cross-view images. However, such a method results in obvious distortions in appearance, introduces additional image preprocessing steps, and requires assumptions of spatial alignment to the center and orientation. Other endeavors (Liu and Li 2019) improve performance by introducing orientation information for each pixel through the addition of orientation maps, yet this also introduces strict constraints and increased computational costs. While these methods hold considerable promise, they either entail intricate designs, incurring substantial preprocessing time and computational costs, or impose strict dataset requirements. The presence of the aforementioned issues limits the applicability of these methods, prompting us to seek a low-cost method to minimize cross-view image misalignment, accomplish alignment of geometric spatial layout, and relax strict dataset requirements, thereby enabling wider applications.
Figure 2: Geometric spatial layout correspondence between two views. The context information contained in the same indexed regions is closely related. For example, a building in region ③ of the ground image is also located in region ③ of the aerial view (the red boxes indicate the building).
Contrary to traditional strategies, our method is centered around establishing a clear geometric spatial layout correspondence between two views at the region level, deriving discriminative matching cues from these approximately aligned regions. Specifically, we observe that both ground and aerial images cover the same field of view (FoV, the observed visible region), and they can be naturally divided into 4 regions according to North (0°), East (90°), South (±180°), and West (−90°). We use 4 indexes ①, ②, ③, and ④ to represent these regions, as shown in Figure 2. Since the regions with the same index are different representations of the same FoV under different views, the context information they contain is closely related. Inspired by the above observation, we propose the Feature Recombination Module (FRM). FRM uses the division in Figure 2 to divide the feature maps into different regions and performs spatial average pooling within each region, then recombines the pooled features to obtain the final representations. Significantly, unlike the polar transform or the addition of orientation maps to the network, our method does not need to strictly align the geometric spatial layout between two views at the pixel level. Instead, we adopt a simpler and more flexible alignment method that approximately aligns at the region level, which requires no additional transformations or precise alignment, and is therefore more realistic, more tractable, and more widely applicable (even to datasets without central alignment, e.g., VIGOR). In addition, our method operates directly on feature maps, thereby avoiding any appearance distortion and image preprocessing. Remarkably, it introduces no additional computational or parameter costs, and thanks to its simple design, FRM can be plugged into any CNN or Transformer (Vaswani et al. 2017) architecture. We also delve into the loss, which is one of the crucial parts of the cross-view geo-localization task.
Previous works have widely used the weighted soft margin ranking loss (Hu et al. 2018), which has the limitation of considering only one negative sample during the construction of a triplet, without interacting with the other negative samples in each update. To address this issue, we propose a novel weighted (B + 1)-tuple loss (WBL) as our optimization objective that allows jointly comparing multiple negative samples and introduces a weighting coefficient α. This proposed loss enhances convergence speed and final performance. Extensive experiments demonstrate that our method (named FRGeo, with FRM and WBL as key components) not only achieves state-of-the-art performance but also exhibits significant advantages or competitiveness in terms of computational complexity and trainable parameters, as illustrated in Figure 1. Our main contributions can be summarized as follows:
• We propose a novel Feature Recombination Module (FRM), which explicitly establishes the correspondence of geometric spatial layouts between two views at the region level, to reduce ambiguities caused by geometric misalignments. It has the advantages of requiring no image preprocessing, being lightweight, and being plug-and-play.
• We design a weighted (B + 1)-tuple loss (WBL) as part of the training procedure, enabling multiple negative samples to be pushed away simultaneously, which effectively speeds up convergence and improves performance compared to the traditional weighted soft margin ranking loss.
• The Feature Recombination Geo-localization network (FRGeo) outperforms previously developed deep networks for the cross-view geo-localization task on the CVUSA, CVACT, and VIGOR datasets. Furthermore, FRGeo exhibits a noteworthy advantage or competitiveness in terms of computational complexity and trainable parameters.
Related Work
We roughly categorize existing cross-view geo-localization methods into feature-based and geometry-based methods.
Feature-based Methods
Feature-based methods focus on learning discriminative image representations to differentiate similar images. Workman, Souvenir, and Jacobs (2015) first introduce CNNs to cross-view matching, drawing inspiration from the success of CNNs in computer vision (Krizhevsky, Sutskever, and Hinton 2012). Subsequently, Hu et al. (2018) integrate NetVlad (Arandjelovic et al. 2016) with a dual-branch VGG (Simonyan and Zisserman 2015) backbone network to obtain viewpoint-invariant representations. They also propose a weighted soft margin ranking loss to expedite network training, an optimization objective that has found widespread application in subsequent research. Despite promising results, most of the above feature-based methods rely on models implicitly modeling spatial information and rarely pay enough attention to the importance of explicitly aligning geometric spatial layouts. In this study, we explicitly establish the geometric spatial layout correspondence between views via a Feature Recombination Module, to reduce ambiguities caused by geometric misalignments.
Geometry-based Methods
Geometry-based methods aim to reduce ambiguities caused by geometric misalignments between ground and aerial images. Liu and Li (2019) introduce orientation maps to inject the orientation information of each pixel into the network. Shi et al. (2019) use the polar transform to warp aerial images, aligning the geometric spatial layout of ground-aerial image pairs. Subsequently, the same authors introduce DSM (Shi et al. 2020), which uses a sliding window for the geo-localization of limited-field-of-view ground images. CDE (Toker et al. 2021) combines a GAN (Goodfellow et al. 2014) and SAFA (Shi et al. 2019) for geo-localization and ground image synthesis. GeoDTR (Zhang et al. 2023) extracts geometric layout descriptors from raw features, also proposing layout simulation and semantic data augmentations.
While the above methods have improved performance, many still rely on the polar transform for fine-grained geometric spatial layout alignment, leading to appearance distortions and additional preprocessing. Moreover, these methods exhibit strict dataset requirements, rendering them unsuitable for non-centrally aligned datasets, e.g., VIGOR (Zhu, Yang, and Chen 2021b). Our method, however, aligns at a more macro level, avoiding pixel-level micro geometric alignment. Consequently, it does not demand that data possess strict center-alignment properties, accommodating non-centrally aligned datasets, e.g., VIGOR. Remarkably, benefiting from the model design, our method does not rely on the polar transform and introduces no additional computational or parameter costs. Recently, several methods employing the Transformer as a backbone have emerged. L2LTR (Yang, Lu, and Zhu 2021) explores a hybrid ViT-based model, whereas TransGeo (Zhu, Shah, and Chen 2022) introduces a pure Transformer-based model. These methods exclusively rely on the Transformer to implicitly model spatial information. Nevertheless, our method explicitly aligns the geometric spatial layout across different views, thereby reducing ambiguities caused by geometry misalignments and leading to enhanced performance. Furthermore, in comparison to L2LTR, FRGeo exhibits conspicuous advantages in terms of computational complexity and trainable parameters, all without necessitating the 2-stage training paradigm proposed by TransGeo.
Methodology
Problem Formulation
A set of ground-aerial image pairs is denoted as $\{(I^g_i, I^a_i)\}^N$, where the superscripts $g$ and $a$ denote ground and aerial images, respectively, and $N$ denotes the number of pairs. Each ground-aerial image pair corresponds to a distinct geo-location, where the geo-tags are unknown for ground images $\{I^g_i\}^N$ and known for aerial images $\{I^a_i\}^N$.
In the cross-view geo-localization task, given a query ground image $I^g_q$ with index $q$, $q \in \{1, 2, \dots, N\}$, the objective is to retrieve the optimal matching reference aerial image $I^a_r$, $r \in \{1, 2, \dots, N\}$, to determine the specific geo-location of $I^g_q$. For a given set $\{(I^g_i, I^a_i)\}^N$, we infer the corresponding image representations as $\{(f^g_i, f^a_i)\}^N$. These representations are expected to possess the following property: the distance between matched image pairs is smaller than the distance between unmatched image pairs, expressed as $d(f^g_q, f^a_q) < d(f^g_q, f^a_i)$ for all $i \in \{1, \dots, N\}$, $i \neq q$, where $d(\cdot, \cdot)$ denotes the L2 distance. Consequently, the cross-view geo-localization task can be made explicit as:

$$r = \arg\min_{i \in \{1, \dots, N\}} d(f^g_q, f^a_i) \quad (1)$$

If the retrieval is correct, $r$ equals $q$. For the sake of notational simplicity, we will omit the subscript $i$ in the subsequent sections, except when discussing the loss function.
FRGeo Model
Model Overview. The proposed model (FRGeo) introduces a Siamese neural network composed of two branches: the ground and aerial view branches, as depicted in Figure 3 (a). For a given ground-aerial image pair $(I^g, I^a)$, a preliminary stage involves the extraction of raw features utilizing either a CNN-based or Transformer-based backbone. This extraction obtains $F^g \in \mathbb{R}^{H^g \times W^g \times C}$ and $F^a \in \mathbb{R}^{H^a \times W^a \times C}$, where $H^g$, $W^g$, $H^a$, and $W^a$ correspond to the heights and widths of the raw features from the ground and aerial images, and $C$ denotes the channel dimension of the raw features. Subsequently, the raw features $F^g$ and $F^a$ are processed by the Feature Recombination Module (FRM) to obtain the final image feature representations, $f^g \in \mathbb{R}^{4C}$ and $f^a \in \mathbb{R}^{4C}$. The model parameters are optimized by employing our proposed weighted (B + 1)-tuple loss (WBL). In the following, we describe the core components of FRGeo in detail.
Feature Recombination Module.
The FRM takes the raw features $F^g$ and $F^a$ extracted by the backbone as its inputs and produces the final image feature representations $f^g$ and $f^a$ as outputs. Along the spatial dimensions, the raw feature of each view is divided into 4 regions according to the division method shown in Figure 2, as shown in Figure 3 (b). Let $F^g_{SW}$, $F^g_{WN}$, $F^g_{NE}$, and $F^g_{ES}$ denote the ①, ②, ③, and ④ regions of the ground raw feature $F^g$, and $F^a_{SW}$, $F^a_{WN}$, $F^a_{NE}$, and $F^a_{ES}$ denote the ①, ②, ③, and ④ regions of the aerial raw feature $F^a$, respectively. Their formal representations are:

$$F^g_{SW} = F^g(:,\; 0:W^g/4,\; :) \quad (2)$$
$$F^g_{WN} = F^g(:,\; W^g/4:W^g/2,\; :) \quad (3)$$
$$F^g_{NE} = F^g(:,\; W^g/2:3W^g/4,\; :) \quad (4)$$
$$F^g_{ES} = F^g(:,\; 3W^g/4:W^g,\; :) \quad (5)$$
$$F^a_{SW} = F^a(H^a/2:H^a,\; 0:W^a/2,\; :) \quad (6)$$
$$F^a_{WN} = F^a(0:H^a/2,\; 0:W^a/2,\; :) \quad (7)$$
$$F^a_{NE} = F^a(0:H^a/2,\; W^a/2:W^a,\; :) \quad (8)$$
$$F^a_{ES} = F^a(H^a/2:H^a,\; W^a/2:W^a,\; :) \quad (9)$$

where $/$ denotes integer division. The final image feature representations $f^g$ and $f^a$ are calculated by:

$$f^g = \mathrm{Cat}(P_{avg}(F^g_{SW}), P_{avg}(F^g_{WN}), P_{avg}(F^g_{NE}), P_{avg}(F^g_{ES})) \quad (10)$$
$$f^a = \mathrm{Cat}(P_{avg}(F^a_{SW}), P_{avg}(F^a_{WN}), P_{avg}(F^a_{NE}), P_{avg}(F^a_{ES})) \quad (11)$$

where $\mathrm{Cat}(\cdot, \cdot)$ denotes concatenation and $P_{avg}(\cdot)$ denotes spatial average pooling.
Figure 3: (a) Overview of our proposed FRGeo model. (b) Illustration of the proposed Feature Recombination Module (FRM). (c) Illustration of the proposed Weighted (B + 1)-tuple Loss (WBL).
Optimization Objective. In previous works (Hu et al. 2018; Shi et al. 2019; Yang, Lu, and Zhu 2021; Zhu, Shah, and Chen 2022), the most widely employed loss is the weighted soft margin ranking loss (Hu et al. 2018), which is computed by constructing triplets within each mini-batch. The problem lies in the fact that this loss uses only one negative sample in each update, consequently limiting the effective utilization of information from the other negative samples. This results in slow convergence and suboptimal performance.
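As a concrete illustration, here is a minimal NumPy sketch (assumed shapes and placeholder values, not the paper's implementation) of the FRM recombination of Eqs. (2)–(11), together with a single-anchor version of the multi-negative tuple objective that this paragraph motivates and that Eq. (12) formalizes:

```python
import numpy as np

# NumPy sketch of FRM (Eqs. 2-11): the ground feature map is cut into
# 4 vertical strips and the aerial map into 4 quadrants; each region is
# spatially average-pooled and the 4 vectors concatenated into R^{4C}.

def frm_ground(Fg):
    Hg, Wg, C = Fg.shape
    strips = [Fg[:, k * Wg // 4:(k + 1) * Wg // 4, :] for k in range(4)]
    return np.concatenate([s.mean(axis=(0, 1)) for s in strips])

def frm_aerial(Fa):
    Ha, Wa, C = Fa.shape
    h, w = Ha // 2, Wa // 2
    quads = [Fa[h:, :w], Fa[:h, :w], Fa[:h, w:], Fa[h:, w:]]  # SW, WN, NE, ES
    return np.concatenate([q.mean(axis=(0, 1)) for q in quads])

# Sketch of the weighted (B+1)-tuple loss for one anchor (cf. Eq. 12):
# the positive distance is pushed below every negative distance at once.
def wbl_single(f_g_anchor, f_a_batch, i, alpha=10.0):
    d = np.linalg.norm(f_a_batch - f_g_anchor, axis=1)
    margins = d[i] - np.delete(d, i)   # d(anchor, pos) - d(anchor, neg_j)
    return float(np.log1p(np.exp(alpha * margins).sum()))

f_g = frm_ground(np.ones((8, 16, 32)))   # -> vector of length 4*32 = 128
f_a = frm_aerial(np.ones((16, 16, 32)))  # -> vector of length 128
rng = np.random.default_rng(1)
g_feats = rng.normal(size=(4, 128))
a_feats = g_feats + 0.05 * rng.normal(size=(4, 128))  # matched pairs close
loss = wbl_single(g_feats[0], a_feats, i=0)           # near zero: pairs align
```

Note the single log-sum over all in-batch negatives: unlike a triplet loss, every negative contributes a gradient in each update, which is the property the WBL construction exploits.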
Drawing inspiration from the work of Sohn (2016), we propose the weighted (B + 1)-tuple loss (WBL). WBL constructs (B + 1)-tuples, thus pushing away all of the other B − 1 negative samples from the anchor sample within the mini-batch during each update, as depicted in Figure 3 (c). The formulation of WBL is provided below. For a set of mini-batch samples $\{(I^g_i, I^a_i)\}^B$, the corresponding collections of image feature representations are $\{(f^g_i, f^a_i)\}^B$, where $B$ denotes the number of pairs in the mini-batch. When $f^g_i$ is chosen as the anchor sample, the corresponding positive sample is $f^a_i$, while the set of negative samples is $\{f^a_j\}^B_{j \neq i}$. Within each mini-batch, it is feasible to construct $2B$ (B + 1)-tuples. To improve the convergence rate, we introduce a weighting coefficient $\alpha$ applied to $d(f^g_i, f^a_i) - d(f^g_i, f^a_j)$, resulting in WBL, which serves as our optimization objective:

$$\mathcal{L}_{WBL}(f^g_i, f^a_i, \{f^a_j\}^B_{j \neq i}) = \log\Big(1 + \sum_{j=1, j \neq i}^{B} \exp\big(\alpha\,(d(f^g_i, f^a_i) - d(f^g_i, f^a_j))\big)\Big) \quad (12)$$

where $d(\cdot, \cdot)$ denotes the L2 distance. The loss in a mini-batch can be calculated by the following equation:

$$\mathcal{L}(\{(f^g_i, f^a_i)\}^B) = \frac{1}{2B} \sum_{c \in C} \sum_{i=1}^{B} \mathcal{L}_{WBL}(f^c_i, f^{C-c}_i, \{f^{C-c}_j\}^B_{j \neq i}) \quad (13)$$

where $C$ denotes the set of view superscripts $\{g, a\}$ and $C - c$ denotes the view other than $c$.
Experiment
Datasets and Experimental Settings
Datasets. We evaluate our method on three public cross-view geo-localization datasets: CVUSA (Zhai et al. 2017), CVACT (Liu and Li 2019), and VIGOR (Zhu, Yang, and Chen 2021b). CVUSA and CVACT support standard and fine-grained cross-view geo-localization, both of which are one-to-one retrievals; VIGOR supports beyond one-to-one retrieval, i.e., one-to-many retrieval.
• CVUSA contains 35,532 image pairs for training and 8,884 image pairs for testing. This dataset consists of images mainly collected in suburban areas.
• CVACT provides 35,532 image pairs for training and 8,884 image pairs for validation (CVACT val).
It also provides 92,802 image pairs to support fine-grained city-scale geo-localization (CVACT test). These images cover the urban area (Canberra) densely.
• VIGOR comprises 105,214 ground images and 90,618 aerial images, and assumes that the query ground images can belong to arbitrary locations in the target area without center-aligned settings. We follow the setting of VIGOR with both Same-area and Cross-area protocols.
Evaluation Metrics. Following previous works (Hu et al. 2018; Liu and Li 2019; Shi et al. 2019, 2020), we utilize the R@K, K = {1, 5, 10, 1%} metrics to evaluate our model, which represent the probability of correct matches among the top-K retrieved results. Additionally, for VIGOR, we report the hit rate, which denotes the probability that the top-1 retrieved reference image covers the query image.
Implementation Details. We employ ConvNeXt-T (Liu et al. 2022) as the backbone with off-the-shelf parameters pre-trained on ImageNet-1K (Deng et al. 2009). α is set to 10 in Equation (12). We train the model on an NVIDIA V100 server with the AdamW (Loshchilov and Hutter 2017) optimizer.
Comparison with State-of-the-art Methods
We compare our method with 8 state-of-the-art methods on the CVUSA and CVACT datasets, including SAFA (Shi et al. 2019), DSM (Shi et al. 2020), CDE (Toker et al. 2021), L2LTR (Yang, Lu, and Zhu 2021), TransGeo (Zhu, Shah, and Chen 2022), SEH (Guo et al. 2022), and GeoDTR (Zhang et al. 2023). On the VIGOR dataset, we compare our method with 5 state-of-the-art methods, including Siamese-VGG (Zhu, Yang, and Chen 2021a), SAFA, SAFA+Mining (Zhu, Yang, and Chen 2021b), VIGOR (Zhu, Yang, and Chen 2021b), and TransGeo. In the main paper, we evaluate the performance of our model on three tasks: standard cross-view geo-localization, fine-grained cross-view geo-localization, and beyond one-to-one retrieval.
Standard Cross-view Geo-localization.
We first evaluate our model on the standard cross-view geo-localization task. The results on the CVUSA and CVACT val datasets are shown in Table 1 and 2, respectively. The findings lead to the results that, in comparison to previous works, FRGeo achieves state-of-the-art or competitive performance. Remarkably, even without resorting to polar transform, FRGeo outperforms methods employing it. This highlights the benefit of our method in aligning geometric spatial layouts. Furthermore, we propose FRM and WBL can be seamlessly integrated into the TransGeo 1-stage, surpassing the raw TransGeo on metrics such as R@1, R@5, and R@10, and obtaining comparable performance in R@1%. This demonstrates that FRM and WBL are pluggable and not only applicable to CNN-based models, but also can significantly improve the performance of Transformer-based models. Fine-grained Cross-view Geo-localization. In order to thoroughly evaluate the representational capacity of the model, we conducte a comprehensive evaluation of our method in the fine-grained cross-view geo-localization task. Specifically, we compare FRGeo with state-of-the-art methods on the challenging large-scale CVACT test dataset - viz. 10× bigger than CVACT validation set, which is fully GPStagged for accurate localization. The results are shown in Table 2. Furthermore, we also report the experimental results Method R@1 R@5 R@10 R@1% SAFA 81.15% 94.23% 96.85% 99.49% SAFA† 89.84% 96.93% 98.14% 99.64% DSM† 91.93% 97.50% 98.54% 99.67% CDE† 92.56% 97.55% 98.33% 99.57% L2LTR 91.99% 97.68% 98.65% 99.75% L2LTR† 94.05% 98.27% 98.99% 99.67% TransGeo 94.08% 98.36% 99.04% 99.77% SEH† 95.11% 98.45% 99.00% 99.78% GeoDTR 93.76% 98.47% 99.22% 99.85% GeoDTR† 95.43% 98.86% 99.34% 99.86% Ours♣ 95.52% 98.66% 99.13% 99.74% Ours 97.06% 99.25% 99.47% 99.85% Table 1: Comparisons between FRGeo (Ours) and state-ofthe-art methods on the CVUSA dataset. † indicates applying polar transform to aerial images. 
Ours♣ indicates FRM and WBL integrated into the 1-stage TransGeo. Best and second-best results are shown in bold and underlined, respectively.

of integrating FRM and WBL with the 1-stage TransGeo, arriving at conclusions consistent with standard cross-view geo-localization. In comparison with all previous works, FRGeo achieves state-of-the-art or competitive performance, which further demonstrates the superiority of our method.
Beyond One-to-one Retrieval. The beyond one-to-one retrieval task is performed on the recently introduced VIGOR dataset. VIGOR assumes that query images can belong to arbitrary locations in the target area and is thus not spatially aligned to the center, making it a more complex and realistic benchmark. Many existing one-to-one retrieval methods falter in this setting; our method, however, still performs well. As shown in Table 3, our proposed method outperforms the competing methods by a substantial margin. For the Same-area and Cross-area evaluation protocols, the R@1 of our method reaches 71.26% (+9.78%) and 37.54% (+18.55%), respectively, i.e., relative improvements of 15.91% and 97.68%. These results demonstrate the powerful learning ability and wide applicability of our method, as well as its robustness to cross-distribution shifts and its advantage in handling datasets that are not center-aligned.
Computational Costs
In Figure 1, the proposed method is compared with 5 state-of-the-art methods in terms of computational complexity and trainable parameters. Our method uses the fewest GFLOPs, less than one-third of those of SAFA, DSM, L2LTR, and GeoDTR. This implies that our method holds the potential for faster processing and higher efficiency in practical applications. In terms of trainable parameters, our method is also competitive, especially when compared with L2LTR.
It is important to emphasize that our method not only achieves state-of-the-art performance but also has the lowest computational complexity and a competitive number of trainable parameters. These experimental results reflect the powerful practical value (lighter and more accurate) of our method.

Method     | CVACT val                       | CVACT test
           | R@1     R@5     R@10    R@1%    | R@1     R@5     R@10    R@1%
SAFA         78.28%  91.60%  93.79%  98.15%
SAFA†        81.03%  92.80%  94.84%  98.17%    55.50%  79.94%  85.08%  94.49%
DSM†         82.49%  92.44%  93.99%  97.32%    35.63%  60.07%  69.10%  84.75%
CDE†         83.28%  93.57%  95.42%  98.22%    61.29%  85.13%  89.14%  98.32%
L2LTR        83.14%  93.84%  95.51%  98.40%    58.33%  84.23%  88.60%  95.83%
L2LTR†       84.89%  94.59%  95.96%  98.37%    60.72%  85.85%  89.88%  96.12%
TransGeo     84.95%  94.14%  95.78%  98.37%
SEH†         84.75%  93.97%  95.46%  98.11%
GeoDTR       85.43%  94.81%  96.11%  98.26%    62.96%  87.35%  90.70%  98.61%
GeoDTR†      86.21%  95.44%  96.72%  98.77%    64.52%  88.59%  91.96%  98.74%
Ours♣        88.60%  95.35%  96.26%  98.13%    69.52%  89.79%  92.36%  98.20%
Ours         90.35%  96.45%  97.25%  98.74%    72.15%  91.93%  94.05%  98.66%

Table 2: Comparison between FRGeo (Ours) and state-of-the-art methods on the CVACT dataset. Notations are the same as in Table 1.

Method        | Same-area                              | Cross-area
              | R@1     R@5     R@10    R@1%    Hit    | R@1     R@5     R@10    R@1%    Hit
Siamese-VGG     18.69%  43.64%  55.36%  97.55%  21.90%    2.77%   8.61%   12.94%  62.64%  3.16%
SAFA            33.93%  58.42%  68.12%  98.24%  36.87%    8.20%   19.59%  26.36%  77.61%  8.85%
SAFA+Mining     38.02%  62.87%  71.12%  97.63%  41.81%    9.23%   21.12%  28.02%  77.84%  9.92%
VIGOR           41.07%  65.81%  74.05%  98.37%  44.71%    11.00%  23.56%  30.76%  80.22%  11.64%
TransGeo        61.48%  87.54%  91.88%  99.56%  73.09%    18.99%  38.24%  46.91%  88.94%  21.21%
Ours            71.26%  91.38%  94.32%  99.52%  82.41%    37.54%  59.58%  67.34%  94.28%  40.66%

Table 3: Comparison between FRGeo (Ours) and state-of-the-art methods on the VIGOR dataset, including Same-area and Cross-area protocols. Hit denotes the hit rate (Zhu, Yang, and Chen 2021b). Notations are the same as in Table 1.
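As a concrete reading of these metrics, R@K and the hit rate can both be computed from a query-reference similarity matrix. The sketch below is ours, not the paper's code: the function names, the assumption that the ground-truth reference of query i sits at index i, and the boolean `covers` matrix are all illustrative.

```python
import numpy as np

def recall_at_k(query_emb, ref_emb, ks=(1, 5, 10)):
    """Fraction of queries whose matching reference (assumed to be
    at the same index) appears among the top-K retrieved results."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = q @ r.T                                  # cosine similarities
    gt = sim[np.arange(len(sim)), np.arange(len(sim))]
    # Rank of the ground-truth reference for each query (0 = best).
    ranks = (sim > gt[:, None]).sum(axis=1)
    return {k: float((ranks < k).mean()) for k in ks}

def hit_rate(sim, covers):
    """VIGOR hit rate: fraction of queries whose top-1 retrieved
    reference actually covers the query location.
    sim: (Q, R) similarities; covers: (Q, R) boolean coverage matrix."""
    top1 = sim.argmax(axis=1)
    return float(covers[np.arange(len(sim)), top1].mean())
```

For R@1%, K is simply set to 1% of the gallery size, e.g., `K = len(ref_emb) // 100`.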
Ablation Study
Effectiveness of Components. To demonstrate the effectiveness of our proposed components (FRM and WBL), we conduct a series of experiments by sequentially integrating them into the Baseline model (i.e., Baseline, Baseline + FRM, Baseline + WBL, Baseline + FRM + WBL). Specifically, the Baseline model adopts a Siamese architecture with ConvNeXt-T (Liu et al. 2022) as the backbone. For a fair comparison, the hyperparameters and training strategy of all subsequent models remain entirely consistent with those of the Baseline. The results on CVUSA, CVACT, and VIGOR are shown in Table 4. Introducing either FRM or WBL yields a remarkable improvement in performance, and the best performance is achieved when both components are applied together, as in our full FRGeo. These experimental results validate the effectiveness of the proposed FRM and WBL. Additionally, we monitor the evolution of the R@1 metric during the first 20 training epochs on the CVUSA and CVACT datasets, as depicted in Figure 4 (Left and Middle). The use of FRM and WBL not only improves performance but also speeds up convergence: after a mere 10 epochs of training, our method achieves performance comparable to, if not better than, other state-of-the-art methods that typically require at least 100 epochs to reach similar results. We attribute this to FRM explicitly aligning the geometric spatial layouts of cross-view images and to the effectiveness of WBL in pushing away multiple negative samples simultaneously.
Few-shot Training. To further verify the effectiveness of FRM and WBL, we conduct a series of few-shot training experiments aimed at training a model that generalizes from a limited set of training samples (He et al. 2020).
To support this task, we randomly select a certain percentage (100%/80%/60%/40%/20%) of samples from the CVUSA training set for training, while keeping the test set unchanged. These subsets are used to train the models, and testing is then performed on the original test set to observe the impact of the training-subset size on both the Baseline model and FRGeo, as shown in Figure 4 (Right). The results consistently show that omitting FRM and WBL impairs performance, and the gap is particularly evident for smaller training subsets. For instance, with only 20% of the samples participating in training, the Baseline drops by as much as 18.84% in R@1 compared with FRGeo. This shows that the combination of FRM and WBL not only improves performance but also enhances generalization ability.

Method                        | CVUSA                           | VIGOR Same-area
                              | R@1     R@5     R@10    R@1%    | R@1     R@5     R@10    R@1%    Hit
Baseline                        94.10%  98.57%  99.22%  99.83%    56.25%  83.98%  88.96%  99.00%  68.93%
Baseline + FRM                  96.70%  99.12%  99.39%  99.84%    67.38%  88.95%  92.38%  99.37%  77.86%
Baseline + WBL                  95.27%  99.03%  99.38%  99.82%    66.58%  90.83%  93.99%  99.51%  80.53%
Baseline + FRM + WBL (Ours)     97.06%  99.25%  99.47%  99.85%    71.26%  91.38%  94.32%  99.52%  82.41%

Method                        | CVACT val                       | CVACT test
Baseline                        84.77%  95.24%  96.66%  98.77%    59.19%  86.59%  90.74%  98.71%
Baseline + FRM                  88.79%  96.20%  97.02%  98.71%    68.35%  90.34%  93.08%  98.69%
Baseline + WBL                  87.42%  96.06%  97.04%  98.75%    65.28%  89.28%  92.49%  98.76%
Baseline + FRM + WBL (Ours)     90.35%  96.45%  97.25%  98.74%    72.15%  91.93%  94.05%  98.66%

Table 4: Effectiveness of the proposed components. FRM and WBL are sequentially integrated into the Baseline model, and performance is reported on the CVUSA, CVACT, and VIGOR datasets. Best results are shown in bold.

Figure 4: Left and Middle: training curves (R@1) on the CVUSA (Left) and CVACT (Middle) datasets; the red dot indicates the performance of FRGeo (Ours) at the 10th training epoch. Right: few-shot training on the CVUSA dataset; 100% indicates using all training samples. Best viewed on screen with zoom-in.

Visualization Analysis
To understand what FRGeo has learned, and to compare the regions that different models focus on, we visualize heatmaps of both the Baseline and the FRGeo model in Figure 5. The Baseline mainly focuses on road information, while FRGeo additionally attends to salient buildings. We argue that such buildings are more discriminative localization cues for the cross-view geo-localization task, since buildings in different ground-aerial image pairs often differ substantially in appearance and layout. We attribute FRGeo's heightened focus on discriminative regions (e.g., salient buildings) to the efficacy of FRM, which aligns the geometric spatial layouts of cross-view images and thereby makes such cues easier for the model to learn.

Figure 5: Heatmap visualization of the Baseline and FRGeo models. Best viewed on screen with zoom-in.

Conclusion
In this paper, we propose a novel and efficient cross-view geo-localization method that aligns the geometric spatial layout between cross-view images by feature recombination, reducing ambiguities caused by geometric misalignment and making discriminative localization cues easier to learn. Moreover, we introduce the weighted (B + 1)-tuple loss and show that it notably accelerates training and improves the performance of our method. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the CVUSA, CVACT, and VIGOR datasets, with significant advantages or competitiveness in computational complexity and trainable parameters.
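The weighted (B + 1)-tuple loss itself is specified in Equation (12) of the paper, which is not reproduced in this excerpt. Purely as an illustration of the idea (B in-batch negatives per query, scaled by the weight α = 10), a soft-margin tuple loss in the spirit of the N-pair loss (Sohn 2016) could be sketched as follows; the exact weighting used by FRGeo may differ, and the function name is ours:

```python
import numpy as np

def weighted_tuple_loss(q, r, alpha=10.0):
    """Illustrative (B+1)-tuple loss: query i is pulled toward
    reference i and pushed away from the other B-1 references in
    the batch, with alpha scaling the similarity gaps."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    r = r / np.linalg.norm(r, axis=1, keepdims=True)
    sim = q @ r.T                            # (B, B) cosine similarities
    pos = np.diag(sim)[:, None]              # matching-pair similarity
    gap = np.exp(alpha * (sim - pos))        # weighted negative terms
    np.fill_diagonal(gap, 0.0)               # drop the positive pair itself
    # log(1 + sum_j exp(alpha * (s_neg_j - s_pos))) averaged over the batch
    return float(np.mean(np.log1p(gap.sum(axis=1))))
```

With well-separated pairs the loss approaches zero, while a mismatched batch is penalized heavily, which is the behavior credited above for pushing away multiple negatives simultaneously.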
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant 62072318, in part by the Key Project of Department of Education of Guangdong Province under Grant 2023ZDZX1016, and in part by Shenzhen Science and Technology Program under Grant 20220810142553001.
References
Arandjelovic, R.; Gronat, P.; Torii, A.; Pajdla, T.; and Sivic, J. 2016. NetVLAD: CNN architecture for weakly supervised place recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5297–5307.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255. IEEE.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Advances in Neural Information Processing Systems, 27.
Guo, Y.; Choi, M.; Li, K.; Boussaid, F.; and Bennamoun, M. 2022. Soft exemplar highlighting for cross-view image-based geo-localization. IEEE Transactions on Image Processing, 31: 2094–2105.
Häne, C.; Heng, L.; Lee, G. H.; Fraundorfer, F.; Furgale, P.; Sattler, T.; and Pollefeys, M. 2017. 3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection. Image and Vision Computing, 68: 14–27.
He, J.; Hong, R.; Liu, X.; Xu, M.; Zha, Z.-J.; and Wang, M. 2020. Memory-augmented relation network for few-shot learning. In Proceedings of the 28th ACM International Conference on Multimedia, 1236–1244.
Hu, S.; Feng, M.; Nguyen, R. M.; and Lee, G. H. 2018. CVM-Net: Cross-view matching network for image-based ground-to-aerial geo-localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7258–7267.
Kim, D.-K.; and Walter, M. R. 2017.
Satellite image-based localization via learned embeddings. In 2017 IEEE International Conference on Robotics and Automation (ICRA), 2073–2080. IEEE.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.
Liu, L.; and Li, H. 2019. Lending orientation to neural networks for cross-view geo-localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5624–5633.
Liu, Z.; Mao, H.; Wu, C.-Y.; Feichtenhofer, C.; Darrell, T.; and Xie, S. 2022. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11976–11986.
Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
McManus, C.; Churchill, W.; Maddern, W.; Stewart, A. D.; and Newman, P. 2014. Shady dealings: Robust, long-term visual localisation using illumination invariance. In 2014 IEEE International Conference on Robotics and Automation (ICRA), 901–906. IEEE.
Middelberg, S.; Sattler, T.; Untzelmann, O.; and Kobbelt, L. 2014. Scalable 6-DOF localization on mobile devices. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part II 13, 268–283. Springer.
Shi, Y.; Liu, L.; Yu, X.; and Li, H. 2019. Spatial-aware feature aggregation for image based cross-view geo-localization. Advances in Neural Information Processing Systems, 32.
Shi, Y.; Yu, X.; Campbell, D.; and Li, H. 2020. Where am I looking at? Joint location and orientation estimation by cross-view matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4064–4072.
Simonyan, K.; and Zisserman, A. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition.
Sohn, K. 2016. Improved deep metric learning with multi-class n-pair loss objective. Advances in Neural Information Processing Systems, 29.
Toker, A.; Zhou, Q.; Maximov, M.; and Leal-Taixé, L. 2021. Coming down to earth: Satellite-to-street view synthesis for geo-localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6488–6497.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. Advances in Neural Information Processing Systems.
Workman, S.; Souvenir, R.; and Jacobs, N. 2015. Wide-area image geolocalization with aerial reference imagery. In Proceedings of the IEEE International Conference on Computer Vision, 3961–3969.
Yang, H.; Lu, X.; and Zhu, Y. 2021. Cross-view geo-localization with layer-to-layer transformer. Advances in Neural Information Processing Systems, 34: 29009–29020.
Zhai, M.; Bessinger, Z.; Workman, S.; and Jacobs, N. 2017. Predicting ground-level scene layout from aerial imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 867–875.
Zhang, X.; Li, X.; Sultani, W.; Zhou, Y.; and Wshah, S. 2023. Cross-view geo-localization via learning disentangled geometric layout correspondence. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 3480–3488.
Zhu, S.; Shah, M.; and Chen, C. 2022. TransGeo: Transformer is all you need for cross-view image geo-localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1162–1171.
Zhu, S.; Yang, T.; and Chen, C. 2021a. Revisiting street-to-aerial view image geo-localization and orientation estimation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 756–765.
Zhu, S.; Yang, T.; and Chen, C. 2021b. VIGOR: Cross-view image geo-localization beyond one-to-one retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3640–3649.
MobileInst: Video Instance Segmentation on the Mobile
Renhong Zhang1*, Tianheng Cheng1*, Shusheng Yang1, Haoyi Jiang1, Shuai Zhang2, Jiancheng Lyu2, Xin Li2, Xiaowen Ying2, Dashan Gao2, Wenyu Liu1, Xinggang Wang1†
1 School of EIC, Huazhong University of Science & Technology
2 Qualcomm AI Research, Qualcomm Technologies, Inc.
Abstract
Video instance segmentation on mobile devices is an important yet very challenging edge AI problem. It mainly suffers from (1) heavy computation and memory costs for frame-by-frame pixel-level instance perception and (2) complicated heuristics for tracking objects. To address these issues, we present MobileInst, a lightweight and mobile-friendly framework for video instance segmentation on mobile devices. Firstly, MobileInst adopts a mobile vision transformer to extract multi-level semantic features and presents an efficient query-based dual-transformer instance decoder for mask kernels and a semantic-enhanced mask decoder to generate instance segmentation per frame. Secondly, MobileInst exploits simple yet effective kernel reuse and kernel association to track objects for video instance segmentation. Further, we propose temporal query passing to enhance the tracking ability for kernels. We conduct experiments on COCO and YouTube-VIS datasets to demonstrate the superiority of MobileInst and evaluate the inference latency on one single CPU core of the Snapdragon 778G Mobile Platform, without other methods of acceleration. On the COCO dataset, MobileInst achieves 31.2 mask AP and 433 ms on the mobile CPU, which reduces the latency by 50% compared to the previous SOTA. For video instance segmentation, MobileInst achieves 35.0 AP and 30.1 AP on YouTube-VIS 2019 & 2021.
Introduction
Deep visual understanding algorithms with powerful GPUs have achieved great success, but their performance is reaching a plateau. Edge AI, which enables massive low-resource computing devices, is becoming increasingly popular.
In this paper, we study a very challenging edge AI task, namely video instance segmentation (VIS) on mobile devices. The goal of VIS (Yang, Fan, and Xu 2019) is to simultaneously identify, segment, and track objects in a video sequence, and it attracts a wide range of applications, e.g., robotics, autonomous vehicles, video editing, and augmented reality. Advances in deep convolutional neural networks and vision transformers have brought great progress in video instance segmentation and tremendous performance (Bertasius and Torresani 2020; Athar et al. 2020; Lin et al. 2021) on GPUs. Nevertheless, many real-world applications require VIS methods to run on resource-constrained devices, e.g., mobile phones, and to inference with low latency. It is challenging but urgent to develop and deploy efficient approaches for VIS on mobile or embedded devices.
Although great progress has been witnessed in the VIS field, several obstacles prevent modern VIS frameworks from being deployed on edge devices with limited resources, such as mobile chipsets. Prevalent methods for video instance segmentation can be categorized into two groups: offline methods (clip-level) and online methods (frame-level).
*These authors contributed equally.
†Xinggang Wang (xgwang@hust.edu.cn) is the corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: Speed-and-Accuracy Trade-off (latency in ms vs. COCO mask AP). We evaluate all models on COCO test-dev, and inference speeds are measured on one mobile CPU, i.e., Snapdragon 778G. The proposed MobileInst outperforms other methods in both speed and accuracy on mobile devices. (Abbreviations: MBv2 = MobileNet-V2, SF = SeaFormer, TF = TopFormer, R = ResNet.)
Offline methods (Wang et al. 2021b; Hwang et al. 2021; Yang et al. 2022; Wu et al. 2022a; Heo et al. 2022; Lin et al. 2021) divide the video into clips, generate instance predictions for each clip, and then associate the instances by matching across clips. However, inference on clips (multiple frames) is infeasible on mobile devices in terms of computation and memory cost. Online methods (Yang, Fan, and Xu 2019; Yang et al. 2021; Cao et al. 2020; Fu et al. 2020; Wu et al. 2022b) forward and predict with frame-level input but require complicated heuristic procedures to associate instances across frames, e.g., NMS, which are inefficient on mobile devices. In addition, recent methods for video instance segmentation tend to employ heavy architectures, especially those based on transformers, which incur large computation and memory costs. Directly scaling down the model size for lower inference latency inevitably causes severe performance degradation, which limits the practical application of recent methods. Designing and deploying video instance segmentation techniques for resource-constrained devices has not been well explored yet, which is non-trivial but crucial for real-world applications. In this paper, we introduce MobileInst to achieve performant video instance segmentation on mobile devices for the first time. MobileInst is efficient and mobile-friendly from two key aspects: (1) lightweight architectures for segmenting objects per frame and (2) simple yet effective temporal modeling for tracking instances across frames. Specifically, MobileInst consists of a query-based dual transformer instance decoder, which exploits object queries to segment objects, updates object queries through global contexts and local details, and then generates the mask kernels and classification scores.
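A single query-update step of such a decoder amounts to cross-attention between object queries and flattened image features. The sketch below uses plain single-head scaled dot-product attention without learned projections, a deliberate simplification of the real decoder layers (the function name is ours):

```python
import numpy as np

def cross_attention_update(queries, features):
    """One simplified cross-attention step: each object query
    aggregates image features weighted by dot-product similarity.
    queries: (N, D); features: (HW, D) -> updated queries (N, D)."""
    d = queries.shape[-1]
    attn = queries @ features.T / np.sqrt(d)         # (N, HW) scores
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)    # softmax over locations
    return queries + attn @ features                 # residual query update
```

In MobileInst this kind of update would run twice per frame: once against the global features XG and once against the pooled local features XL.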
To efficiently aggregate multi-scale features and global contexts for mask features, MobileInst employs a semantic-enhanced mask decoder. The object queries are forced to represent objects in a one-to-one manner, and we discover that mask kernels (generated by object queries) tend to be temporally consistent in consecutive frames, i.e., the same kernel (query) corresponds to the same object in nearby frames, as shown in Fig. 2. Therefore, we exploit simple yet effective kernel reuse and kernel association: objects are tracked by reusing kernels within T-frame clips and associated across clips by kernel cosine similarity. Further, we present temporal query passing to enhance the tracking ability of object queries during training with video sequences. MobileInst can segment and track objects in videos on the fly on mobile devices. The main contributions can be summarized as follows:
• We present a cutting-edge and mobile-friendly framework named MobileInst for video instance segmentation on mobile devices, which, to the best of our knowledge, is the first work targeting VIS on mobile devices.
• We propose a dual transformer instance decoder and a semantic-enhanced mask decoder in MobileInst for efficiently segmenting objects in frames.
• We present kernel reuse and kernel association for tracking objects across frames, which are simple and efficient, along with a temporal training strategy.
• We benchmark the mobile VIS problem by implementing a wide range of lightweight VIS methods for comparison. The proposed MobileInst achieves state-of-the-art mobile VIS performance, i.e., 35.0 AP with 188 ms on YouTube-VIS-2019 (Yang, Fan, and Xu 2019) and 31.2 AP with 433 ms on COCO (Lin et al. 2014) test-dev, when deployed on the CPU of a Snapdragon 778G, without using mixed precision, low-bit quantization, or the built-in hardware accelerator for neural network inference.
Related Work
Instance Segmentation
Most methods address instance segmentation by extending object detectors with mask branches, e.g., Mask R-CNN (He et al. 2017) adds an RoI-based fully convolutional network upon Faster R-CNN (Ren et al. 2017) to predict object masks. (Tian, Shen, and Chen 2020; Bolya et al. 2019; Xie et al. 2020; Zhang et al. 2020) present single-stage methods for instance segmentation. Several methods (Wang et al. 2020a,b; Cheng et al. 2022b) present detector-free instance segmentation for simplicity and efficiency. Recently, query-based detectors (Carion et al. 2020; Zhu et al. 2021; Fang et al. 2021b; Cheng, Schwing, and Kirillov 2021; Fang et al. 2021a) reformulate object detection as set prediction and show promising results on instance segmentation. Considering inference speed, YOLACT (Bolya et al. 2019) and SparseInst (Cheng et al. 2022b) propose real-time methods and achieve a good trade-off between speed and accuracy. However, existing methods are still hard to deploy on mobile devices for practical applications due to their large computation burden and complex post-processing procedures.
Video Instance Segmentation
Offline Methods. Several methods (Wang et al. 2021b; Hwang et al. 2021; Yang et al. 2022; Wu et al. 2022a; Heo et al. 2022) take a video clip as input at once, achieving good performance due to the rich temporal information. VisTR (Wang et al. 2021b) proposes the first transformer-based offline VIS framework. Several works effectively alleviate the computation burden brought by self-attention by building Inter-frame Communication Transformers (Hwang et al. 2021), using messengers to exchange temporal information in the backbone (Yang et al. 2022), or focusing on the temporal interaction of instances between frames (Wu et al. 2022a; Heo et al. 2022). However, clip-level input is difficult to apply on resource-constrained mobile devices.
Online Methods. Previous methods (Yang, Fan, and Xu 2019; Yang et al. 2021; Han et al.
2022) address online VIS by extending CNN-based image segmentation models to handle temporal coherence, using extra embeddings to identify instances and heuristic algorithms to associate them. However, those methods require extra complex post-processing steps, e.g., NMS, which hinders end-to-end inference on mobile devices. Recently, transformer-based models address VIS with simple tracking heuristics over object queries, which are capable of distinguishing instances (Huang, Yu, and Anandkumar 2022). IDOL (Wu et al. 2022b) obtains performance comparable to offline VIS via contrastive learning of instance embeddings across frames. InsPro (He et al. 2023a) and InstanceFormer (Koner et al. 2023) use proposals and reference points, respectively, to establish correspondences between instances for online temporal propagation. Unfortunately, existing works rely on large-scale models like Mask2Former (Cheng et al. 2022a) and Deformable DETR (Zhu et al. 2021), which are beyond the capabilities of many mobile devices.
Figure 2: Reusing kernels for tracking (frames T = 1, 7, 11, 15, 19 shown). We train MobileInst for single-frame instance segmentation on YouTube-VIS 2019, and then apply MobileInst to infer per-frame segmentation and track objects by reusing mask kernels. Upper row: we adopt the mask kernel predicted at frame T = 1 to obtain segmentation results over the video sequence; in a short time window, the reused mask kernels provide accurate segmentation and tracking results. Bottom row: we divide the video into K-frame clips and reuse the mask kernels of every first frame; in addition, we adopt simple yet effective cosine similarity to associate the kernels of consecutive clips (K is set to 3). Reusing kernels with association performs well and is efficient.
Mobile Vision Transformers
Vision transformers (ViT) (Dosovitskiy et al.
2021) have demonstrated immense power in various vision tasks. Subsequent works (Liu et al. 2021b; Wang et al. 2021a; Fang et al. 2022) adopt hierarchical architectures and incorporate spatial inductive biases or locality into vision transformers for better feature representation. Vision transformers tend to be resource-consuming compared to convolutional networks due to multi-head attention (Vaswani et al. 2017). To facilitate mobile applications, MobileViT (Mehta and Rastegari 2022), Mobile-Former (Chen et al. 2021), and TopFormer (Zhang et al. 2022) design mobile-friendly transformers by incorporating efficient transformer blocks into MobileNetV2 (Sandler et al. 2018). Recently, Wan et al. propose SeaFormer (Wan et al. 2023) with efficient axial attention. In this paper, MobileInst aims at video instance segmentation on mobile devices, which is more challenging than designing mobile backbones.
MobileInst
Overall Architecture
We present MobileInst, a video instance segmentation framework tailor-made for mobile devices. Fig. 3 gives an illustration of our framework. Given input images, MobileInst first utilizes a mobile transformer backbone to extract multi-level pyramid features. Following (Zhang et al. 2022; Wan et al. 2023), our backbone network consists of a series of convolutional blocks and transformer blocks. It takes images as inputs and generates both local features (i.e., X3, X4, and X5 in Fig. 3) and global features (i.e., X6). Considering that the global features X6 contain abundant high-level semantic information, we present (1) a dual transformer instance decoder, a query-based transformer decoder operating on the global and local image features that generates the instance predictions, i.e., instance kernels and classification scores; and (2) a semantic-enhanced mask decoder, which employs the multi-scale features from the backbone and a semantic enhancer to enrich the multi-scale features with semantic information.
Dual Transformer Instance Decoder
Queries are good trackers. Detection transformers with a sparse set of object queries (Carion et al. 2020) can get rid of heuristic post-processing for duplicate removal. Previous methods (Yang, Fan, and Xu 2019; Yang et al. 2021) extend dense detectors (Ren et al. 2017; Lin et al. 2017b; Tian et al. 2022) for VIS by designing heuristic matching to associate instances across frames, which is inefficient and hard to optimize on mobile devices. In contrast, as shown in Fig. 2, object queries are good trackers and can be used to associate objects in videos, for three reasons: (1) object queries are trained to segment the foreground of their corresponding visual instance, thus naturally comprising contextualized instance features; (2) object queries are forced to match objects in a one-to-one manner, and duplicate queries are suppressed; (3) an object query tends to be temporally consistent and represents the same instance in consecutive frames, which can be attributed to the temporal smoothness of adjacent frames. Therefore, using object queries as trackers can omit complex heuristic post-processing for associating objects and is more efficient on mobile devices. However, directly attaching transformer decoders like (Carion et al. 2020) to the mobile backbone leads to unaffordable computation budgets for mobile devices, and simply reducing decoder layers or parameters leads to unsatisfactory performance. Striking this balance and designing mobile-friendly architectures is non-trivial and critical for real-world applications. For efficiency, we present the dual transformer instance decoder, which simplifies the prevalent 6-stage decoders in (Carion et al. 2020; Zhu et al. 2021) into 2-stage dual decoders, i.e., the global instance decoder and the local instance decoder, which take the global features XG and local features XL as key and value for updating object queries. We follow (Cheng et al.
2022a) and adopt the sine position embedding for both global and local features. The object queries Q are learnable and randomly initialized.
Figure 3: Overall architecture of MobileInst. MobileInst contains a mobile transformer as the backbone, a dual transformer instance decoder with learnable object queries to obtain object classes and kernels (Sec. ), and a semantic-enhanced mask decoder to obtain single-level features of high semantics (Sec. ) via the semantic enhancers with global features XG (X6 from the mobile transformer). The kernels generated from instance queries and the mask features Xmask directly output the instance masks through a dot product. 'C' in the square denotes a 3 × 3 convolution.
Global and Local Instance Decoder. Adding transformer encoders (Carion et al. 2020; Zhu et al. 2021) for global contexts would incur a significant computation burden. Instead, we adopt the high-level features X6 as global features XG for the query update, which contain high-level semantics and coarse localization. Inspired by recent works (Cheng, Schwing, and Kirillov 2021), we adopt the fine-grained local features, i.e., the mask features Xmask, to compensate for spatial details when generating mask kernels. For efficiency, we downsample the mask features to 1/64 scale through max pooling, i.e., XL = fpool(Xmask), which can preserve more details.
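The fpool downsampling can be realized as non-overlapping max pooling. Assuming the mask features are at 1/8 input resolution, a stride of 8 would give the stated 1/64 overall scale; the stride value and function name here are our inference for illustration, not quoted hyperparameters:

```python
import numpy as np

def pool_local_features(x, stride=8):
    """Non-overlapping max pooling of dense mask features into
    coarse local features (XL = fpool(Xmask)).
    x: (C, H, W) -> (C, H // stride, W // stride)."""
    c, h, w = x.shape
    h2, w2 = h // stride, w // stride
    x = x[:, :h2 * stride, :w2 * stride]          # crop to a multiple of stride
    x = x.reshape(c, h2, stride, w2, stride)
    return x.max(axis=(2, 4))                     # max over each stride x stride block
```

Max pooling keeps the strongest activation in each block, which is why it preserves salient details better than plain strided subsampling.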
The dual transformer instance decoder acquires contextual features from the global features XG and refines the queries with the fine-grained local features XL.

Semantic-enhanced Mask Decoder

Multi-scale features are important for instance segmentation due to the severe scale variation in natural scenes. In addition, generating masks requires high-resolution features for accurate localization and good segmentation quality. To this end, prevalent methods (Cheng, Schwing, and Kirillov 2021; Cheng et al. 2022a) stack multi-scale transformers (Cheng et al. 2022a) as pixel decoders to enhance the multi-scale representation and generate high-resolution mask features. Stacking transformers for high-resolution features leads to large computation and memory costs. Instead of using transformers, (Cheng et al. 2022b) presents an FPN-PPM encoder with 4 consecutive 3 × 3 convolutions as the mask decoder, which also incurs a heavy burden, i.e., 7.6 GFLOPs. For mobile devices, we thus present an efficient semantic-enhanced mask decoder, as shown in Fig. 3. The mask decoder takes the multi-scale features {X3, X4, X5} and outputs single-level high-resolution mask features (at 1/8 resolution). Motivated by FPN (Lin et al. 2017a), we use iterative top-down and bottom-up multi-scale fusion. Furthermore, we present semantic enhancers to strengthen the contextual information of the mask features with the global features X6, as shown in the green blocks of Fig. 3. Then the mask features Xmask and the generated kernels K are fused by M = K · Xmask to obtain the output segmentation masks.

Tracking with Kernel Reuse and Association

As discussed in Sec. , mask kernels (generated by object queries) are temporally consistent due to the temporal smoothness of adjacent frames. Hence, mask kernels can be directly adopted to segment and track the same instance in nearby frames, e.g., 11 frames as shown in Fig. 2.
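Generating masks from kernels, both in the mask decoder (M = K · Xmask) and when reusing a keyframe's kernels on nearby frames, reduces to the same batched dot product. A minimal sketch (shapes illustrative):

```python
import torch

def generate_masks(kernels, mask_feats):
    """Produce instance mask logits from per-query dynamic kernels and
    the shared single-level mask features (M = K . X_mask).

    kernels:    (B, N, C) -- one C-dim kernel per object query
    mask_feats: (B, C, H, W) -- high-resolution mask features X_mask
    returns:    (B, N, H, W) -- one mask logit map per query
    """
    return torch.einsum("bnc,bchw->bnhw", kernels, mask_feats)
```

Because the kernels fully decouple from the mask features, a keyframe's kernels can be applied to the mask features of any nearby frame at no extra cost.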
We thus present efficient kernel reuse, which adopts the mask kernels from the keyframe to generate the segmentation masks for the consecutive T − 1 frames as follows:

$$M^{t} = K^{t} \cdot X^{t}_{mask}, \qquad M^{t+i} = K^{t} \cdot X^{t+i}_{mask}, \quad i \in \{1, \dots, T-1\}, \quad (1)$$

where $\{M^{i}\}_{i=t}^{t+T-1}$ are the segmentation masks for the same instance in the T-frame clip, and $K^{t}$ is the reused mask kernel. Compared to clip-based methods, kernel reuse performs on-the-fly segmentation and tracking given per-frame input. However, kernel reuse tends to fail in long sequences or frames with drastic changes. To remedy these issues, we follow (Huang, Yu, and Anandkumar 2022) and present a simple yet effective kernel association, which uses the cosine similarity between kernels of consecutive keyframes. Under the one-to-one correspondence, duplicate queries (kernels) tend to be suppressed, which enables simple similarity metrics to associate kernels of consecutive keyframes. Compared to previous methods (Yang, Fan, and Xu 2019; Yang et al. 2021) based on sophisticated metrics and post-processing, the proposed kernel association is much simpler and easier to deploy on mobile devices. MobileInst can be straightforwardly extended to video instance segmentation by incorporating the presented kernel reuse and association. Experimental results indicate that MobileInst with T = 3 achieves competitive performance, as discussed in Tab. 4. For simpler videos or scenes, the reuse interval T can be further extended for more efficient segmentation and tracking.

Temporal Training via Query Passing

How to fully leverage temporal contextual information in videos for better temporal segmentation is a long-standing research problem in VIS. However, adding additional temporal modules introduces extra parameters and inevitably modifies the current architecture of MobileInst.
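The kernel association step can be sketched as follows. Cosine similarity plus a greedy argmax is a simplification: MinVIS-style association (Huang, Yu, and Anandkumar 2022) uses bipartite matching, so the greedy matching here is an assumption for illustration, not necessarily the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def associate_kernels(prev_kernels, curr_kernels):
    """Match mask kernels of two consecutive keyframes by cosine similarity.

    prev_kernels, curr_kernels: (N, C) kernels from the two keyframes.
    Returns, for each current kernel, the index of the best-matching
    previous kernel. Because one-to-one matching during training suppresses
    duplicate queries, a simple similarity metric suffices.
    """
    prev = F.normalize(prev_kernels, dim=-1)
    curr = F.normalize(curr_kernels, dim=-1)
    sim = curr @ prev.t()            # (N, N) cosine-similarity matrix
    return sim.argmax(dim=-1)        # instance identity for each current kernel
```

Replacing the argmax with Hungarian matching on `-sim` gives a strict one-to-one assignment at slightly higher cost.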
To leverage temporal information in videos, we present a new temporal training strategy via query passing to enhance the feature representation for temporal inputs, inspired by (Yang et al. 2021). Specifically, we randomly sample two frames, e.g., frame t and frame t + δ, from a video sequence during training, as shown in Fig. 4. We adopt the object queries $Q^{t}_{G}$ generated by the global instance decoder as passing queries. For frame t + δ, we obtain the mask features $X^{t+\delta}_{mask}$ and local features $X^{t+\delta}_{L}$ by normal forwarding. During temporal training, the passing queries $Q^{t}_{G}$, as $\tilde{Q}^{t+\delta}_{G}$, are input to the local instance decoder with the local features $X^{t+\delta}_{L}$ to obtain the kernels and generate the masks $\tilde{M}^{t+\delta}$. The generated $\tilde{M}^{t+\delta}$ shares the same mask targets with $M^{t+\delta}$ and is supervised by the mask losses mentioned in Sec. .

Loss Function

MobileInst outputs N predictions and uses bipartite matching for label assignment (Carion et al. 2020). As query passing requires no extra modules or losses, we follow previous work (Cheng et al. 2022b) and use the same loss function for training MobileInst, which is defined as follows:

$$\mathcal{L} = \lambda_{c} \cdot \mathcal{L}_{cls} + \lambda_{mask} \cdot \mathcal{L}_{mask} + \lambda_{obj} \cdot \mathcal{L}_{obj}, \quad (2)$$

where $\mathcal{L}_{cls}$ indicates the focal loss for classification, $\mathcal{L}_{mask}$ is the combination of the dice loss and the pixel-wise binary cross-entropy loss for mask prediction, and $\mathcal{L}_{obj}$ indicates the binary cross-entropy loss for IoU-aware objectness. $\lambda_{c}$, $\lambda_{mask}$, and $\lambda_{obj}$ are set to 2.0, 2.0, and 1.0, respectively.

Experiments

In this section, we mainly evaluate MobileInst on the challenging COCO (Lin et al. 2014) and YouTube-VIS (Yang, Fan, and Xu 2019) datasets to demonstrate the effectiveness of MobileInst in terms of speed and accuracy. In addition, we conduct extensive ablation studies to reveal the effects of the components in MobileInst. We refer the reader to the arXiv version for additional experiments and visualizations.

Datasets
COCO. The COCO dataset is a touchstone for instance segmentation methods, with 118k, 5k, and 20k images for training, validation, and testing, respectively. MobileInst is trained on train2017 and evaluated on val2017 or test-dev2017.

Figure 4: Temporal Training via Query Passing. We sample two frames with an interval δ, e.g., frame t and frame t + δ. During temporal training, we adopt the object queries $Q^{t}_{G}$ of frame t from the global instance decoder as the object queries $\tilde{Q}^{t+\delta}_{G}$ and pass them to the local instance decoder with local features $X^{t+\delta}_{L}$ to generate $\tilde{M}^{t+\delta}$.

YouTube-VIS. YouTube-VIS 2019 is a large-scale dataset for VIS with 2,883 videos and 4,883 instances covering 40 categories. YouTube-VIS 2021 expands it to 1.5× the videos and 2× the instances with an improved set of 40 categories. We evaluate our method on the validation sets of both datasets¹.

Implementation Details

Instance Segmentation. We use the AdamW optimizer with an initial learning rate of 1 × 10−4 and set the backbone learning-rate multiplier to 0.5. Following the training schedule and data augmentation of (Cheng et al. 2022b), all models are trained for 270k iterations with a batch size of 64, and the learning rate is decayed by a factor of 10 at 210k and 250k iterations. We apply random flipping and scale jittering to augment the training images: the shorter edge varies from 416 to 640 pixels, while the longer edge remains under 864 pixels.

Video Instance Segmentation. The models are initialized with weights from the instance segmentation model pretrained on COCO train2017. We set the learning rate to 5 × 10−5 and train for 12 epochs with a 10× decay at the 8-th and 11-th epochs. We only employ basic data augmentation, such as resizing the shorter side of the image to 360 pixels, without using any additional data or tricks.

¹All datasets were solely downloaded and evaluated by the University.
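The optimizer recipe above (AdamW, a 0.5× backbone learning-rate multiplier, and a ×10 step decay at 210k/250k of 270k iterations) can be sketched as follows; the `backbone` attribute name used to split parameter groups is an assumption for illustration:

```python
import torch
import torch.nn as nn

def build_optimizer_and_scheduler(model, base_lr=1e-4, backbone_mult=0.5):
    """AdamW with a reduced learning rate on backbone parameters, plus a
    MultiStep schedule that divides the learning rate by 10 at 210k and
    250k iterations (scheduler stepped once per iteration)."""
    backbone_params, other_params = [], []
    for name, p in model.named_parameters():
        (backbone_params if name.startswith("backbone") else other_params).append(p)
    optimizer = torch.optim.AdamW([
        {"params": backbone_params, "lr": base_lr * backbone_mult},
        {"params": other_params, "lr": base_lr},
    ])
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[210_000, 250_000], gamma=0.1)
    return optimizer, scheduler
```

Stepping the scheduler per iteration (rather than per epoch) matches the iteration-based schedule used here.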
| method | backbone | size | latency (ms) | AP | AP50 | AP75 | APS | APM | APL |
|---|---|---|---|---|---|---|---|---|---|
| Mask R-CNN (He et al. 2017) | R-50 | 800 | – | 37.5 | 59.3 | 40.2 | 21.1 | 39.6 | 48.3 |
| CondInst (Tian, Shen, and Chen 2020) | R-50 | 800 | 4451 | 37.8 | 59.1 | 40.5 | 21.0 | 40.3 | 48.7 |
| SOLOv2-Lite (Wang et al. 2020c) | R-50 | 448 | 1234 | 34.0 | 54.0 | 36.1 | 10.3 | 36.3 | 54.4 |
| YOLACT (Bolya et al. 2019) | R-50 | 550 | 1039 | 28.2 | 46.6 | 29.2 | 9.2 | 29.3 | 44.8 |
| SparseInst (Cheng et al. 2022b) | R-50 | 608 | 1752 | 34.7 | 55.3 | 36.6 | 14.3 | 36.2 | 50.7 |
| YOLACT (Bolya et al. 2019) | MobileNetV2 | 550 | 463 | 22.2 | 37.7 | 22.5 | 6.0 | 21.3 | 35.5 |
| SOLOv2 (Wang et al. 2020c) | MobileNetV2 | 640 | 1443 | 30.5 | 49.3 | 32.1 | 4.2 | 49.6 | 67.9 |
| YOLACT (Bolya et al. 2019) | TopFormer | 550 | 497 | 20.8 | 37.6 | 20.2 | 6.0 | 20.1 | 33.5 |
| CondInst (Tian, Shen, and Chen 2020) | TopFormer | 640 | 1418 | 27.0 | 44.8 | 28.0 | 11.4 | 27.7 | 39.0 |
| SparseInst (Cheng et al. 2022b) | TopFormer | 608 | 769 | 30.0 | 49.2 | 30.9 | 11.0 | 29.5 | 46.2 |
| Mask2Former† (Cheng et al. 2022a) | TopFormer | 640 | 930 | 32.0 | 51.9 | 33.4 | 6.9 | 49.3 | 68.7 |
| FastInst (He et al. 2023b) | TopFormer | 640 | 965 | 31.0 | 50.8 | 32.0 | 9.7 | 31.1 | 51.7 |
| MobileInst | MobileNetV2 | 640 | 410 | 30.0 | 49.7 | 30.8 | 10.3 | 30.2 | 46.0 |
| MobileInst | TopFormer | 512 | 346 | 29.9 | 49.4 | 30.6 | 9.0 | 29.2 | 48.5 |
| MobileInst | TopFormer | 640 | 433 | 31.2 | 51.4 | 32.1 | 10.4 | 31.3 | 49.1 |
| MobileInst | SeaFormer | 640 | 438 | 31.6 | 51.8 | 32.6 | 10.0 | 31.5 | 50.8 |

Table 1: Instance Segmentation on COCO test-dev. Comparisons with state-of-the-art methods for mask AP and inference latency on COCO test-dev. The method denoted with † was implemented by us.

| method | backbone | GPU (ms) | Mobile (ms) | AP | AP50 | AP75 | AR1 | AP | AP50 | AP75 | AR1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MT R-CNN (Yang, Fan, and Xu 2019) | R-50 | 30.1 | – | 30.3 | 51.1 | 32.6 | 34 | 28.6 | 48.9 | 29.6 | 26.5 |
| SipMask (Cao et al. 2020) | R-50 | 29.3 | – | 33.7 | 54.1 | 35.8 | 35.4 | 31.7 | 52.5 | 34.0 | 30.8 |
| SGMask (Liu et al. 2021a) | R-50 | 31.9 | – | 34.8 | 56.1 | 36.8 | 35.8 | – | – | – | – |
| STMask (Li et al. 2021) | R-50 | 28.2 | – | 33.5 | 52.1 | 36.9 | 31.1 | 30.6 | 49.4 | 32.0 | 26.4 |
| CrossVIS (Yang et al. 2021) | R-50 | 25.0 | 981 | 34.8 | 54.6 | 37.9 | 34.0 | 33.3 | 53.8 | 37.0 | 30.1 |
| CrossVIS | TopFormer | 24.9 | 614 | 32.7 | 54.3 | 35.4 | 34 | 28.9 | 50.9 | 29.0 | 27.8 |
| SparseInst-VIS† | TopFormer | 25.4 | 389 | 33.3 | 55.1 | 34.1 | 35.3 | 29.0 | 50.5 | 29.2 | 29.3 |
| MobileInst | TopFormer | 22.3 | 188 | 35.0 | 55.2 | 37.3 | 38.5 | 30.1 | 50.6 | 30.7 | 30.1 |

Table 2: Video Instance Segmentation on YouTube-VIS 2019 and YouTube-VIS 2021. The first AP/AP50/AP75/AR1 group reports YouTube-VIS 2019, the second YouTube-VIS 2021. 'GPU' denotes NVIDIA 2080 Ti and 'Mobile' denotes Snapdragon 778G. The method denoted with † was implemented by us.

Inference. The inference of MobileInst is simple: MobileInst directly outputs the instance segmentation results for single-frame images without non-maximum suppression (NMS). The inference speeds of all models are measured using the TNN framework² on the CPU core of a Snapdragon 778G without any other acceleration methods.

²TNN: a uniform deep learning inference framework.

Experiments on Instance Segmentation

First, we evaluate the proposed MobileInst on the COCO test-dev set for mobile instance segmentation. As the first instance segmentation model designed specifically for mobile devices, we benchmark our approach against real-time instance segmentation methods. Tab. 1 shows the comparisons between MobileInst and previous approaches. Among the methods that use a ResNet (He et al. 2016) backbone, Mask R-CNN and CondInst naturally achieve AP above 37. However, the deployment challenges of the two-stage Mask R-CNN and of CondInst make them less desirable for mobile applications. We observe that MobileInst achieves higher accuracy than the popular real-time approach YOLACT based on R-50, with an increase of 3.4 AP and roughly 600 ms faster inference. Notably, MobileInst obtains faster inference speed and higher accuracy compared to those methods (Bolya et al. 2019; Wang et al. 2020b; Tian, Shen, and Chen 2020; Cheng et al. 2022b,a) with lightweight backbones (Sandler et al. 2018; Zhang et al. 2022). Tab. 1 shows a remarkable speed improvement of up to 50% compared to the previous state-of-the-art method SparseInst.
Compared to the well-established Mask2Former, MobileInst achieves a similar AP with a 100% speed improvement. Fig. 1 illustrates the trade-off curve between speed and accuracy, which further demonstrates the favorable performance of MobileInst.

Experiments on Video Instance Segmentation

In Tab. 2, we evaluate MobileInst on YouTube-VIS 2019 and YouTube-VIS 2021 for video instance segmentation. In terms of latency and accuracy, we mainly compare MobileInst with online methods. As shown in Tab. 2, MobileInst obtains better accuracy and speed than (Yang et al. 2021; Cheng et al. 2022b) under the same setting. Note that TopFormer is designed for mobile devices and is less efficient on GPUs; nevertheless, it is still evident that MobileInst has superior inference speed on mobile devices.

Ablation Study

Ablation on Instance Decoder. In Tab. 3, we evaluate the performance and speed of different configurations of the instance decoder. Tab. 3 shows that using a single global instance decoder or a single local instance decoder leads to a performance drop, which demonstrates the effectiveness of the instance decoder with global features for semantic contexts and local features for spatial details.

| global | local | AP | AP50 | AP75 | latency | FLOPs |
|---|---|---|---|---|---|---|
| ✓ | – | 28.7 | 46.9 | 29.5 | 413 ms | 24.17 G |
| – | ✓ | 29.3 | 48.9 | 30.2 | 412 ms | 24.15 G |
| ×2 | – | 28.5 | 46.3 | 29.5 | 427 ms | 24.24 G |
| – | ×2 | 29.8 | 49.3 | 30.4 | 426 ms | 24.22 G |
| ✓ | ✓ | 29.8 | 49.4 | 30.4 | 427 ms | 24.24 G |

Table 3: Ablation on the Instance Decoder (COCO val2017). Both the global decoder and the local decoder contribute to the improvement. ×2 indicates stacking two decoders. Despite the similar performance, global-local is better than local-local for VIS tasks (refer to Tab. 4).

| decoder | w/o tem., T=1 | T=3 | T=6 | w/ tem., T=1 | T=3 | T=6 |
|---|---|---|---|---|---|---|
| global-local | 28.8 | 29.3 | 28.0 | 30.1 | 30.5 | 29.2 |
| local-local | 28.1 | 28.5 | 27.8 | 28.9 | 29.5 | 28.6 |
| latency (ms) | 184 | 174 | 171 | 184 | 174 | 171 |

Table 4: Ablation on the Query Reuse & Temporal Training (YouTube-VIS 2021). 'T' refers to the length of the clip within which we reuse the mask kernels of the keyframe. Single-frame clips (T = 1) only associate kernels without reuse.

Stacking two local instance decoders obtains similar performance to the proposed instance decoder, i.e., 29.8 mask AP. However, Tab. 4 indicates that the proposed instance decoder, which aggregates global contexts, is superior to stacking two local decoders for segmenting and tracking in videos. In Tab. 5, we focus on the local instance decoder and compare different methods of extracting local features from the mask features: no pooling, max pooling with a kernel size of 4 or 8, and average pooling with a kernel size of 8. Although no pooling provides a gain of 0.9 AP, it also incurs a 50% increase in latency, making it not cost-effective. Additionally, it is worth noting that using max pooling leads to a 0.4 AP gain compared to using average pooling. We believe max pooling naturally provides more desirable local information by filtering out unimportant information, forming a better complement to the global features used in the global instance decoder.

Kernel Reuse & Temporal Training. We conduct a comparative study of two decoder designs (refer to Tab. 3), i.e., (1) global-local: the combination of a global and a local instance decoder, and (2) local-local: two local instance decoders, as shown in Tab. 4. For kernel reuse, T refers to the length of the clip within which we reuse the mask kernels of the keyframe. Regardless of the model architecture, the reuse mechanism in short-term sequences improves inference speed without performance loss. Compared to training with only frame-level information, the proposed temporal training brings 1.3 and 0.8 AP improvements for the two designs, respectively. In terms of the global-local and local-local decoders, Tab. 4 shows that global-local achieves better performance on video instance segmentation.

| size | pool | AP | AP50 | AP75 | latency | FLOPs |
|---|---|---|---|---|---|---|
| ori. | – | 30.7 | 51.1 | 31.3 | 613 ms | 25.85 G |
| 4×4 | max | 30.0 | 50.2 | 30.8 | 434 ms | 24.32 G |
| 8×8 | max | 29.8 | 49.4 | 30.4 | 427 ms | 24.24 G |
| 8×8 | avg | 29.4 | 48.4 | 30.3 | 427 ms | 24.24 G |

Table 5: Ablation on the Local Instance Decoder (COCO val2017). The pooling is used to extract local features from the mask features for the local instance decoder. Decreasing the pooling size can further improve accuracy but lowers the speed. Notably, max pooling brings a 0.4 AP gain.

| mask decoder | AP | AP50 | AP75 | latency | FLOPs |
|---|---|---|---|---|---|
| SparseInst, 4×conv | 30.4 | 49.6 | 31.2 | 524 ms | 34.69 G |
| SparseInst, 2×conv | 29.7 | 49.2 | 30.4 | 445 ms | 24.11 G |
| SparseInst, 1×conv | 29.1 | 48.8 | 29.7 | 405 ms | 18.82 G |
| FPN, 1×conv | 28.7 | 48.1 | 29.2 | 400 ms | 18.48 G |
| ours | 29.8 | 49.4 | 30.4 | 427 ms | 24.24 G |
| ours w/ SE | 30.1 | 49.9 | 30.9 | 433 ms | 24.37 G |

Table 6: Ablation on the Semantic-enhanced Mask Decoder (COCO val2017). 'SparseInst' denotes the FPN-PPM used in (Cheng et al. 2022b).

Compared to the local-local decoder, the queries (kernels) from the global-local decoder aggregate more global contextual features and benefit more from the temporal smoothness in videos, as discussed in Sec. , which makes them more suitable for videos. Tab. 4 thus demonstrates the effectiveness of the proposed dual transformer instance decoder for video instance segmentation.

Ablation on the Mask Decoder. Mask features play a crucial role in segmentation quality. Here, we investigate different designs of mask decoders in Tab. 6. Compared to FPN with 1× conv, our method achieves a 1.1 AP improvement by iteratively utilizing multi-scale information, with a latency overhead of only 6 ms. Although stacking convolutions still improves performance, as seen from the results of SparseInst with 4 stacked 3 × 3 convs, it leads to a significant burden for mobile devices. The proposed semantic enhancer (SE) brings a 0.3 AP gain and bridges the gap at lower cost.
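The exact design of the semantic enhancer is not detailed in this excerpt; the sketch below assumes a simple 1×1 projection of the global features X6, bilinear upsampling, and sigmoid-gated residual fusion into a mask-decoder feature map:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticEnhancer(nn.Module):
    """Illustrative sketch: inject global semantics from X6 into a
    higher-resolution feature map of the mask decoder. The fusion shown
    here (project + upsample + sigmoid gating) is an assumption, not
    necessarily MobileInst's exact design."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, feat, x_global):
        # feat: (B, C, H, W) mask-decoder feature; x_global: (B, C, h, w) = X6
        g = self.proj(x_global)
        g = F.interpolate(g, size=feat.shape[-2:], mode="bilinear", align_corners=False)
        return feat * torch.sigmoid(g) + feat   # gated residual enhancement
```

A gated residual keeps the enhancer cheap (a single 1×1 convolution), consistent with the small FLOPs overhead reported for 'ours w/ SE' in Tab. 6.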
Conclusion

In this paper, we propose MobileInst, an elaborately designed video instance segmentation framework for mobile devices. To reduce computation overhead, we propose an efficient query-based dual transformer instance decoder and a semantic-enhanced mask decoder, with which MobileInst achieves competitive performance while maintaining satisfactory inference speed. We also propose an efficient method to extend MobileInst to video instance segmentation tasks without introducing extra parameters. Experimental results on both the COCO and YouTube-VIS datasets demonstrate the superiority of MobileInst in terms of both accuracy and inference speed. We hope our work can facilitate further research on instance-level visual recognition on resource-constrained devices.

Acknowledgments

This work was partially supported by the National Key Research and Development Program of China under Grant 2022YFB4500602, the National Natural Science Foundation of China (No. 62276108), and the University Research Collaboration Project (HUA-474829) from Qualcomm.

References

Athar, A.; Mahadevan, S.; Osep, A.; Leal-Taixé, L.; and Leibe, B. 2020. STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos. In ECCV.
Bertasius, G.; and Torresani, L. 2020. Classifying, Segmenting, and Tracking Object Instances in Video with Mask Propagation. In CVPR.
Bolya, D.; Zhou, C.; Xiao, F.; and Lee, Y. J. 2019. YOLACT: Real-Time Instance Segmentation. In ICCV.
Cao, J.; Anwer, R. M.; Cholakkal, H.; Khan, F. S.; Pang, Y.; and Shao, L. 2020. Sipmask: Spatial information preservation for fast image and video instance segmentation. In ECCV.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-End Object Detection with Transformers. In ECCV.
Chen, Y.; Dai, X.; Chen, D.; Liu, M.; Dong, X.; Yuan, L.; and Liu, Z. 2021.
Mobile-Former: Bridging MobileNet and Transformer. In CVPR.
Cheng, B.; Misra, I.; Schwing, A. G.; Kirillov, A.; and Girdhar, R. 2022a. Masked-attention Mask Transformer for Universal Image Segmentation. In CVPR.
Cheng, B.; Schwing, A. G.; and Kirillov, A. 2021. Per-Pixel Classification is Not All You Need for Semantic Segmentation. arXiv preprint arXiv:2107.06278.
Cheng, T.; Wang, X.; Chen, S.; Zhang, W.; Zhang, Q.; Huang, C.; Zhang, Z.; and Liu, W. 2022b. Sparse Instance Activation for Real-Time Instance Segmentation. In CVPR.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR.
Fang, J.; Xie, L.; Wang, X.; Zhang, X.; Liu, W.; and Tian, Q. 2022. MSG-Transformer: Exchanging Local Spatial Information by Manipulating Messenger Tokens. In CVPR.
Fang, Y.; Liao, B.; Wang, X.; Fang, J.; Qi, J.; Wu, R.; Niu, J.; and Liu, W. 2021a. You only look at one sequence: Rethinking transformer in vision through object detection. In NeurIPS, 34: 26183–26197.
Fang, Y.; Yang, S.; Wang, X.; Li, Y.; Fang, C.; Shan, Y.; Feng, B.; and Liu, W. 2021b. Instances as Queries. In ICCV.
Fu, Y.; Yang, L.; Liu, D.; Huang, T. S.; and Shi, H. 2020. Compfeat: Comprehensive feature aggregation for video instance segmentation. In AAAI.
Han, S. H.; Hwang, S.; Oh, S. W.; Park, Y.; Kim, H.; Kim, M.-J.; and Kim, S. J. 2022. Visolo: Grid-based space-time aggregation for efficient online video instance segmentation. In CVPR.
He, F.; Zhang, H.; Gao, N.; Jia, J.; Shan, Y.; Zhao, X.; and Huang, K. 2023a. InsPro: Propagating Instance Query and Proposal for Online Video Instance Segmentation. In NeurIPS.
He, J.; Li, P.; Geng, Y.; and Xie, X. 2023b. FastInst: A Simple Query-Based Model for Real-Time Instance Segmentation. In CVPR.
He, K.; Gkioxari, G.; Dollár, P.; and Girshick, R. B. 2017. Mask R-CNN. In ICCV.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In CVPR.
Heo, M.; Hwang, S.; Oh, S. W.; Lee, J.-Y.; and Kim, S. J. 2022. Vita: Video instance segmentation via object token association. In NeurIPS.
Huang, D.-A.; Yu, Z.; and Anandkumar, A. 2022. Minvis: A minimal video instance segmentation framework without video-based training. In NeurIPS.
Hwang, S.; Heo, M.; Oh, S. W.; and Kim, S. J. 2021. Video instance segmentation using inter-frame communication transformers. In NeurIPS.
Koner, R.; Hannan, T.; Shit, S.; Sharifzadeh, S.; Schubert, M.; Seidl, T.; and Tresp, V. 2023. InstanceFormer: An Online Video Instance Segmentation Framework. In AAAI.
Li, M.; Li, S.; Li, L.; and Zhang, L. 2021. Spatial feature calibration and temporal fusion for effective one-stage video instance segmentation. In CVPR.
Lin, H.; Wu, R.; Liu, S.; Lu, J.; and Jia, J. 2021. Video instance segmentation with a propose-reduce paradigm. In ICCV.
Lin, T.; Dollár, P.; Girshick, R. B.; He, K.; Hariharan, B.; and Belongie, S. J. 2017a. Feature Pyramid Networks for Object Detection. In CVPR.
Lin, T.; Goyal, P.; Girshick, R. B.; He, K.; and Dollár, P. 2017b. Focal Loss for Dense Object Detection. In ICCV.
Lin, T.; Maire, M.; Belongie, S. J.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. In ECCV.
Liu, D.; Cui, Y.; Tan, W.; and Chen, Y. 2021a. Sg-net: Spatial granularity network for one-stage video instance segmentation. In CVPR.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021b. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In ICCV.
Mehta, S.; and Rastegari, M. 2022. MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer. In ICLR.
Ren, S.; He, K.; Girshick, R. B.; and Sun, J. 2017.
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell.
Sandler, M.; Howard, A. G.; Zhu, M.; Zhmoginov, A.; and Chen, L.-C. 2018. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In CVPR.
Tian, Z.; Shen, C.; and Chen, H. 2020. Conditional Convolutions for Instance Segmentation. In ECCV.
Tian, Z.; Shen, C.; Chen, H.; and He, T. 2022. FCOS: A Simple and Strong Anchor-Free Object Detector. IEEE Trans. Pattern Anal. Mach. Intell.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In NeurIPS.
Wan, Q.; Huang, Z.; Lu, J.; Yu, G.; and Zhang, L. 2023. SeaFormer: Squeeze-enhanced Axial Transformer for Mobile Semantic Segmentation. In ICLR.
Wang, W.; Xie, E.; Li, X.; Fan, D.; Song, K.; Liang, D.; Lu, T.; Luo, P.; and Shao, L. 2021a. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions. In ICCV.
Wang, X.; Kong, T.; Shen, C.; Jiang, Y.; and Li, L. 2020a. SOLO: Segmenting Objects by Locations. In ECCV.
Wang, X.; Zhang, R.; Kong, T.; Li, L.; and Shen, C. 2020b. SOLOv2: Dynamic and Fast Instance Segmentation. In NeurIPS.
Wang, X.; Zhang, R.; Kong, T.; Li, L.; and Shen, C. 2020c. SOLOv2: Dynamic and fast instance segmentation. In NeurIPS.
Wang, Y.; Xu, Z.; Wang, X.; Shen, C.; Cheng, B.; Shen, H.; and Xia, H. 2021b. End-to-end video instance segmentation with transformers. In CVPR.
Wu, J.; Jiang, Y.; Bai, S.; Zhang, W.; and Bai, X. 2022a. Seqformer: Sequential transformer for video instance segmentation. In ECCV.
Wu, J.; Liu, Q.; Jiang, Y.; Bai, S.; Yuille, A.; and Bai, X. 2022b. In defense of online models for video instance segmentation. In ECCV.
Xie, E.; Sun, P.; Song, X.; Wang, W.; Liu, X.; Liang, D.; Shen, C.; and Luo, P. 2020. PolarMask: Single Shot Instance Segmentation With Polar Representation. In CVPR.
Yang, L.; Fan, Y.; and Xu, N. 2019. Video Instance Segmentation. In ICCV.
Yang, S.; Fang, Y.; Wang, X.; Li, Y.; Fang, C.; Shan, Y.; Feng, B.; and Liu, W. 2021. Crossover learning for fast online video instance segmentation. In ICCV.
Yang, S.; Wang, X.; Li, Y.; Fang, Y.; Fang, J.; Liu, W.; Zhao, X.; and Shan, Y. 2022. Temporally efficient vision transformer for video instance segmentation. In CVPR.
Zhang, R.; Tian, Z.; Shen, C.; You, M.; and Yan, Y. 2020. Mask Encoding for Single Shot Instance Segmentation. In CVPR.
Zhang, W.; Huang, Z.; Luo, G.; Chen, T.; Wang, X.; Liu, W.; Yu, G.; and Shen, C. 2022. TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation. In CVPR.
Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2021. Deformable DETR: Deformable Transformers for End-to-End Object Detection. In ICLR.
Scalable Geometric Fracture Assembly via Co-creation Space among Assemblers

Ruiyuan Zhang1*, Jiaxiang Liu1*, Zexi Li1, Hao Dong3, Jie Fu2*†, Chao Wu1†
1 Zhejiang University  2 Hong Kong University of Science and Technology  3 Peking University
{zhangruiyuan,zjljx,zexi.li,chao.wu}@zju.edu.cn, jiefu@ust.hk, hao.dong@pku.edu.cn

Abstract

Geometric fracture assembly presents a challenging practical task in archaeology and 3D computer vision. Previous methods have focused solely on assembling fragments based on semantic information, which has limited the quantity of objects that can be effectively assembled. Therefore, there is a need to develop a scalable framework for geometric fracture assembly that does not rely on semantic information. To improve the effectiveness of assembling geometric fractures without semantic information, we propose a co-creation space comprising several assemblers capable of gradually and unambiguously assembling fractures. Additionally, we introduce a novel loss function, i.e., the geometric-based collision loss, to address collision issues during the fracture assembly process and enhance the results. Our framework exhibits better performance on both the PartNet and Breaking Bad datasets compared to existing state-of-the-art frameworks. Extensive experiments and quantitative comparisons demonstrate the effectiveness of our proposed framework, which features linear computational complexity, enhanced abstraction, and improved generalization. Our code is publicly available at https://github.com/Ruiyuan-Zhang/CCS.

Introduction

Fracture assembly aims to reconstruct a broken object by composing its fractures. Manually assembling these fragments is time-consuming and requires precision. The task is complicated by the vast number of potential combinations and the lack of clear instructions. Therefore, geometric fracture assembly is a practical but challenging task.
Previous studies have focused on tasks like archaeological fragment matching and 3D furniture assembly, using methods such as generative 3D part assembly with graph neural networks to understand part relationships (Funkhouser et al. 2011; Toler-Franklin et al. 2010; Zhang et al. 2022; Zhan et al. 2020; Narayan, Nagar, and Raman 2022; Lee, Hu, and Lim 2021; Li et al. 2020). However, these approaches often face limitations in handling shapes with numerous fragments, also known as the scalability or multi-part issue. Furthermore, they rely heavily on semantic information such as

*These authors contributed equally.
†Corresponding authors.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Comparison of Transformer-based assembly (left) and our method (right). Left: Assemblers may become confused when presented with multiple fractures to assemble simultaneously, leading to an inability to assemble any parts. Right: In our method, assemblers focus on essential fractures of parts, enabling efficient and effective task completion.

part segmentation and ground-truth annotations (Zhang et al. 2022; Zhan et al. 2020; Narayan, Nagar, and Raman 2022; Lee, Hu, and Lim 2021; Li et al. 2020). To address part relationship constraints, (Zhang et al. 2022) introduced a transformer-based framework, IET, which assigns positions to parts and uses self-attention for positional interactions. However, this method's computational complexity increases significantly with the number of parts due to the quadratic scaling of attention mechanisms (Goyal et al. 2021). In this paper, we introduce the Co-creation Space, a novel approach that enables assemblers to predict the 6-DoF poses of geometric parts or fractures. This method, illustrated in Figure 2, involves assemblers competing for write access to update a shared workspace, thereby predicting part poses (Baars 1993; Dehaene, Lau, and Kouider 2021).
The task of geometric fracture assembly is divided into specialized tasks, such as identifying the core fracture and locating relevant ones. Assemblers collaborate in a shared workspace, which replaces the pairwise interactions found in traditional dot-product attention methods. This approach ensures global coherence and reduces computational complexity to a linear scale relative to the number of assemblers (Baars 1993). As depicted in Figure 1, the Co-creation Space also acts as an information bottleneck, denoted as Rt in Eq 2, limiting the capacity of information channels between specialists. This ensures that only essential information is written into the workspace. Fig.1(a): when the assembler considers multiple parts/fractures simultaneously, it discovers that multiple The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7269 components can match each other (e.g., a chair seat can connect with both the chair legs and the chair back; a fragment can link with other fragments at multiple angles). As the volume of information increases, it would cause confusion issue as other baselines (Fig.1(b)). Just as drivers pay more attention to important traffic signals rather than the onboard music while driving, the assembler requires a focus. Moreover, the presence of identical or similar information in fractures during assembly can cause ambiguity and hinder escaping local optima. We address this by introducing a geometric-based collision loss, depicted in Figure 3. This loss function actively separates identical or similar fractures that share the same location, guiding them towards more appropriate positions. We carried out comprehensive experiments on two major geometric fracture assembly datasets: PartNet (Mo et al. 2019) and Breaking Bad (Sell´an et al. 2022). Through numerous comparative experiments and ablation analyses, we compared our method with state-of-theart works (Zhang et al. 2022; Zhan et al. 
2020; Narayan, Nagar, and Raman 2022) and verified the effectiveness of our proposed framework. Our method emphasizes scalability in assembly processes and can be integrated as a plug-and-play module to enhance geometric information extraction research.

Related Works

Geometric Shape Assembly

Geometric shape assembly involves combining multiple shapes to create a target object (Zhang et al. 2022; Zhan et al. 2020; Narayan, Nagar, and Raman 2022; Grason 2016; Chen et al. 2022; Funkhouser et al. 2011; Jones et al. 2021; Lee, Hu, and Lim 2021; Li et al. 2020; Liu et al. 2023a), and it is an important problem in science and engineering (Funkhouser et al. 2011; Zhang et al. 2022; Zhan et al. 2020; Narayan, Nagar, and Raman 2022). Previous research has focused on specific cases that simplify the problem, such as using identical fragments (Funkhouser et al. 2011; Lee et al. 2022) or textured fragments (Lee et al.). However, in practical applications, the shapes and number of fractures can be arbitrary, requiring more general methods. Building on PartNet (Mo et al. 2019), studies like (Zhan et al. 2020; Narayan, Nagar, and Raman 2022) have proposed graph-based learning methods for predicting 6-DoF poses of each part and assembling a shape. Similarly, (Zhang et al. 2022) use a transformer encoder to understand part relationships. However, these methods often rely heavily on semantic information about object parts, such as instance encoding (Zhang et al. 2022), instance labels (Zhan et al. 2020), and order information (Narayan, Nagar, and Raman 2022), and become less effective with more parts. Recently, (Sellán et al. 2022) introduced the Breaking Bad dataset, highlighting the challenge of assembling non-semantic fractures into complete shapes. This shows that fractured shape assembly remains an open problem. In response, we propose a new framework to tackle the scalability challenge in geometric shape assembly, demonstrating good performance on non-semantic datasets.
Shared Global Workspace

In cognitive neuroscience, the Global Workspace Theory (Baars 1993; Dehaene, Lau, and Kouider 2021) suggests an architecture that allows specialist modules to interact through a shared representation called the workspace, a bandwidth-limited communication channel. The advantage of this approach is that it enables global coordination and coherence among different components, beyond merely local or pairwise interactions. The workspace can be modified by any specialist, and its contents are broadcast to all specialists. (Goyal et al. 2021) explore the use of such a communication channel in the context of deep learning, leading to greater abstraction and better generalization, which is effective in large geometric fracture assembly tasks without semantic information. Based on this theory, (Liu et al. 2022) propose a centralized-training, decentralized-execution learning approach called Stateful Active Facilitator that enables agents to work efficiently in high-coordination and high-heterogeneity environments. In this paper, co-creation is proposed so that assemblers competing for write access can update the workspace. From another perspective, the process in the shared global workspace is similar to an information bottleneck (IB) that distills and extracts crucial information for our conscious awareness. In this paper, the IB corresponds to $R_t$, used to address scalability issues.

Method

Let $P = \{p_i\}_{i=1}^{N}$ represent a set of geometric fracture point clouds, where $p_i \in \mathbb{R}^{n_{pc} \times 3}$ and $N$ denotes the number of parts, which may vary for different 3D shapes. Our goal is to predict a set of 6-DoF fracture poses $Z = \{(R_i, T_i)\}_{i=1}^{N}$ in SE(3) space, where $R_i \in \mathbb{R}^{4}$ and $T_i \in \mathbb{R}^{3}$ denote the rigid rotation and translation of each fracture, respectively. Then, we apply the predicted pose to transform the point cloud of each part and obtain the $i$-th fracture's predicted point cloud $P'_i = Z_i(p_i)$, in which $Z_i$ is the joint transformation of $(R_i, T_i)$.
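To make the pose application concrete, the following is a minimal NumPy sketch of applying a predicted pose $Z_i = (R_i, T_i)$ to a fracture point cloud. It assumes $R_i \in \mathbb{R}^4$ is a unit quaternion in $(w, x, y, z)$ order; the function names are illustrative, not part of the released code.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def apply_pose(p, R_quat, T):
    """P'_i = Z_i(p_i): rotate the fracture point cloud, then translate."""
    return p @ quat_to_rotmat(R_quat).T + T

def assemble(parts, poses):
    """Concatenate the transformed fracture point clouds into one shape."""
    return np.concatenate([apply_pose(p, R, T) for p, (R, T) in zip(parts, poses)])
```

The predicted quaternion is normalized before use, so the transformation is always a valid rigid rotation plus translation.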
The complete shape can then be assembled as $S = \bigcup_{i=1}^{N} P'_i$, our predicted assembly result. In this work, we present a scalable geometric fracture assembly framework that uses a co-creation space among assemblers to assemble 3D shapes, as illustrated in Figure 2.

Co-creation Space among Assemblers

In this section, we introduce the co-creation space among assemblers, which serves as the core module of this framework. In this module, at each computation stage indexed by $t$, $n_a$ assemblers compete for write access to the co-creation space, with $n_a = N$. The contents of the co-creation space, in turn, are broadcast to all assemblers simultaneously.

Step 1: Generating the messages of assemblers. Each assembler $i$ produces a message $m_{i,t}$ containing geometric fracture information at each time step $t$. The initialization information is created through the routing function, which carries information about the current fracture and its relationship with other fractures. This first step is external to the co-creation space; for details, please refer to . We denote the set of messages generated by the assemblers at time step $t$ by $M_t$:

$$M_t = \{m_{i,t} \mid 1 \le i \le N\}. \qquad (1)$$

Figure 2: The overall architecture of our framework. (a) Geometric fracture feature extraction using a shared-parameter PointNet (Qi et al. 2017); (b) geometric fracture relational reasoning and information routing; (c) assembly among assemblers using the co-creation space; (d) MLP predictor for pose estimation.

Step 2: Writing into the co-creation space. The messages $M_t$ generated in Step 1 are distilled into a latent state that we term the Co-creation Space. The assemblers compete to write into the co-creation space, whose contents need to be updated in the context of new information received from different assemblers.
This step ensures that only the critically important signals make it into the co-creation space, preventing it from becoming cluttered. We represent the co-creation space state at time step $t$ by $R_t$. $R_t$ consists of $L$ slots $\{l_0, l_1, \ldots, l_{L-1}\}$, each of dimension $d_l$, so that $R_t \in \mathbb{R}^{L \times d_l}$. The messages in $M_t$ compete to write into each memory slot of the co-creation space via a key-query-value attention mechanism. In this case, the query is a linear projection of the current co-creation space memory content $R_t$, i.e., $\tilde{Q} = R_t \tilde{W}^q$, whereas the keys and values are linear projections of the messages $M_t$. The co-creation space state is updated as:

$$R_t = \mathrm{softmax}\!\left(\frac{\tilde{Q} (M_t \tilde{W}^e)^{T}}{\sqrt{d_e}}\right) M_t \tilde{W}^v. \qquad (2)$$

In this work, we use a top-$k$ softmax (Ke et al. 2018) to select a fixed number of assemblers allowed to write into the co-creation space. Similar to the transformer (Vaswani et al. 2017), we apply multiple heads to improve expressive ability.

Step 3: Reading from the co-creation space. Each assembler then updates its state using the information broadcast from the co-creation space. We again utilize an attention mechanism to perform this operation. All assemblers create queries $Q_t = \{q_{i,t} \mid 1 \le i \le N\} \in \mathbb{R}^{N \times d_e}$, where $q_{i,t} = W^q_{\mathrm{read}} a_{i,t}$ and $a_{i,t}$ is the encoded partial observation of one assembler. The generated queries are matched with the keys $K = R_t W^e \in \mathbb{R}^{L \times d_e}$ from the updated memory slots of the co-creation space. As a result, the attention mechanism can be written as:

$$M'_t = \mathrm{softmax}\!\left(\frac{Q_t K^{T}}{\sqrt{d_e}}\right) R_t W^v, \qquad (3)$$

where $M'_t = \{m'_{i,t} \mid 1 \le i \le N\}$. After receiving the broadcast information from the co-creation space, each assembler updates its state with a feedforward layer. This yields the new message set $M_{t+1}$, from which we start the next stage $(t+1)$. The co-creation space is a shared workspace with limited capacity, which encourages specialization and compositionality among assemblers. IET (Zhang et al. 2022) relies on pairwise interactions captured via an attention mechanism.
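As a concrete illustration, the write (Eq. 2) and read (Eq. 3) steps can be sketched in NumPy as below. The slot count, dimensions, random initialization, and the single-head, no-feedforward simplification are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topk_softmax(scores, k):
    """Top-k softmax: only the k largest scores per row keep attention mass."""
    masked = np.full_like(scores, -np.inf)
    idx = np.argsort(scores, axis=-1)[:, -k:]
    np.put_along_axis(masked, idx, np.take_along_axis(scores, idx, axis=-1), axis=-1)
    return softmax(masked, axis=-1)

class CoCreationSpace:
    def __init__(self, num_slots, d_slot, d_msg, d_e, k, seed=0):
        g = np.random.default_rng(seed)
        self.R = g.normal(size=(num_slots, d_slot))        # workspace memory R_t
        self.k = k
        self.d_e = d_e
        self.Wq = 0.1 * g.normal(size=(d_slot, d_e))       # query proj. of R_t
        self.We = 0.1 * g.normal(size=(d_msg, d_e))        # key proj. of messages
        self.Wv = 0.1 * g.normal(size=(d_msg, d_slot))     # value proj. of messages
        self.We_read = 0.1 * g.normal(size=(d_slot, d_e))  # key proj. for reading
        self.Wv_read = 0.1 * g.normal(size=(d_slot, d_msg))

    def write(self, M):
        """Eq. (2): slots attend over the N messages; top-k limits the writers."""
        Q = self.R @ self.Wq                               # (L, d_e)
        scores = Q @ (M @ self.We).T / np.sqrt(self.d_e)   # (L, N)
        self.R = topk_softmax(scores, self.k) @ (M @ self.Wv)

    def read(self, Q_assemblers):
        """Eq. (3): each assembler attends over the L workspace slots."""
        K = self.R @ self.We_read                          # (L, d_e)
        att = softmax(Q_assemblers @ K.T / np.sqrt(self.d_e), axis=-1)
        return att @ (self.R @ self.Wv_read)               # broadcast M'_t, (N, d_msg)
```

With a fixed number of slots L, both steps cost O(L · N) per stage, so the interaction stays linear in the number of assemblers N, unlike N × N pairwise attention.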
Unfortunately, such pairwise attention mechanisms scale quadratically with the number of assemblers. Here, the computational complexity of the proposed method is linear in the number of assemblers.

Geometric Fracture Information Routing

Inspired by modular deep learning (Pfeiffer et al. 2023), before the part assembly task we employ an independent module to perform geometric information extraction and fracture relation reasoning, and to route the aggregated information to the next module. A transformer-based architecture is recommended for learning the relationships between fractures. The transformer encoder applies multiple self-attention layers (Zhu et al. 2023a,b) that aggregate information (e.g., geometry and posture) from the entire input sequence (here, a set of geometric fractures). The positional encoding is omitted, as the input already contains the 3-dimensional XYZ coordinates.

Figure 3: The illustration of the geometric-based collision loss (the red circles and the blue triangles represent two similar fractures). Panels: (a, b) point clouds of two similar parts; (c) ground truth, assembly by IET, and assembly by ours without the collision loss, with details of the similar parts.

We do not follow the standard formulation of the transformer. Instead, to keep the model architecture concise, this framework uses the same transformer structure as the co-creation space. Structurally, the co-creation space can support both information routing and relational reasoning, but the two modules serve different functions in our framework: the module discussed here mainly performs information extraction and routing, while the co-creation space primarily completes the logical task of fracture assembly. Learning the routing is a challenging research problem in modular deep learning, with issues including training instability (Pfeiffer et al.
2023), module collapse (Kirsch, Kunze, and Barber 2018), and overfitting (Pfeiffer et al. 2023). These are not the focus of this paper; we will discuss learned routing for assembly in future work. We provide several options for the geometric fracture information routing network structure, including ResNet (He et al. 2016), Transformer (Vaswani et al. 2017), and GNN (Scarselli et al. 2008).

Geometric-based Collision Loss

During assembly, fractures with similar or identical semantic information will be placed at the same location. As shown in Figure 3, other chair legs or similar parts are assembled in the same position. To solve this problem, previous methods manually input the semantic information of the parts, such as instance encoding (Zhang et al. 2022), instance labels (Zhan et al. 2020), and order information (Narayan, Nagar, and Raman 2022). However, these methods are not suitable for assembly tasks that do not involve semantic information. Additionally, adding too much bias into the model limits the generalization ability of these methods to new shape distributions. We hypothesize that the problem arises because fractures may assemble at a local optimum, which is extremely difficult to escape: when one of the fractures is gently moved, the model still pulls the moved part back to its local optimum position during optimization. To verify our hypothesis, we propose a loss function called the collision loss, which pushes away one of the fractures when two are placed at the same location. The displaced fracture affects other losses, such as the shape Chamfer distance loss, which gradually optimize it to a reasonable position. Experimental results show that our method effectively resolves the ambiguity between fractures $c_i$, where $1 \le i \le N$.
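A minimal sketch of such a pairwise repulsion term on fracture centers $c_i$ might look as follows. The use of part centroids for $c_i$, the value of the constant $C$, and the epsilon guard against $\log(0)$ are illustrative assumptions, and the formula is applied here without clamping.

```python
import numpy as np

def collision_loss(centers, C=30.0):
    """Average over all pairs of (1 - log(C * ||c_i - c_j||_2)).

    Pairs that sit close together (small distance) incur a large penalty;
    because the per-pair gradient magnitude is 1/d, well-separated pairs
    are barely affected.
    """
    N = len(centers)
    total = 0.0
    for i in range(N):
        for j in range(i):
            d = np.linalg.norm(centers[i] - centers[j]) + 1e-8  # avoid log(0)
            total += 1.0 - np.log(C * d)
    return 2.0 * total / (N * (N - 1))
```

The 1/d gradient flattens quickly with distance, matching the observation that the loss promptly separates overlapping fractures while contributing little for parts that are already apart.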
We define the collision loss as:

$$L_c = \frac{2 \sum_{i=1}^{N} \sum_{j<i} \left(1 - \log(C \, \|c_i - c_j\|_2)\right)}{N (N-1)}, \qquad (4)$$

where $C$ is the hyperparameter of the collision loss, which can be adjusted through grid search (Liu et al. 2023b) and correlates with the scale of the point cloud distribution. $\|c_i - c_j\|_2$ represents the distance $d \ge 0$ between two parts or fractures. We consider the pairwise loss $l_{i,j} = 1 - \log(C \times d)$, whose derivative is $l'_{i,j} = -\frac{1}{d}$. As $d$ increases, $l'_{i,j}$ becomes larger (its absolute value decreases, since the derivative is negative), so the slope of $l_{i,j}$ becomes flatter: the rate of decrease of the function slows down as $d$ increases. This characteristic means that $L_c$ can promptly separate two overlapping fractures without affecting the total loss of two non-overlapping parts, which effectively resolves the ambiguity issue. We add an ablation study for $C$ on the Artifact dataset in Table 2.

Training Details

In the task of geometric fracture assembly, there are multiple possible solutions due to the interchangeable positions of the fractures and the ability to place decorative parts in different locations. To establish an uncertainty model and explore structural diversity, the Min-of-N (MoN) loss (Zhang et al. 2022; Zhan et al. 2020; Narayan, Nagar, and Raman 2022) and random noise $z_j \sim \mathcal{N}(0, 1)$ are adopted. Here, the overall framework is denoted $f$, while the ground-truth pose is denoted $f^*$. The MoN loss is used to calculate the error, defined as follows:

$$L_{MoN} = \min_{1 \le j \le n} L(f(P, z_j), f^*(P)). \qquad (5)$$

Given a set of fracture point clouds $P$, $f$ makes $n$ predictions by perturbing the input with $n$ random vectors $z_j$. Intuitively, this ensures that at least one prediction is close to the ground truth. Following (Zhang et al. 2022; Zhan et al.
2020; Narayan, Nagar, and Raman 2022), we set $n = 5$ in the experiments. The loss function $L$ is split into four terms, similar to (Zhan et al. 2020), with the collision loss proposed by this work, for global and part-wise structural integrity. The translation is supervised by a Euclidean loss $L_t$, which measures the distance between the predicted translation $T_i$ and the ground-truth translation $T^*_i$ for each fracture:

$$L_t = \sum_{i=1}^{N} \|T_i - T^*_i\|_2^2. \qquad (6)$$

The rotation is supervised via the Chamfer distance on the rotated fracture point cloud:

$$L_r = \sum_{i=1}^{N} \left( \sum_{x \in R_i(p_i)} \min_{y \in R^*_i(p_i)} \|x - y\|_2^2 + \sum_{x \in R^*_i(p_i)} \min_{y \in R_i(p_i)} \|x - y\|_2^2 \right), \qquad (7)$$

in which $R_i(p_i)$ and $R^*_i(p_i)$ represent the fracture points $p_i$ rotated by the estimated rotation $R_i$ and the ground-truth rotation $R^*_i$, respectively. To ensure comprehensive assembly quality, we employ the Chamfer distance (CD) to supervise the entire assembled shape $S$:

$$L_s = \sum_{x \in S} \min_{y \in S^*} \|x - y\|_2^2 + \sum_{y \in S^*} \min_{x \in S} \|x - y\|_2^2, \qquad (8)$$

where $S$ is the assembled shape and $S^*$ denotes the ground truth. The total loss is defined as follows:

$$L = w_c L_c + w_t L_t + w_r L_r + w_s L_s, \qquad (9)$$

where $w_c$, $w_t$, $w_r$ and $w_s$ denote the weights of the different losses, which are empirically determined.

Experiments and Analysis

Dataset and Baselines

We evaluated our method and the baselines on the PartNet (Mo et al. 2019) and Breaking Bad (Sellán et al. 2022, 2021) datasets. PartNet is a large-scale shape dataset with fine-grained and hierarchical part segmentation. We utilize the Chair subset with the default train/validation/test split, which includes 6,323 chairs. The Breaking Bad dataset (Sellán et al. 2022) contains a diverse set of shapes spanning everyday objects and artifacts without any manually annotated semantic information, e.g., instance labels. It contains one million geometrically natural fracture patterns, which meets our needs. We used the Everyday and Artifact categories. The number of parts or fractures used in all datasets ranges from 2 to 20.
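The symmetric Chamfer distance used in the training losses above (Eqs. 7 and 8) and in the SCD evaluation metric can be sketched as follows. This is a naive O(|S| · |S*|) NumPy version; practical implementations typically use KD-trees or GPU kernels.

```python
import numpy as np

def chamfer_distance(S, S_star):
    """Symmetric Chamfer distance between two point sets: the sum of each
    point's squared distance to its nearest neighbour in the other set,
    accumulated in both directions."""
    # pairwise squared distances, shape (|S|, |S*|)
    d2 = ((S[:, None, :] - S_star[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()
```

The distance is zero exactly when every point in each set coincides with a point in the other, which is why it can supervise both per-part rotation and the overall assembled shape.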
We compare the proposed method with Global (Li et al. 2020; Schor et al. 2019), LSTM (Wu et al. 2020a), CM (Sung et al. 2017), DGL (Zhan et al. 2020), RGL (Narayan, Nagar, and Raman 2022), and IET (Zhang et al. 2022). To ensure a fair comparison, the implementations of the baselines are based on the benchmark of (Sellán et al. 2022).

Evaluation Metrics

The performance of our proposed method and the baselines is measured by generating a variety of shapes and finding the closest shape to the ground truth using the minimum matching distance (Achlioptas et al. 2018). To ensure a thorough evaluation, we use the metrics of part accuracy (PA) (Li et al. 2020), connectivity accuracy (CA) (Zhan et al. 2020), and shape Chamfer distance (SCD) (Zhan et al. 2020), as employed by (Zhan et al. 2020). PA and CA assess the precision of each individual part and the quality of the connections between parts, respectively, while SCD evaluates the overall quality of the assembled shape. In addition, we compute the root mean squared errors of rotation, RMSE(R), and translation, RMSE(T) (Sellán et al. 2022), when conducting experiments on the Breaking Bad dataset.

Experimental Results and Analysis

We conducted a performance evaluation of our proposed method and various baselines, as illustrated in Table 1. Our proposed method demonstrates superior performance in most columns, particularly in the part and connectivity accuracy metrics. Red numerical values indicate the margin by which the proposed method outperforms the second-best method, and green values the extent by which it lags behind the best-performing method. The visual outcomes depicted in Figure 4 and Figures B.2 and B.3 provide empirical evidence that our proposed approach surpasses the baseline methods in generating meticulously organized geometries.
In contrast, the baseline methods often fail to achieve satisfactory assembly results. Subjectively speaking, our framework shows results that are almost indistinguishable from the ground truth. Figure 4 [d-h] shows that our framework performs well even with many parts. This is because our framework includes a co-creation space, whose information bottleneck property promotes the emergence of better assembler specialization, resulting in better generalization and logical reasoning abilities for the network (Tishby, Pereira, and Bialek 2000; Tishby and Zaslavsky 2015; Wu et al. 2020b; Ahuja et al. 2021; Goyal et al. 2021).

Table 1: Quantitative evaluation on PartNet (Chair) and Breaking Bad datasets (Everyday and Artifact). Dashes mark entries not reported.

Metrics               Category   LSTM    Global  CM      DGL     RGL     IET     Ours
SCD (x10^-3) ↓        Chair      13.10   14.60   24.10   09.10   08.70   09.40   07.00
                      Everyday   -       -       -       20.42   15.12   14.29   14.26
                      Artifact   -       -       -       25.42   17.47   16.46   15.49
PA ↑                  Chair      21.77   15.70   08.78   39.00   49.06   37.50   53.59
                      Everyday   -       -       -       18.30   26.40   26.94   28.57
                      Artifact   -       -       -       06.22   16.09   17.64   19.78
CA ↑                  Chair      06.89   09.90   09.19   23.87   32.26   24.47   38.97
RMSE(R) ↓             Everyday   -       -       -       83.50   80.99   80.49   79.18
                      Artifact   -       -       -       86.20   82.64   79.94   77.59
RMSE(T) (x10^-2) ↓    Everyday   -       -       -       16.63   15.35   15.07   15.10
                      Artifact   -       -       -       17.50   16.05   15.72   15.50

Table 2: The grid search of $w_c$ and $C$ on Artifact.

wc     C     SCD (x10^-3) ↓   PA ↑    RMSE(T) ↓   RMSE(R) ↓
0.10   5     1.717            19.53   0.1618      80.686
0.10   10    1.610            19.81   0.1585      81.049
0.10   15    1.651            19.10   0.1589      80.854
0.10   20    1.615            19.83   0.1566      80.054
0.10   25    1.603            19.83   0.1560      79.977
0.10   30    1.581            20.53   0.1554      80.811
0.10   35    1.623            19.39   0.1562      80.509
0      N/A   1.631            19.81   0.1579      80.507
0.05   30    1.685            19.10   0.1565      81.028
0.15   30    1.587            20.54   0.1553      80.261
0.20   30    1.645            19.74   0.1573      79.837
0.25   30    1.710            19.38   0.1561      80.442

Table 3: The grid search of k in our settings and ablation study. "Col. loss" denotes the collision loss.

Workspace                 PA ↑    SCD (x10^-3) ↓   CA ↑
w/o any workspace         45.89   08.10            31.73
k = 1, w/o Col. loss      49.87   07.60            35.65
k = 5, w/o Col. loss      49.19   07.30            33.75
k = 10, w/o Col. loss     52.53   07.10            38.40
k = 15, w/o Col. loss     52.52   07.32            36.90
k = 20, w/o Col. loss     50.23   07.30            35.56
k = 10, with Col. loss    53.59   07.00            38.97

From Figure 4 [c-h], we can see that in the baseline methods, similar-part problems are not handled well when collisions occur. However, this problem is resolved in our framework. The success of our framework relies on the proposed collision loss, which can identify local optima during model training, overcome them, and lead to better results. Figure 4 [a-b] shows that our framework also performs well on Breaking Bad, demonstrating that it has better relational reasoning abilities than previous baselines, even in assembly tasks without semantic information.

Ablation Study

Co-creation Space Analysis. The co-creation space is a collaborative environment for all assemblers. In this context, the parameter k represents the memory capacity of the co-creation space: k determines how much information each assembler can hold and process. Table 3 displays five different values of k, ranging from low to high memory capacities. As the value of k increases, assemblers can hold and process more information. However, there is a trade-off between memory capacity and communication efficiency among assemblers. In this particular task, the optimal result is achieved at k = 10: assemblers with a memory capacity of 10 can hold and process enough information to contribute effectively to the assembly process without becoming overwhelmed. The visual assembly effect, shown in Figure B.1, demonstrates the impact of memory capacity on the assembly process. At k = 10, the assembly is best because co-creators can hold and process enough information to assemble the product correctly.
However, when k is lower or higher than 10, problems such as misplaced parts or interference between similar parts can arise due to the information bottleneck. It is worth noting that our method fails to achieve a satisfactory result without the co-creation space. Thus, the proposed co-creation space brings an outstanding performance improvement, consistent with the qualitative analysis.

Collision Loss. We carried out experiments to assess the effectiveness of the collision loss. As illustrated in Figure 5, we compared the performance of our method with and without the collision loss. The results show a significant enhancement in the model's capability to place similar parts in different locations when the collision loss is used. This improvement is particularly noticeable when there are numerous comparable parts or a large number of parts. Thus, the collision loss is effective in resolving ambiguity between similar parts during the fracture assembly procedure. Furthermore, we add an ablation study for $w_c$ and $C$ in Table 2. The values of $w_t$, $w_r$ and $w_s$ follow previous works (Sellán et al. 2022).

Coarse-to-fine. Coarse-to-fine (CTF) is a sequential process that uses $x$ different networks to combine distinct parts into a complete shape (Zhan et al. 2020), with each network focusing on more detailed information ($x = 5$ in this paper). This approach enhances efficiency and accuracy by progressively refining the output at each step.

Figure 4: Visual comparisons between our algorithm and the baseline methods on Breaking Bad (a, b) and the PartNet dataset (c, d, e, f, g, and h).

Figure 5: Collision loss ablation study visualization.

The strategy was employed in both DGL and IET. To verify its impact on the experimental results, we implemented a version of IET that employs CTF with five iterations.
As depicted in Table 4, our method still outperforms the others under this condition.

Table 4: Coarse-to-fine ablation study.

Method       PA ↑    SCD (x10^-3) ↓   CA ↑
IET          37.50   09.40            24.47
Ours         53.59   07.00            38.97
IET + CTF    41.71   07.80            31.13
Ours + CTF   55.92   06.20            42.69

Conclusion

In this paper, we introduce a novel assembly paradigm that effectively addresses the scalability issue without relying on semantic information. First, we propose a co-creation space in which assemblers compete for write access, which facilitates step-by-step assembly and reduces confusion when dealing with multiple fractures. Second, we develop a geometric-based collision loss to minimize collision issues during assembly. Our method has been validated on public datasets, and we are currently exploring its application to robotic hands for assembling real objects.

Acknowledgments

This work is supported by the National Key Research and Development Project of China (2021ZD0110505), National Natural Science Foundation of China (U19B2042), the Zhejiang Provincial Key Research and Development Project (2023C01043), University Synergy Innovation Program of Anhui Province (GXXT-2021-004), Academy Of Social Governance Zhejiang University, and Fundamental Research Funds for the Central Universities (226-2022-00064). Furthermore, we thank the anonymous reviewers for their valuable comments and suggestions.

References

Achlioptas, P.; Diamanti, O.; Mitliagkas, I.; and Guibas, L. 2018. Learning representations and generative models for 3d point clouds. In International conference on machine learning, 40–49. PMLR.
Ahuja, K.; Caballero, E.; Zhang, D.; Gagnon-Audet, J.-C.; Bengio, Y.; Mitliagkas, I.; and Rish, I. 2021. Invariance principle meets information bottleneck for out-of-distribution generalization. Advances in Neural Information Processing Systems, 34: 3438–3450.
Baars, B. J. 1993. A cognitive theory of consciousness.
Cambridge University Press.
Chen, Y.-C.; Li, H.; Turpin, D.; Jacobson, A.; and Garg, A. 2022. Neural shape mating: Self-supervised object assembly with adversarial shape priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12724–12733.
Dehaene, S.; Lau, H.; and Kouider, S. 2021. What is consciousness, and could machines have it? Robotics, AI, and Humanity: Science, Ethics, and Policy, 43–56.
Funkhouser, T.; Shin, H.; Toler-Franklin, C.; Castañeda, A. G.; Brown, B.; Dobkin, D.; Rusinkiewicz, S.; and Weyrich, T. 2011. Learning how to match fresco fragments. Journal on Computing and Cultural Heritage (JOCCH), 4(2): 1–13.
Goyal, A.; Didolkar, A.; Lamb, A.; Badola, K.; Ke, N. R.; Rahaman, N.; Binas, J.; Blundell, C.; Mozer, M.; and Bengio, Y. 2021. Coordination among neural modules through a shared global workspace. arXiv preprint arXiv:2103.01197.
Grason, G. M. 2016. Perspective: Geometrically frustrated assemblies. The Journal of Chemical Physics, 145(11): 110901.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
Jones, B.; Hildreth, D.; Chen, D.; Baran, I.; Kim, V. G.; and Schulz, A. 2021. Automate: A dataset and learning approach for automatic mating of cad assemblies. ACM Transactions on Graphics (TOG), 40(6): 1–18.
Ke, N. R.; ALIAS PARTH GOYAL, A. G.; Bilaniuk, O.; Binas, J.; Mozer, M. C.; Pal, C.; and Bengio, Y. 2018. Sparse attentive backtracking: Temporal credit assignment through reminding. Advances in neural information processing systems, 31.
Kirsch, L.; Kunze, J.; and Barber, D. 2018. Modular networks: Learning to decompose neural computation. Advances in neural information processing systems, 31.
Lee, J.; Kim, J.; Chung, H.; Park, J.; and Cho, M. ???? Fragment Relation Networks for Geometric Shape Assembly. In Learning Meets Combinatorial Algorithms at NeurIPS2020.
Lee, J.; Kim, J.; Chung, H.; Park, J.; and Cho, M. 2022. Learning to Assemble Geometric Shapes. arXiv preprint arXiv:2205.11809.
Lee, Y.; Hu, E. S.; and Lim, J. J. 2021. IKEA furniture assembly environment for long-horizon complex manipulation tasks. In 2021 IEEE International Conference on Robotics and Automation (ICRA), 6343–6349. IEEE.
Li, Y.; Mo, K.; Shao, L.; Sung, M.; and Guibas, L. 2020. Learning 3d part assembly from a single image. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VI 16, 664–682. Springer.
Liu, D.; Shah, V.; Boussif, O.; Meo, C.; Goyal, A.; Shu, T.; Mozer, M.; Heess, N.; and Bengio, Y. 2022. Stateful active facilitator: Coordination and Environmental Heterogeneity in Cooperative Multi-Agent Reinforcement Learning. arXiv preprint arXiv:2210.03022.
Liu, J.; Hao, J.; Lin, H.; Pan, W.; Yang, J.; Feng, Y.; Wang, G.; Li, J.; Jin, Z.; Zhao, Z.; et al. 2023a. Deep learning-enabled 3D multimodal fusion of cone-beam CT and intraoral mesh scans for clinically applicable tooth-bone reconstruction. Patterns, 4(9).
Liu, J.; Hu, T.; Zhang, Y.; Feng, Y.; Hao, J.; Lv, J.; and Liu, Z. 2023b. Parameter-Efficient Transfer Learning for Medical Visual Question Answering. IEEE Transactions on Emerging Topics in Computational Intelligence.
Mo, K.; Zhu, S.; Chang, A. X.; Yi, L.; Tripathi, S.; Guibas, L. J.; and Su, H. 2019. Partnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 909–918.
Narayan, A.; Nagar, R.; and Raman, S. 2022. RGL-NET: A recurrent graph learning framework for progressive part assembly. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 78–87.
Pfeiffer, J.; Ruder, S.; Vulić, I.; and Ponti, E. M. 2023. Modular deep learning. arXiv preprint arXiv:2302.11529.
Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017.
Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 652–660.
Scarselli, F.; Gori, M.; Tsoi, A. C.; Hagenbuchner, M.; and Monfardini, G. 2008. The graph neural network model. IEEE transactions on neural networks, 20(1): 61–80.
Schor, N.; Katzir, O.; Zhang, H.; and Cohen-Or, D. 2019. Componet: Learning to generate the unseen by part synthesis and composition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8759–8768.
Sellán, S.; Chen, Y.-C.; Wu, Z.; Garg, A.; and Jacobson, A. 2022. Breaking Bad: A Dataset for Geometric Fracture and Reassembly. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Sellán, S.; Luong, J.; Da Silva, L. M.; Ramakrishnan, A.; Yang, Y.; and Jacobson, A. 2021. Breaking Good: Fracture Modes for Realtime Destruction. ACM Transactions on Graphics (TOG).
Sung, M.; Su, H.; Kim, V. G.; Chaudhuri, S.; and Guibas, L. 2017. ComplementMe: Weakly-supervised component suggestions for 3D modeling. ACM Transactions on Graphics (TOG), 36(6): 1–12.
Tishby, N.; Pereira, F. C.; and Bialek, W. 2000. The information bottleneck method. arXiv preprint physics/0004057.
Tishby, N.; and Zaslavsky, N. 2015. Deep learning and the information bottleneck principle. In 2015 IEEE Information Theory Workshop (ITW), 1–5. IEEE.
Toler-Franklin, C.; Brown, B.; Weyrich, T.; Funkhouser, T.; and Rusinkiewicz, S. 2010. Multi-feature matching of fresco fragments. ACM Transactions on Graphics (TOG), 29(6): 1–12.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Wu, R.; Zhuang, Y.; Xu, K.; Zhang, H.; and Chen, B. 2020a. Pq-net: A generative part seq2seq network for 3d shapes.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 829–838.
Wu, T.; Ren, H.; Li, P.; and Leskovec, J. 2020b. Graph information bottleneck. Advances in Neural Information Processing Systems, 33: 20437–20448.
Zhan, G.; Fan, Q.; Mo, K.; Shao, L.; Chen, B.; Guibas, L. J.; Dong, H.; et al. 2020. Generative 3d part assembly via dynamic graph learning. Advances in Neural Information Processing Systems, 33: 6315–6326.
Zhang, R.; Kong, T.; Wang, W.; Han, X.; and You, M. 2022. 3D Part Assembly Generation With Instance Encoded Transformer. IEEE Robotics and Automation Letters, 7(4): 9051–9058.
Zhu, D.; Li, Y.; Yuan, J.; Li, Z.; Kuang, K.; and Wu, C. 2023a. Universal domain adaptation via compressive attention matching. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6974–6985.
Zhu, D.; Li, Y.; Zhang, M.; Yuan, J.; Liu, J.; Kuang, K.; and Wu, C. 2023b. Bridging the Gap: Neural Collapse Inspired Prompt Tuning for Generalization under Class Imbalance. arXiv preprint arXiv:2306.15955.
S3A: Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment

Sheng Zhang1, Muzammal Naseer1, Guangyi Chen1,2, Zhiqiang Shen1, Salman Khan1,3, Kun Zhang1,2, Fahad Shahbaz Khan1,4
1Mohamed bin Zayed University of Artificial Intelligence 2Carnegie Mellon University 3Australian National University 4Linköping University
{sheng.zhang, muzammal.naseer, guangyi.chen, zhiqiang.shen, salman.khan}@mbzuai.ac.ae, kunz1@cmu.edu, fahad.khan@liu.se

Abstract

Large-scale pre-trained Vision Language Models (VLMs) have proven effective for zero-shot classification. Despite the success, most traditional VLMs-based methods are restricted by the assumption of partial source supervision or ideal target vocabularies, which rarely satisfy the open-world scenario. In this paper, we aim at a more challenging setting, Realistic Zero-Shot Classification, which assumes no annotation but instead a broad vocabulary. To address the new problem, we propose the Self Structural Semantic Alignment (S3A) framework, which extracts the structural semantic information from unlabeled data while simultaneously self-learning. Our S3A framework adopts a unique Cluster-Vote-Prompt-Realign (CVPR) algorithm, which iteratively groups unlabeled data to derive structural semantics for pseudo-supervision. Our CVPR algorithm includes iterative clustering on images, voting within each cluster to identify initial class candidates from the vocabulary, generating discriminative prompts with large language models to discern confusing candidates, and realigning images and the vocabulary as structural semantic alignment. Finally, we propose to self-train the CLIP image encoder with both individual and structural semantic alignment through a teacher-student learning strategy.
Our comprehensive experiments across various generic and fine-grained benchmarks demonstrate that the S3A method substantially improves over existing VLMs-based approaches, achieving a more than 15% accuracy improvement over CLIP on average. Our codes, models, and prompts are publicly released at https://github.com/shengeatamath/S3A.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

In recent years, large-scale pre-trained Vision Language Models (VLMs) such as CLIP (Radford et al. 2021; Ren et al. 2022), ALIGN (Li et al. 2021), and BLIP (Li et al. 2022, 2023) have garnered significant attention for their remarkable zero-shot generalization ability on multifarious downstream tasks, particularly in recognizing unseen categories (Zhang et al. 2023a). The common practice to leverage this ability is packing category names into a textual prompt (e.g., "A photo of a [CLS]") and aligning image embeddings with text embeddings of filled prompts in the VLM joint embedding space for classification. To adapt pre-trained VLMs to downstream unseen data, existing prevailing methods (Wu et al. 2023; Zang et al. 2022; Ghiasi et al. 2021) usually assume access to source labeled data (Chen et al. 2022; Khattak et al. 2023; Zhou et al. 2022a) (e.g., in zero-shot learning (Zhou et al. 2022b; Gao et al. 2021)), the target label distribution (e.g., in unsupervised prompt tuning (Kahana, Cohen, and Hoshen 2022)), or an ideal vocabulary that exactly matches the ground-truth label set or contains very few open words (e.g., in open-vocabulary learning (Wu et al. 2023; Zang et al. 2022; Ghiasi et al. 2021)). However, this ideal vocabulary is unattainable without exhaustive annotation of all unseen data, whereas human annotations are exorbitant and difficult to scale. Therefore, both assumptions are restrictive and impractical in diverse and dynamic open-world scenarios.
In this paper, we embark on a journey towards Realistic Zero-Shot Classification (RZSC), a more challenging yet practical problem compared with conventional zero-shot learning due to its realistic conditions. We use the term Realistic to reflect the realistic nature of RZSC, which aims to recognize categories on unseen datasets without annotation or an ideal vocabulary, but with a vast, comprehensive vocabulary of more than 20K category names encompassing all common classes (Sariyildiz et al. 2021; Ridnik et al. 2021). This is challenging since the vast vocabulary can lead to alignment confusion among fine-grained options, as we witness the consistent and dramatic CLIP (Radford et al. 2021) performance drops and reduced neighborhood ranges in Fig. 1.

To confront this challenge, we introduce the Self Structural Semantic Alignment (S3A) framework, which iteratively discovers structural semantic alignment from unlabeled data for joint self-learning. This is orchestrated through our unique Cluster-Vote-Prompt-Realign (CVPR) algorithm, a principled process comprising four key steps: (1) Clustering unearths inherent grouping structures of image embeddings, producing meaningful image semantics. (2) Voting associates each cluster with initial category candidates, representing potential structural semantic alignments. These two steps can be executed iteratively to obtain more reliable candidates. (3) Prompting leverages the power of large language models (LLMs) to discern nuanced candidates by augmenting prompts with discriminative attributes. (4) Re-alignment calibrates the cluster-vocabulary alignment with the LLM-augmented prompts, yielding pseudo structural semantic alignment labels. Incorporating our CVPR algorithm, our S3A framework self-trains a student model based on individual and structural semantic alignment labels derived from a stable teacher. Simultaneously, the teacher is updated by student weights to produce more reliable pseudo semantic alignments.

We extensively evaluate our S3A framework across multiple setups, spanning various generic and fine-grained benchmarks. The results show that S3A not only consistently outperforms previously adapted state-of-the-art (SOTA) methods under the RZSC setting on all benchmarks, but also excels in out-of-vocabulary evaluation, where category names can fall outside the S3A vocabulary. Comprehensive evaluations evidence the effectiveness of our S3A framework in RZSC.

Our contributions include: (1) We propose a Self Structural Semantic Alignment (S3A) framework to address the challenging Realistic Zero-Shot Classification problem, which jointly extracts and self-learns on the individual and structural semantic alignment. (2) We propose a Cluster-Vote-Prompt-Realign algorithm to derive reliable structural semantic alignments between images and the large vocabulary. (3) S3A achieves SOTA performance on various generic and fine-grained benchmarks, remarkably boosting CLIP by over 15% accuracy, even in out-of-vocabulary scenarios.

Setting                  | Vocab. | Anno. | Train
-------------------------|--------|-------|-------
Zero-Shot Transfer       | Ytgt   | ✗     | Ytgt
Zero-Shot Classification | Ytgt   | ✓     | Ybase
Open-Vocabulary Learning | <2K    | ✓     | Ybase
Unsupervised Fine-tuning | Ytgt   | ✗     | Ytgt
RZSC                     | >20K   | ✗     | Ytgt

Table 1: Our realistic zero-shot classification and other related settings. Here, following (Wu et al. 2023), we denote Ybase and Ytgt as the sets of base training classes and target testing classes, which satisfy Ybase ∩ Ytgt = ∅. The learning goal of all settings is to recognize Ytgt in test data.

Related Work

Zero-Shot Learning/Open-Vocabulary Learning with VLMs. Traditional (Generalized) Zero-Shot Classification (ZSC) aims to categorize novel classes in unseen test data with training on annotated base/seen classes, with or without unlabeled target novel classes (Wang et al. 2019; Pourpanah et al. 2022).
However, they usually assume auxiliary semantic information of both seen and unseen classes, e.g., category attributes (Lampert, Nickisch, and Harmeling 2009), knowledge graphs (Akata et al. 2015), or textual keywords (Lei Ba et al. 2015; Cappallo, Mensink, and Snoek 2016). Recently, large-scale pre-trained VLMs have been introduced to alleviate these assumptions (Jia et al. 2021; Radford et al. 2021; Zhang et al. 2023a). Furthermore, Open-Vocabulary Learning (OVL) (Wu et al. 2023; Zhou et al. 2023; Zhou, Loy, and Dai 2022; Karazija et al. 2023) aims to train the models with some annotated data, i.e., base classes, or large-scale image-text pairs, and to test them on target novel classes (Xu et al. 2023; Shin, Albanie, and Xie 2023). Our RZSC setting differs from conventional ZSC and OVL in not requiring any labeled training data, and not assuming an ideal vocabulary with a ground-truth target label set or one with few open words (Wu et al. 2023; Xu et al. 2023; Karazija et al. 2023).

Figure 1: (a) Performance comparison between CLIP w/ an ideal vocabulary (Green) and w/ a large vocabulary of 20K categories (Pink). (b) Distribution plot of text-to-text average 3-Nearest-Neighbors cosine similarity of each text embedding for three types of vocabulary: with ImageNet-100, ImageNet-1K, and 20K category names.

Zero-Shot Transfer/Unsupervised Fine-tuning of VLMs. Both Zero-Shot Transfer (ZST) and Unsupervised Fine-tuning (UF) assume no annotations of target datasets, which are essentially visual concept discovery problems (Vaze et al. 2022; Wen, Zhao, and Qi 2023; Zhang et al. 2023b) with a vocabulary prior. ZST (Radford et al. 2021; Ren et al. 2022) directly uses the pre-trained VLMs for zero-shot prediction without fine-tuning. UF further transductively adapts pre-trained models with task-specific training, e.g., with self-training or prompt tuning (Li, Savarese, and Hoi 2022; Kahana, Cohen, and Hoshen 2022; Shin, Albanie, and Xie 2023).
However, both ZST and UF assume known ground-truth target label sets or distributions (Kahana, Cohen, and Hoshen 2022; Li, Savarese, and Hoi 2022). In this paper, we aim to alleviate the reliance on these assumptions and propose a new setting, RZSC. Besides, an extended ZST work, SCD (Han et al. 2023), iteratively refines CLIP zero-shot inference predictions on a WordNet vocabulary (Miller 1995) with a heuristic clustering algorithm. However, it has limited adaptability (Li, Savarese, and Hoi 2022) and a mismatched linguistic vocabulary that is still based on the closed-world assumption.

Discussion on Zero-Shot Settings. Here, we summarize the main differences between our RZSC setting and others in Table 1. Previous related settings adopt restrictive assumptions including an ideal vocabulary, the target label distribution, and labeled base classes. By contrast, our RZSC aims to learn to categorize an unlabeled dataset with a huge vocabulary based on a large visual taxonomy with over 20K classes. An expanded vocabulary presents significant challenges for the RZSC problem, as evidenced by the consistent and substantial CLIP performance drop (Fig. 1a) on all datasets when the vocabulary scales up.

Figure 2: Illustration of our Self Structural Semantic Alignment (S3A) framework, which fine-tunes the pre-trained CLIP encoder with a teacher-student architecture. The teacher is updated by the student in an exponential moving average manner. The student is guided by on-the-fly one-hot instance alignments predicted by the teacher, and self-trains with structural semantic alignment labels derived by our per-epoch CVPR algorithm on all teacher image embeddings.

The primary challenge arises from increased confusing open words, complicating fine-grained category discrimination for pre-trained VLMs. As displayed in Fig.
1b, the averaged cosine similarity between a query text embedding and its 3-nearest text neighbors grows with the vocabulary size.

Methodology

Problem: Realistic Zero-Shot Classification

Existing methods that adapt pre-trained VLMs to unseen data usually rely on specific knowledge of target datasets, such as prior distributions or an ideal vocabulary. These conditions are often challenging to fulfill in real-world environments. In this paper, we explore a more practical task, Realistic Zero-Shot Classification, abbreviated as RZSC. RZSC is formally defined as follows: Consider an unlabeled dataset D_u = {(x_i, y_i)}_{i=1}^N ⊂ X × Y with N images, where Y is the underlying category set, and a pre-trained VLM such as CLIP, equipped with image and text encoders f_I and f_T, respectively. We assume no information about Y; instead, we assume a comprehensive vocabulary W that contains more than 20,000 distinct category names, i.e., |Y| ≪ |W|. We build our vocabulary from all visual categories of the ImageNet-1K (Deng et al. 2009) and ImageNet-21K (Ridnik et al. 2021) datasets, since they are annotated with expert taxonomic knowledge (Miller 1995) and encompass most commonly seen visual categories in the real world (Sariyildiz et al. 2021). The goal of the RZSC task is to adapt the pre-trained VLM, i.e., f_I and f_T, to predict the correct category of an unseen dataset:

ŷ_i = argmax_{w_j ∈ W} z_i · h_j,  (1)

where z_i = f_I(x_i) denotes the image embedding, and the text embedding h_j = f_T(w_j) is obtained with a text prompt, e.g., "a photo of a {category}", and the category name w_j. Here, we denote · as cosine similarity.

Overview: Self Structural Semantic Alignment

RZSC presents a more formidable challenge than previous tasks, primarily owing to the absence of label information and an increased vocabulary size. As illustrated in part (a) of Fig. 1, the performance of CLIP declines sharply as the vocabulary size increases.
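The prediction rule in Eq. 1 is simply a cosine-similarity argmax over the vocabulary. A minimal, dependency-free sketch with toy 2-D embeddings (the function and inputs are illustrative, not the paper's implementation):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_predict(image_emb, text_embs):
    """Eq. 1: return the index j of the vocabulary word w_j whose
    text embedding h_j is most cosine-similar to z_i = f_I(x_i)."""
    scores = [cosine(image_emb, h) for h in text_embs]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy 2-D embeddings: the image embedding is closest to entry 0.
vocab_embs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
pred = zero_shot_predict([0.95, 0.05], vocab_embs)
```

With a 20K-word vocabulary, this same argmax runs over 20K text prototypes, which is exactly where the fine-grained confusion discussed above arises.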
This decline can be attributed to the inclusion of confusing open words as hard negative samples, which introduces noise to pre-trained CLIP, hindering its ability to accurately identify image-category alignments. We are motivated to propose our Self Structural Semantic Alignment (S3A) framework, which discovers the structural semantics through iterative self-alignment between visual images and the textual vocabulary. As shown in Fig. 2, our S3A incorporates a Cluster-Vote-Prompt-Realign (CVPR) algorithm to derive structural semantics as alignment labels, and both the models and the pseudo alignments are iteratively refined during self-training. Our CVPR algorithm and the S3A self-training procedure achieve a synergistic effect: as training progresses in adapting representations, the teacher model can provide increasingly reliable pseudo alignments in subsequent iterations. Concurrently, the CVPR algorithm contributes structural semantics as a refined supervisory signal for subsequent self-training. We elaborate on all components in the sections that follow.

Cluster-Vote-Prompt-Realign (CVPR)

Our Cluster-Vote-Prompt-Realign algorithm lies at the heart of the S3A framework, representing an innovative approach to uncovering structural semantics in data. As illustrated in Fig. 3, our CVPR algorithm consists of four key stages, each contributing to the alignment and identification of structural relationships between visual images and the textual vocabulary: discovering semantic clusters, voting for category names over the large vocabulary, prompting an LLM to discriminate nuanced candidates, and refining the cluster-vocabulary alignment. Below we delineate these stages and their functions within the algorithm.

Clustering. Based on existing evidence (Radford et al. 2021) and our observation, the pre-trained CLIP excels at grouping instances with the same or similar semantic labels in the image embedding space.
We thus produce the pseudo supervision by semantic clustering and aligning the clusters with the vocabulary. Specifically, given the image embeddings z_i in D_u, we apply KMeans (Arthur and Vassilvitskii 2007) to obtain K clusters, Γ = {Γ_k}_{k=1}^K, where Γ_k denotes the k-th set of image embeddings.

Voting. Given the semantic clustering results Γ, we compute a vocabulary voting distribution matrix M ∈ R^{K×|W|}, where M_{k,j} represents the normalized frequency of the prototype of category w_j being the nearest neighbor to the instances in the k-th cluster. Specifically, it is computed as

M_{k,j} = (1 / (K|Γ_k|)) Σ_{z ∈ Γ_k} I(w_j = argmax_w z · h_w),  (2)

where I is an indicator function, h_w is the text prototype of word w, and |Γ_k| denotes the size of the k-th cluster. M is cluster-wise and vocabulary-wise normalized, with ||M||_1 = 1. Rather than naively assigning each cluster to the argmax prototype in the vocabulary, we keep the top-m frequent words for each cluster as potential candidates, which are treated equally. For each row M_k = (M_{k,j})_{j=1}^{|W|}, we set all entries but the highest m ones to 0. Nonetheless, the initial clustering and voting may introduce noise, leading to low-quality pseudo-labels. To mitigate this issue, we iteratively refine the previous clusters based on the current voting outcomes.
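The voting computation (Eq. 2) with top-m truncation, together with a brute-force stand-in for the one-to-one cluster-word assignment, can be sketched as follows. Cluster memberships and per-instance nearest prototypes are assumed precomputed, and all names are illustrative:

```python
from collections import Counter
from itertools import permutations

def voting_matrix(clusters, nearest_word, vocab_size, m=3):
    """Eq. 2 with top-m truncation. clusters[k] lists the instance
    indices in Gamma_k; nearest_word[i] is the vocabulary index of the
    text prototype nearest to instance i's image embedding."""
    K = len(clusters)
    M = [[0.0] * vocab_size for _ in range(K)]
    for k, members in enumerate(clusters):
        votes = Counter(nearest_word[i] for i in members)
        for w, count in votes.items():
            M[k][w] = count / (K * len(members))  # normalized frequency
        keep = {w for w, _ in votes.most_common(m)}
        for w in range(vocab_size):  # zero all but the top-m candidates
            if w not in keep:
                M[k][w] = 0.0
    return M

def assign_clusters(M, candidates):
    """Toy stand-in for Hungarian matching (Kuhn 1955): exhaustively
    pick the one-to-one cluster-to-word mapping with maximal voting
    mass. Only feasible for a handful of clusters; an O(n^3) solver
    such as scipy.optimize.linear_sum_assignment would be used at the
    paper's scale."""
    K = len(M)
    return max((list(p) for p in permutations(candidates, K)),
               key=lambda p: sum(M[k][p[k]] for k in range(K)))
```

The truncation mirrors keeping only the top-m candidate words per row; the exhaustive assignment is a readability device, not the method's actual solver.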
In particular, we utilize Hungarian matching (Kuhn 1955) between textual embeddings and clusters to align each cluster with a single prototype. Subsequently, we reassign the image embeddings, using these prototypes as the updated cluster centers (Han et al. 2023). We iterate this process three times.

Prompting. Through our empirical studies, we observed that the CLIP representation struggles to differentiate nuanced candidates effectively. This observation spurred our efforts to refine the embeddings of textual candidates. We speculate that the challenge in distinguishing fine-grained options arises from the presence of noisy or ambiguous image-caption alignments during CLIP pre-training. To address this challenge, our approach is to enhance the conventional CLIP prompts by accentuating the subtle semantic differences. We achieve this by integrating auxiliary textual knowledge drawn from LLMs, which are effective in knowledge retrieval (Dale 2021; Yang et al. 2021). Specifically, we feed the m candidate category words of the k-th cluster into a single LLM prompt template, each accompanied by its specific definition. Then, we add an instruction to the prompt to extract nuanced visual attributes of each category from the LLM. Our prompt template is structured as:

Prompt: Given visual concepts: [CLS-1]: [DEF-1], ..., [CLS-m]: [DEF-m].
Instruction: To discriminate these visual concepts in a photo. Please list all possible visual descriptive phrases for each visual concept.

In this template, [CLS] represents the category name, and [DEF] stands for its definition from WordNet (Miller 1995). The LLM then generates a list of distinctive attributes for each category, such as 'red-and-black tail'. To avoid linguistic ambiguity arising from the polysemy phenomenon, we utilize all possible synset-definition pairs in WordNet (Miller 1995) for a single category as the input visual concepts for the LLM prompt.
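Assembling the LLM query from one cluster's candidate (name, definition) pairs can be sketched as follows; the candidate inputs are hypothetical, while the wording follows the template above:

```python
def build_llm_prompt(candidates):
    """Fill the paper's template with one cluster's m candidate
    categories and their WordNet definitions.
    candidates: list of (category_name, definition) pairs."""
    concepts = ", ".join(f"[{name}]: [{definition}]"
                         for name, definition in candidates)
    return (f"Given visual concepts: {concepts}. "
            "Instruction: To discriminate these visual concepts in a "
            "photo. Please list all possible visual descriptive "
            "phrases for each visual concept.")

# Hypothetical nuanced candidates for one cluster.
query = build_llm_prompt([
    ("lorikeet", "a small brightly colored parrot"),
    ("macaw", "a large long-tailed parrot"),
])
```

Listing all candidates in a single query is the key design choice: it asks the LLM for attributes that separate the candidates from each other, rather than generic descriptions of each in isolation.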
Finally, each (category, attribute) pair is filled into a CLIP prompt for augmentation, e.g., "A photo of a {category} with {attribute}.". An ensemble of augmented text embeddings is thus constituted for each category name.

Re-alignment. During the re-alignment phase, our goal is to enhance the structural semantic alignments in Eq. 2. The refined re-alignment matrix, M̃ ∈ R^{K×|W|}, is derived by casting votes on all augmented text embeddings generated in the previous prompting stage. Specifically, the re-alignment probability between the k-th cluster and word w_j is determined by the frequency with which augmented embeddings of the word w_j in A_k are among the top-3 nearest neighbors of z ∈ Γ_k. We denote A_k as the set of augmented embeddings of all candidate category words of Γ_k. It can be formulated as:

M̃_{k,j} = (α_{w_j} / (3K|Γ_k|)) Σ_{z ∈ Γ_k} I(w_j ∈ arg top3_w(z, A_k)),  (3)

where arg extracts the category name linked with the augmented text embedding in A_k. To avoid the imbalance issue raised by the varied numbers of augmented embeddings of different category names, we consider the weight factor α_{w_j} = 1/|A_k(w_j)|, which uniformly distributes a total mass of 1 over all augmented embeddings of w_j. Therefore, each row of M̃ sums to 1/K, and ||M̃||_1 = 1. We again employ maximum Hungarian matching (Kuhn 1955) on a bipartite graph between clusters and category words, with the cost matrix M̃. Consequently, the structural alignment is obtained from the solution, which enforces a one-to-one mapping between clusters and category names.

Self-training with Semantic Alignment

In this section, we present our S3A self-training framework, as depicted in Fig. 2. The self-training process leverages both instance-wise and structural alignment pseudo labels, which are derived by our CVPR algorithm with an exponential moving average (EMA) teacher model (Grill et al. 2020).
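A minimal sketch of this teacher-student loop — the EMA weight update, plus the thresholded cross-entropy on teacher pseudo-labels that Eqs. 4-6 formalize — assuming weights are flat lists and distributions are plain Python lists (illustrative only):

```python
import math

def ema_update(teacher, student, decay=0.999):
    """EMA teacher update: t <- decay * t + (1 - decay) * s,
    applied weight-wise."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher, student)]

def alignment_loss(p_student, structural_idx, teacher_conf,
                   instance_idx, tau=0.5, gamma=0.25):
    """Per-instance objective in the spirit of Eqs. 4-6:
    cross-entropy to the one-hot structural pseudo-alignment, plus a
    gamma-weighted instance term kept only when the teacher's max
    softmax probability exceeds the confidence threshold tau."""
    loss = -math.log(p_student[structural_idx])      # structural term (Eq. 4)
    if teacher_conf > tau:                           # confidence gate (Eq. 5)
        loss += gamma * -math.log(p_student[instance_idx])
    return loss                                      # total (Eq. 6)
```

The slow EMA decay is what keeps the teacher's embeddings stable enough for the per-epoch CVPR run described next.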
Throughout this process, we adapt the CLIP image encoder to enhance its representation while keeping the text encoder fixed.

Structural Semantic Alignment. To incorporate the structural semantic alignments into online learning, one challenge needs to be addressed. Obtaining high-quality structural alignment pseudo-labels requires consistent model embeddings over the entire dataset, which is computationally costly; moreover, determining the optimal execution interval of CVPR across datasets is challenging. To mitigate these issues, we introduce a slowly updated EMA teacher model. It provides stably refined embeddings and executes the CVPR algorithm once per epoch to yield stable and reliable structural pseudo alignments, which then guide the self-training of the student model. We define the structural semantic alignment loss as the cross-entropy between the predictions of the student model and the pseudo structural alignments generated by the teacher model. Formally, this loss for the i-th instance can be expressed as:

L_str(x_i) = −p̂_T(i)^⊤ log p_S(x_i).  (4)

In this equation, p̂_T(i) represents the one-hot pseudo structural alignment for the i-th instance, which is inferred from the teacher's CVPR results during the last epoch. On the other hand, p_S denotes the softmax prediction of the student model over the entire vocabulary, computed for the input x_i. As a result, the sharpened pseudo labels can cluster images with the same semantics as well as align clusters.

Individual Semantic Alignment. In addition to the structural semantic alignment loss, we also guide our model with instance-wise pseudo alignments, which are generated on-the-fly by the EMA teacher model. Without this guidance, our model would likely converge to suboptimal solutions rapidly. We formulate the individual semantic alignment loss for the i-th instance as follows:

L_in(x_i) = −I(p̃_T(x_i) > τ) p̃_T(x_i)^⊤ log p_S(x_i).
(5)

In this equation, p̃_T represents the one-hot sharpened pseudo label produced by the teacher model at each iteration. The symbol τ denotes a confidence threshold, which ensures that the loss is computed only for samples for which the teacher model has a high level of confidence. To strike a balance between the structural and instance alignment losses, we introduce a weighted combination of both. In this way, individual alignment retains the original instance alignment information, while structural alignment groups and aligns similar semantics. Consequently, our total loss function for the i-th instance is formulated as:

L(x_i) = L_str(x_i) + γ L_in(x_i).  (6)

Here, γ represents a balancing factor that weights the contribution of the instance alignment loss relative to the structural alignment loss. This total loss is computed at each iteration, based on our CVPR algorithm, which is executed once per epoch on the teacher model.

Experiments

Evaluation Datasets. We evaluate S3A on two generic and five fine-grained benchmarks, i.e., the generic benchmarks of sampled ImageNet-100 (IN100) and ImageNet-1K (IN1K) (Deng et al. 2009), and the fine-grained benchmarks of StanfordDogs-120 (SDogs) (Khosla et al. 2011), and Living17-68 (LV17), Nonliving26-104 (NL26), Entity13-260 (ET13), and Entity30-240 (ET30) in BREEDS (Santurkar, Tsipras, and Madry 2020). Furthermore, we evaluate our S3A on three benchmarks for the out-of-vocabulary evaluation (containing categories out of our vocabulary), i.e., Oxford-IIIT Pet (Pet) (Parkhi et al. 2012), CIFAR100 (Krizhevsky, Hinton et al. 2009), and Caltech101 (Caltech) (Fei-Fei, Fergus, and Perona 2004).

Metrics. We adopt the top-1 classification accuracy and the clustering accuracy (following SCD (Han et al. 2023) and defined below) for evaluation:

ACC_clu = max_ρ (1/N) Σ_{i=1}^{N} I(y_i = ρ(ŷ_i)),  (7)

where ρ is a permutation assignment of cluster indices, and y_i and ŷ_i are the ground-truth and predicted categories.
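The clustering-accuracy metric (Eq. 7), and the label-set IoU used as an auxiliary metric below, can be sketched as follows; brute-force search over permutations stands in for the Hungarian-matching implementation typically used, and is only viable for toy label sets:

```python
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """Eq. 7: the best accuracy over permutations rho of the
    predicted cluster indices."""
    pred_ids = sorted(set(y_pred))
    targets = sorted(set(y_true) | set(y_pred))
    best = 0.0
    for perm in permutations(targets, len(pred_ids)):
        rho = dict(zip(pred_ids, perm))
        acc = sum(t == rho[p] for t, p in zip(y_true, y_pred)) / len(y_true)
        best = max(best, acc)
    return best

def label_set_iou(y_pred_set, y_gt_set):
    """Overlap between predicted and ground-truth label sets:
    |intersection| / |union|."""
    return len(y_pred_set & y_gt_set) / len(y_pred_set | y_gt_set)
```

For example, a clustering whose indices are merely relabeled (all 0s swapped with 1s) still scores a clustering accuracy of 1.0 under the optimal permutation.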
Meanwhile, we adopt the Intersection-over-Union (IoU) score as an auxiliary metric in ablations to inspect the overlap between our predictions Ypred and the ground-truth label set Ygt, i.e., |Ypred ∩ Ygt| / |Ypred ∪ Ygt|. In the out-of-vocabulary experiments, some class names cannot be found in the vocabulary. Thus, we instead apply a soft accuracy score, defined as the similarity between the predicted word (in the vocabulary) and the ground-truth label. Inspired by BERTScore (Zhang et al. 2019), we adopt a language model, Sentence-BERT (Reimers and Gurevych 2019), to calculate the similarity.

Baselines. RZSC is a new setting in which few baselines are ready-to-use. Thus, we evaluate the baseline methods by reproducing them with officially released codes in our setting. Specifically, we consider CLIP as the naive baseline, and two state-of-the-art methods in ZST and UF, i.e., SCD (Han et al. 2023) and MUST (Li, Savarese, and Hoi 2022). In summary, the following baselines are included for performance comparisons:
• DINO+KMeans (Caron et al. 2021): DINO is a contrastive self-supervised learning method. We include it here for clustering quality comparisons. We only report its clustering accuracy as it cannot classify.
• CLIP (Radford et al. 2021): a large-scale VLM pre-trained with massive image-caption pairs conducts zero-shot prediction given our vocabulary.
• CLIP (Group) (Radford et al. 2021): We sequentially conduct clustering, voting, and Hungarian matching on CLIP image embeddings for structural zero-shot transfer, using the S3A vocabulary.
• CLIP (Ideal) (Radford et al. 2021): it denotes zero-shot transfer with pre-trained CLIP but given an ideal vocabulary, showcasing the upper-bound performance of the CLIP representation.
• MUST (Li, Savarese, and Hoi 2022): it is an unsupervised ZSC method leveraging instance-wise unsupervised self-training jointly with self-supervised masked-image prediction. We adapt it with our huge vocabulary.
• SCD (Han et al. 2023): it is an unsupervised/semi-supervised zero-shot transfer method with a WordNet vocabulary. Its iterative algorithm aligns each cluster with one category name. We adapt it with our S3A vocabulary.

Methods      | SDogs       | IN100       | IN1K        | LV17        | NL26        | ET13        | ET30        | Avg
-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|------------
CLIP (Ideal) | 57.59/58.07 | 84.67/84.90 | 65.38/65.53 | 75.24/75.53 | 80.03/80.05 | 78.43/78.50 | 77.99/78.03 | 74.19/74.37
DINO+KMeans  | -/45.99     | -/75.16     | -/55.27     | -/72.52     | -/62.81     | -/67.37     | -/64.69     | -/63.40
CLIP         | 53.43/55.43 | 29.39/38.54 | 32.21/39.77 | 37.31/47.24 | 33.35/38.96 | 30.09/40.00 | 31.23/39.90 | 35.29/42.83
CLIP (Group) | 19.37/55.92 | 40.62/77.68 | 26.41/56.92 | 38.33/68.81 | 41.09/70.51 | 32.85/71.08 | 36.36/70.78 | 33.57/67.38
MUST         | 57.20/60.61 | 33.37/52.56 | 28.97/37.00 | 31.71/49.35 | 35.30/48.68 | 38.46/58.25 | 33.41/47.08 | 36.92/50.50
SCD          | 52.63/55.93 | 48.89/77.39 | 37.06/57.00 | 43.33/68.81 | 52.18/71.84 | 40.46/71.25 | 46.29/70.89 | 45.83/67.59
S3A (Our)    | 58.94/62.19 | 52.08/82.76 | 42.43/63.15 | 48.34/75.57 | 56.20/75.97 | 45.21/76.92 | 50.41/76.14 | 50.51/73.24

Table 2: Transductive evaluation on seven benchmarks. Top-1 classification accuracy scores (left of '/') and clustering accuracy scores (right of '/') are reported in percentage. We highlight the highest scores except for the upper bound.

#Row | Prompt | S.T. | Lstr | IN100 Acc | IN100 Cluster | LV17 Acc | LV17 Cluster
-----|--------|------|------|-----------|---------------|----------|-------------
1    | ✗      | ✗    | ✗    | 48.89     | 77.39         | 43.33    | 68.81
2    | ✓      | ✗    | ✗    | 51.81     | 79.38         | 44.60    | 68.69
3    | ✗      | ✓    | ✗    | 46.23     | 81.49         | 46.28    | 74.60
4    | ✗      | ✓    | ✓    | 49.00     | 82.08         | 46.55    | 73.04
5    | ✓      | ✓    | ✓    | 52.08     | 82.76         | 48.34    | 75.57

Table 3: Top-1 accuracy and clustering results for our method ablations on IN-100 and LV17. We conduct ablations on our discriminative prompt augmentation (Prompt), self-training stage (S.T.), and structural semantic alignment loss (Lstr).

Implementation Details.
In our method, we fix m = 3 and γ = 0.25 on all datasets. Considering efficiency, we only compute the prompting at the first epoch. We adopt ViT-B/16 (Dosovitskiy et al. 2020) as our CLIP backbone. Our data augmentations and optimizer follow MUST (Li, Savarese, and Hoi 2022). We train on all datasets for up to 30K iterations, with 60 epochs for Pet and 30 epochs for the other datasets. Besides, we linearly warm up the EMA decay parameter to 0.9998 within specified iterations. We set the initial EMA weight decay to 0.99 for Pet and 0.999 for the other datasets. The warmup iterations are 500 for CIFAR, 100 for Pet, and 2000 for the other datasets. The threshold τ is 0.3 for CIFAR, 0.7 for Pet, and 0.5 for the other datasets. During inference, we adopt the teacher model for prediction over the entire S3A vocabulary. Experiments are conducted on a single A6000 GPU.

Main Results

To validate the effectiveness of our proposed method, S3A, we conducted an extensive evaluation under the RZSC setting. We compared our S3A with various baselines on both fine-grained and generic datasets. The results are in Table 2. Our method, S3A, consistently achieves SOTA results, outperforming CLIP by a substantial margin, i.e., +15% in top-1 accuracy. Furthermore, S3A notably excels over our adapted SOTA baselines, with nearly +5% top-1 accuracy and +6% clustering accuracy. Generally, we can observe that more classes introduce challenges, and fine-grainedness decreases clustering quality but improves alignment accuracy, e.g., on IN100 and NL26. Besides, CLIP (Group) encounters alignment issues despite quality clustering, as seen on IN1K and SDogs. We argue that our S3A can dynamically calibrate noisy clustering during self-training. Note that the existing UF SOTA, MUST, sometimes degenerates its initial representation when used with the S3A vocabulary. This underlines the significance of structural alignment learning for RZSC.

Ablations and Analysis

Method Ablations.
To validate the contribution of the S3A components, we conduct method ablations on one generic and one fine-grained dataset, i.e., IN100 and LV17. We present the results in Table 3. The last row represents our full method. When we only keep the initial iterative clustering in our CVPR (the 1st row), our method is equivalent to SCD (Han et al. 2023). The 2nd row denotes our CVPR without all self-training-related components, while the 3rd row conducts self-training only with instance-wise semantic alignment. The 4th row indicates our S3A without LLM knowledge guidance. Based on the results, we can conclude that: (1) From Rows 4 and 5, although the clustering quality remains comparable without our discriminative prompt augmentation, the semantic alignment degrades, as witnessed by the drop in top-1 accuracy. (2) From Rows 1, 2, and 3, self-training with structural alignment dominates the contribution in representation adaptation, witnessed by the clustering performance boosts. (3) From Rows 3 and 4, we observe that the structural alignment without prompt augmentation yields great improvements on generic datasets, while its effect is less pronounced on fine-grained datasets due to the lack of language signals to discriminate among similar visual categories. All components contribute to the performance.

Performance on Estimated K. In Table 4, we present performance with an estimated K instead of the ground-truth K used in Table 2.
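The K-estimation procedure described in this section — a coarse-to-fine scan scored by clustering quality — can be sketched generically as follows. Here `quality(k)` is a hypothetical stand-in for clustering the embeddings into k groups and computing the Silhouette score, and the probe count is an illustrative choice:

```python
def estimate_k(quality, lb0=50, ub0=2000, num_probes=8):
    """Three-pass coarse-to-fine search for the number of clusters K.
    quality(k) is assumed to cluster the embeddings into k groups and
    return a score such as the Silhouette coefficient (Rousseeuw
    1987); each pass keeps the best of num_probes evenly spaced
    candidates within its range."""
    def scan(lo, hi):
        lo, hi = int(lo), int(max(hi, lo + 1))
        ks = sorted({lo + round(i * (hi - lo) / (num_probes - 1))
                     for i in range(num_probes)})
        return max(ks, key=quality)
    s1 = scan(lb0, ub0)              # pass 1: scan [LB0, UB0]
    s2 = scan(lb0, s1)               # pass 2: scan [LB0, S1]
    return scan(s2, (s1 + s2) / 2)   # pass 3: scan [S2, (S1+S2)/2]
```

Each pass narrows the search interval around the previous best candidate, so only a few dozen clusterings need to be scored rather than every K in [LB0, UB0].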
We implement an iterative estimation approach with three passes: first, we scan the range [LB0, UB0], then [LB0, S1], and finally [S2, (S1 + S2)/2], each time applying the elbow algorithm optimized with the Silhouette score (Rousseeuw 1987). Here, S1 and S2 denote the solutions of the first and second passes. We consistently set LB0 = 50 and UB0 = 2000 for all datasets. Consequently, we obtain only minor (+1.0, +1.2, −0.9) train/cluster/test accuracy differences when K is overestimated by 30% w.r.t. the performance with the ground-truth K on NL26, which exhibits robustness.

Methods      | LV17 (73)   | NL26 (101)  | ET13 (252)  | ET30 (206)
DINO+KMeans  | -/72.68     | -/62.93     | -/67.41     | -/63.22
CLIP         | 37.31/47.24 | 33.35/38.96 | 30.09/40.00 | 31.23/39.90
MUST         | 31.71/49.35 | 35.30/48.68 | 38.46/58.25 | 33.41/47.08
SCD          | 40.70/69.17 | 52.63/70.21 | 40.12/71.37 | 45.03/69.14
S3A (Est-K)  | 49.83/76.23 | 57.10/75.66 | 45.54/77.23 | 47.86/72.75
S3A (GT-K)   | 48.34/75.57 | 56.20/75.97 | 45.21/76.92 | 50.41/76.14

Table 4: Transductive evaluation on four fine-grained benchmarks with estimated cluster numbers (Acc/Cluster). The estimated cluster number is shown in parentheses after each dataset name.

Methods      | Caltech (0.34) | CIFAR100 (0.12) | Pet (0.62)
CLIP (Ideal) | 91.25/90.96    | 81.54/81.12     | 90.87/92.39
CLIP         | 50.59/49.66    | 41.62/41.65     | 55.60/57.96
MUST         | 51.20/50.80    | 42.93/42.96     | 58.32/55.83
SCD          | 54.08/54.46    | 42.62/41.64     | 58.57/57.58
S3A (Our)    | 55.29/55.55    | 46.10/46.40     | 59.00/60.57

Table 5: Transductive and inductive evaluation on out-of-vocabulary benchmarks (Train/Test Acc). The OOV ratios for each dataset are provided alongside their respective names. Performance is reported by cosine similarity of generic pre-trained Sentence-BERT, upscaled ×100.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7283

On Out-Of-Vocabulary (OOV) Scenarios.
Considering scenarios in which target datasets have category names outside our S3A vocabulary, we further conduct an out-of-vocabulary evaluation on three benchmarks, i.e., Caltech101 (Fei-Fei, Fergus, and Perona 2004), CIFAR100 (Krizhevsky, Hinton et al. 2009), and Oxford-IIIT Pet (Parkhi et al. 2012). The out-of-vocabulary ratios of the datasets and the results are presented in Table 5. We can conclude that S3A still achieves SOTA performance in this challenging setup on both inductive and transductive evaluation.

On Effectiveness of S3A Prompt Augmentation. In this ablation experiment, we analyze the effect of the proposed LLM-guided discriminative prompt augmentation in our CVPR algorithm. We compare with four augmentation setups in Table 6: (1) using the WordNet definition for augmentation (5th row); (2) reducing prompt semantic discriminativeness by requesting visual attributes for only a single category name in each LLM prompt (6th row); (3) our prompt augmentation guided by ChatGPT (7th row); (4) our prompt augmentation guided by GPT-4 (8th row). Besides, we also compare with a recent SOTA, CHiLS (Novack et al. 2023), in prompt augmentation for zero-shot prediction. We use their prompt to generate ten subcategories for each class.

#Row | Methods       | IN1K        | ET13        | ET30
1    | CLIP (Ideal)  | 65.38/96.58 | 78.43/99.61 | 77.99/99.58
2    | CLIP          | 32.21/96.49 | 30.09/97.31 | 31.23/95.83
3    | SCD           | 37.06/35.09 | 40.46/32.31 | 46.29/39.94
4    | CHiLS∗        | 36.23/34.46 | 41.13/33.33 | 46.09/39.94
5    | Our (WordNet) | 18.69/18.42 | 21.82/16.85 | 20.38/15.94
6    | Our (Single)  | 37.40/35.74 | 41.13/33.00 | 47.09/40.76
7    | Our (ChatGPT) | 37.69/36.11 | 42.65/36.84 | 47.43/41.18
8    | Our (GPT-4)   | 37.95/36.48 | 44.98/37.56 | 48.37/42.01

Table 6: Ablations on prompt augmentation techniques (Acc/IoU). Performance is reported by cosine similarity of generic pre-trained Sentence-BERT, upscaled ×100.

Figure 4: Qualitative results on IN100 without finetuning (SCD (Han et al. 2023) and our CVPR).

We
can draw the following conclusions: (1) Semantic distinctiveness in prompts aids fine-grained differentiation; (2) Incorporating WordNet linguistic knowledge hinders semantic discriminativeness; (3) Our approach outperforms CHiLS and is thus more tailored to RZSC tasks; (4) CLIP focuses on instance alignment, which leads to low ACC but high IoU; (5) Our method benefits from advanced LLMs.

Qualitative Examples. We present qualitative examples from IN100 in Fig. 4, which demonstrate that our CVPR algorithm can effectively correct category misrecognitions and precisely focus on salient objects.

Conclusion
In this work, we address the challenging task of Realistic Zero-Shot Classification, without assuming partial source supervision or ideal vocabularies. We propose a Self Structural Semantic Alignment (S3A) framework, anchored by an innovative Cluster-Vote-Prompt-Realign (CVPR) algorithm for structural semantic relationship mining and a self-training process for iterative semantic alignment. Our experiments demonstrate the effectiveness of S3A, consistently achieving significant accuracy improvements over baseline methods on all generic and fine-grained benchmarks, with unknown class numbers, and in out-of-vocabulary scenarios.

References
Akata, Z.; Perronnin, F.; Harchaoui, Z.; and Schmid, C. 2015. Label-embedding for image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(7): 1425–1438.
Arthur, D.; and Vassilvitskii, S. 2007. K-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 1027–1035.
Cappallo, S.; Mensink, T.; and Snoek, C. G. 2016. Video stream retrieval of unseen queries using semantic memory. arXiv preprint arXiv:1612.06753.
Caron, M.; Touvron, H.; Misra, I.; Jégou, H.; Mairal, J.; Bojanowski, P.; and Joulin, A. 2021. Emerging properties in self-supervised vision transformers.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9650–9660.
Chen, G.; Yao, W.; Song, X.; Li, X.; Rao, Y.; and Zhang, K. 2022. Prompt learning with optimal transport for vision-language models. arXiv preprint arXiv:2210.01253.
Dale, R. 2021. GPT-3: What's it good for? Natural Language Engineering, 27(1): 113–118.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255. IEEE.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
Fei-Fei, L.; Fergus, R.; and Perona, P. 2004. Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories. In 2004 Conference on Computer Vision and Pattern Recognition Workshop, 178–178.
Gao, P.; Geng, S.; Zhang, R.; Ma, T.; Fang, R.; Zhang, Y.; Li, H.; and Qiao, Y. 2021. CLIP-Adapter: Better vision-language models with feature adapters. arXiv preprint arXiv:2110.04544.
Ghiasi, G.; Gu, X.; Cui, Y.; and Lin, T.-Y. 2021. Open-vocabulary image segmentation. arXiv preprint arXiv:2112.12143.
Grill, J.-B.; Strub, F.; Altché, F.; Tallec, C.; Richemond, P.; Buchatskaya, E.; Doersch, C.; Avila Pires, B.; Guo, Z.; Gheshlaghi Azar, M.; et al. 2020. Bootstrap your own latent: A new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33: 21271–21284.
Han, K.; Li, Y.; Vaze, S.; Li, J.; and Jia, X. 2023. What's in a Name? Beyond Class Indices for Image Recognition. arXiv preprint arXiv:2304.02364.
Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021.
Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, 4904–4916. PMLR.
Kahana, J.; Cohen, N.; and Hoshen, Y. 2022. Improving Zero-Shot Models with Label Distribution Priors. arXiv preprint arXiv:2212.00784.
Karazija, L.; Laina, I.; Vedaldi, A.; and Rupprecht, C. 2023. Diffusion Models for Zero-Shot Open-Vocabulary Segmentation. arXiv preprint arXiv:2306.09316.
Khattak, M. U.; Rasheed, H.; Maaz, M.; Khan, S.; and Khan, F. S. 2023. MaPLe: Multi-Modal Prompt Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19113–19122.
Khosla, A.; Jayadevaprakash, N.; Yao, B.; and Li, F.-F. 2011. Novel dataset for fine-grained image categorization: Stanford Dogs. In Proc. CVPR Workshop on Fine-Grained Visual Categorization (FGVC). Citeseer.
Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
Kuhn, H. W. 1955. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2): 83–97.
Lampert, C. H.; Nickisch, H.; and Harmeling, S. 2009. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 951–958. IEEE.
Lei Ba, J.; Swersky, K.; Fidler, S.; et al. 2015. Predicting deep zero-shot convolutional neural networks using textual descriptions. In Proceedings of the IEEE International Conference on Computer Vision, 4247–4255.
Li, D.; Li, J.; Li, H.; Niebles, J. C.; and Hoi, S. C. H. 2021. Align and Prompt: Video-and-Language Pre-training with Entity Prompts. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4943–4953.
Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597.
Li, J.; Li, D.; Xiong, C.; and Hoi, S. 2022.
BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, 12888–12900. PMLR.
Li, J.; Savarese, S.; and Hoi, S. C. 2022. Masked unsupervised self-training for zero-shot image classification. arXiv preprint arXiv:2206.02967.
Miller, G. A. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11): 39–41.
Novack, Z.; McAuley, J.; Lipton, Z. C.; and Garg, S. 2023. CHiLS: Zero-shot image classification with hierarchical label sets. In International Conference on Machine Learning, 26342–26362. PMLR.
Parkhi, O. M.; Vedaldi, A.; Zisserman, A.; and Jawahar, C. V. 2012. Cats and dogs. 2012 IEEE Conference on Computer Vision and Pattern Recognition, 3498–3505.
Pourpanah, F.; Abdar, M.; Luo, Y.; Zhou, X.; Wang, R.; Lim, C. P.; Wang, X.-Z.; and Wu, Q. J. 2022. A review of generalized zero-shot learning methods. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR.
Reimers, N.; and Gurevych, I. 2019. Sentence-BERT: Sentence embeddings using siamese BERT-networks. arXiv preprint arXiv:1908.10084.
Ren, S.; Li, L.; Ren, X.; Zhao, G.; and Sun, X. 2022. Rethinking the Openness of CLIP. arXiv preprint arXiv:2206.01986.
Ridnik, T.; Ben-Baruch, E.; Noy, A.; and Zelnik-Manor, L. 2021. ImageNet-21K pretraining for the masses. arXiv preprint arXiv:2104.10972.
Rousseeuw, P. J. 1987. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20: 53–65.
Santurkar, S.; Tsipras, D.; and Madry, A. 2020. Breeds: Benchmarks for subpopulation shift.
arXiv preprint arXiv:2008.04859.
Sariyildiz, M. B.; Kalantidis, Y.; Larlus, D.; and Alahari, K. 2021. Concept generalization in visual representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9629–9639.
Shin, G.; Albanie, S.; and Xie, W. 2023. Zero-shot Unsupervised Transfer Instance Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4847–4857.
Vaze, S.; Han, K.; Vedaldi, A.; and Zisserman, A. 2022. Generalized Category Discovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 7492–7501.
Wang, W.; Zheng, V. W.; Yu, H.; and Miao, C. 2019. A survey of zero-shot learning: Settings, methods, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2): 1–37.
Wen, X.; Zhao, B.; and Qi, X. 2023. Parametric Classification for Generalized Category Discovery: A Baseline Study. arXiv:2211.11727.
Wu, J.; Li, X.; Yuan, S. X. H.; Ding, H.; Yang, Y.; Li, X.; Zhang, J.; Tong, Y.; Jiang, X.; Ghanem, B.; et al. 2023. Towards Open Vocabulary Learning: A Survey. arXiv preprint arXiv:2306.15880.
Xu, J.; Liu, S.; Vahdat, A.; Byeon, W.; Wang, X.; and De Mello, S. 2023. Open-vocabulary panoptic segmentation with text-to-image diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2955–2966.
Yang, Z.; Gan, Z.; Wang, J.; Hu, X.; Lu, Y.; Liu, Z.; and Wang, L. 2021. An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA. arXiv preprint arXiv:2109.05014.
Zang, Y.; Li, W.; Zhou, K.; Huang, C.; and Loy, C. C. 2022. Open-vocabulary DETR with conditional matching. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IX, 106–122. Springer.
Zhang, J.; Huang, J.; Jin, S.; and Lu, S. 2023a. Vision-language models for vision tasks: A survey. arXiv preprint arXiv:2304.00685.
Zhang, S.; Khan, S.; Shen, Z.; Naseer, M.; Chen, G.; and Khan, F. 2023b. PromptCAL: Contrastive Affinity Learning via Auxiliary Prompts for Generalized Novel Category Discovery. arXiv:2212.05590.
Zhang, T.; Kishore, V.; Wu, F.; Weinberger, K. Q.; and Artzi, Y. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.
Zhou, C.; Loy, C. C.; and Dai, B. 2022. Extract free dense labels from CLIP. In European Conference on Computer Vision, 696–712. Springer.
Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022a. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348.
Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022b. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348.
Zhou, Z.; Lei, Y.; Zhang, B.; Liu, L.; and Liu, Y. 2023. ZegCLIP: Towards adapting CLIP for zero-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11175–11185.
Exposing the Deception: Uncovering More Forgery Clues for Deepfake Detection

Zhongjie Ba1,2, Qingyu Liu1,2, Zhenguang Liu1,2*, Shuang Wu3, Feng Lin1,2, Li Lu1,2, Kui Ren1,2
1State Key Lab. of Blockchain and Data Security, Zhejiang University, Hangzhou, China
2ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, China
3Black Sesame Technologies, Singapore
{zhongjieba,qingyuliu}@zju.edu.cn, liuzhenguang2008@gmail.com, wushuang@outlook.sg, {flin,li.lu,kuiren}@zju.edu.cn

Abstract
Deepfake technology has given rise to a spectrum of novel and compelling applications. Unfortunately, the widespread proliferation of high-fidelity fake videos has led to pervasive confusion and deception, shattering our faith that seeing is believing. One aspect that has been overlooked so far is that current deepfake detection approaches may easily fall into the trap of overfitting, focusing only on forgery clues within one or a few local regions. Moreover, existing works heavily rely on neural networks to extract forgery features, lacking theoretical constraints guaranteeing that sufficient forgery clues are extracted and superfluous features are eliminated. These deficiencies culminate in unsatisfactory accuracy and limited generalizability in real-life scenarios. In this paper, we try to tackle these challenges through three designs: (1) We present a novel framework to capture broader forgery clues by extracting multiple non-overlapping local representations and fusing them into a global semantic-rich feature. (2) Based on the information bottleneck theory, we derive a Local Information Loss to guarantee the orthogonality of local representations while preserving comprehensive task-relevant information. (3) Further, to fuse the local representations and remove task-irrelevant information, we arrive at a Global Information Loss through a theoretical analysis of mutual information.
Empirically, our method achieves state-of-the-art performance on five benchmark datasets. Our code is available at https://github.com/QingyuLiu/Exposing-the-Deception, hoping to inspire researchers.

1 Introduction
Fueled by the accessibility of large-scale video datasets and the maturity of deepfake technologies (Nirkin, Keller, and Hassner 2019; Li et al. 2019), one may effortlessly create massive forgery videos beyond human discernibility. However, malicious usage of deepfake can have serious consequences, ranging from identity theft and privacy violations to large-scale financial frauds and the dissemination of misinformation. For instance, in March 2022, hackers created a fake video of the Ukrainian president Zelenskyy in which he stands at a podium and addresses Ukrainian soldiers to lay down their arms. Such events are far from isolated, and they highlight the risk of deepfake technology in misleading the public and undermining trust. Consequently, accurate and effective deepfake detection is essential for mitigating these risks.

*The corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Example visualization of four local salient features obtained by our method. Each feature focuses on distinct forgery regions with little overlap. We zoom in to show the detailed differences for these regions between a real sample and a fake sample. Our method can grasp broader forgery clues including blending ghosts, consistent and symmetrical skin tones, tooth details, and stitching seams.

Fundamentally, deepfake detection amounts to recognizing forgery clues that distinguish real and synthetic images. Many studies have emerged for deepfake detection, which can be roughly categorized into two main branches. One line of work (Afchar et al. 2018; Dolhansky et al. 2020) employs CNN networks to automatically learn the clues within manipulated images.
Another line of work is dedicated to pondering and exploring the differences between fake and real images by incorporating human observation and understanding. These approaches home in on high-level semantic imperfections of counterfeits (Haliassos et al. 2021), as well as underlying imperceptible patterns in artifacts (such as blending ghosts (Shiohara and Yamasaki 2022) and frequency-domain anomalies (Liu et al. 2021)). After scrutinizing and experimenting with the implementations of state-of-the-art approaches, we obtain two empirical insights: (1) Despite achieving a high AUC on the training dataset, current methods usually experience a substantial decrease in AUC on unseen datasets. This may stem from the fact that current methods tend to unintentionally learn shortcuts for the training dataset, focusing only on one or a few forgery clues. (2) Current works heavily rely on neural networks to automatically extract forgery features, lacking rigorous theoretical guarantees to capture sufficient label-relevant information and to eliminate superfluous information. Consequently, the extracted features may converge to insufficient representations or trivial features, compromising the accuracy of such methods. Motivated by these, we advocate extracting broader forgery clues for deepfake detection and seek to lay the mathematical foundation for sufficient forgery feature extraction. Specifically, we first adaptively extract multiple disentangled local features focused on non-overlapping aspects of the suspect image (as in Fig. 1). To ensure the orthogonality of these local features while preserving comprehensive task-relevant information, we utilize mutual information to derive an information bottleneck objective, i.e., the Local Information Loss. Secondly, we fuse the local features into a global representation guided by a Global Information Loss that serves to eliminate task-irrelevant information.
To evaluate the effectiveness of our method, we conduct extensive experiments on five widely used benchmark datasets, i.e., FaceForensics++ (Rossler et al. 2019), two versions of Celeb-DF (Li et al. 2020b), and two versions of DFDC (Dolhansky et al. 2020). We also conduct ablation studies to assess the efficacy of each key component in our method. Our method achieves state-of-the-art performance for both in-dataset (the training and test datasets are sampled from the same domain) and cross-dataset (the training and test datasets are two different datasets) settings. In summary, our contributions are as follows:
• We propose a novel framework for deepfake detection that aims to obtain broader forgery clues.
• We mathematically formulate a mutual information objective to effectively extract disentangled task-relevant local features. Additionally, we introduce another objective for aggregating the local features and eliminating superfluous information. We provide a rigorous theoretical analysis to show how these mutual information objectives can be optimized.
• Empirically, our method achieves state-of-the-art performance on five benchmark datasets. Interesting and new insights are also presented (e.g., most deepfake detection approaches tend to focus on only a few specific regions around the face swap boundaries).

2 Related Work
Fueled by the maturity of deep learning models and large-scale labeled datasets, deep learning has found its applications in various fields (Zhang et al. 2023; Wei et al. 2023; Liu et al. 2022; Chiou et al. 2020; Liu et al. 2023; Song, Chen, and Jiang 2023), especially for deepfake. The ease of access and misuse of deepfake technology has led to the materialization of severe risks, and developing deepfake detection to counteract such threats is all the more pertinent and urgent. Deepfake detection (Ying et al. 2023; Ba et al. 2023; Hua et al. 2023; Wu et al. 2023; Pan et al. 2023; Shuai et al.
2023) faces a significant challenge posed by the sophistication of deepfake technology, which can create highly realistic content that is barely distinguishable from real ones. A large body of literature (Dong et al. 2022b; Li et al. 2020a; Shiohara and Yamasaki 2022; Chen et al. 2022a; Haliassos et al. 2021; Zhao et al. 2021a) focuses on semantic facial feature clues of forgeries. ICT (Dong et al. 2022b) models identity differences in the inner and outer facial regions. Face X-ray (Li et al. 2020a) and SBIs (Shiohara and Yamasaki 2022) find the blending boundaries of face swaps as evidence for forged images and build private augmented datasets. Chen et al. (2022a) further expand upon blending-based forgeries, considering the eyes, nose, mouth, and blending ratios. LipForensics (Haliassos et al. 2021) observes the irregularities of mouth movements in forgery videos. However, such methods only apply to the detection of face swaps, and semantic-guided forgery detection cannot be exhaustive vis-à-vis the rapid development of deepfake techniques. Another class of works (Frank et al. 2020; Liu et al. 2021; Luo et al. 2021; Qian et al. 2020) proposes to take into further consideration human understanding of differences in the frequency domain. Qian et al. (2020) employ frequency as complementary evidence for detecting forgeries, which can reveal either subtle forgery clues or compression errors. Frank et al. (2020) and SPSL (Liu et al. 2021) search for ghost artifacts resulting from up-sampling operations in generative networks. While such works include more features than purely semantic visual clues, the additional features modelled tend to be domain-specific, thereby failing to generalize well to cross-dataset scenarios. Researchers have also engaged multi-headed attention modules to correlate the low-level textural features and high-level semantics at different regions for deepfake detection (Zhao et al. 2021a).
Nevertheless, a challenge persists, as there exists no concrete theoretical assurance that these attention regions, segmented based on the paradigm of human vision, remain entirely task-relevant and independent. Furthermore, the performance of such attention-based models is greatly affected by data scarcity.

3 Methodology
3.1 Overview
Presented with a suspect image, we aim to judge its authenticity by extracting forgery clues that could distinguish between genuine and synthetic images. Technically, deepfake detection can be viewed as a binary classification problem. Early methods (Afchar et al. 2018; Dolhansky et al. 2020) directly utilize deep neural networks to automatically learn differences between genuine and synthetic images. Recent works (Shiohara and Yamasaki 2022; Haliassos et al. 2021; Liu et al. 2021) try to draw inspiration from human understanding and explore human-perceivable forgery clues. Unfortunately, current approaches tend to unintentionally learn

Figure 2: Method overview. In the data preparation phase, we first extract frame-level facial bounding boxes from raw videos. For deepfake detection, our method consists of three modules. i) We first employ local information blocks f_i to extract multiple disentangled local features z_i corresponding to different forgery regions. We introduce a Local Information Loss to ensure that z_i has comprehensive forgery-related information and is orthogonal to z_j.
ii) We fuse all z_i into a global feature G under the guidance of a Global Information Loss. iii) Finally, G is passed to the classification module to output the prediction result. We design our Local and Global Information Losses based on the information bottleneck theory.

shortcuts¹, which actually makes approaches focus only on one or a few forgery regions. This overfitting issue manifests as the limited generalizability of state-of-the-art methods, namely significant accuracy decreases when applied to unseen datasets. Furthermore, due to the lack of rigorous theoretical constraints, neural networks of current methods may converge to trivial features or insufficient representations. Motivated by these, we propose to broaden our extraction of forgery clues by adaptively extracting multiple non-overlapping local features. We also establish a theoretical foundation to ensure the orthogonality and sufficiency of the extracted features. Overall, our approach consists of two key components: the local disentanglement module and the global aggregation module. The local disentanglement module serves to extract the non-overlapping local features, while the global aggregation module is designed to aggregate them into a global representation. The pipeline of our proposed approach is shown in Fig. 2, which can be summarized as follows. (1) For image preprocessing, we extract face regions using a popular pre-trained backbone network. (2) Given the preprocessed image as an input, we design the local disentanglement module to extract multiple local features. The local disentanglement module comprises n local information blocks {f_i}_{i=1}^n, each extracting a local feature z_i. f_i consists of feature extraction backbone networks such as ResNet (He et al. 2016).
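The three-stage flow of Fig. 2 can be sketched with plain callables; the function names and interfaces below are illustrative stand-ins for the networks, not the authors' implementation:

```python
def forward(x, local_blocks, fusion, classifier):
    """Sketch of the pipeline: local blocks -> concatenation -> fusion -> classifier.

    local_blocks : list of callables, one per local information block f_i,
                   each mapping an input x to a local feature z_i (a list)
    fusion       : callable compressing the concatenated Z into G
    classifier   : callable mapping G to a real/fake decision
    All names here are hypothetical; the real f_i are ResNet-style networks.
    """
    zs = [f(x) for f in local_blocks]   # n disentangled local features
    Z = [v for z in zs for v in z]      # Z = z_1 concatenated with ... z_n
    G = fusion(Z)                       # global representation
    return classifier(G)
```

For instance, two toy blocks with a sum as the fusion layer reproduce the shape of the computation without any deep learning machinery.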
To ensure that the local features contain comprehensive task-related information and that z_i is orthogonal to z_j (i ≠ j), we derive the Local Information Loss L_LIL. (3) Thereafter, we design the global aggregation module to fuse the local features. Specifically, we first concatenate all local features into a joint local representation Z = z_1 ⊕ · · · ⊕ z_n. Then, a fusion layer f_g serves to fuse and compress Z to obtain our final global representation G for classification. To guide this global representation extraction, we design a Global Information Loss L_GIL that facilitates the retention of sufficient task-related information and the elimination of superfluous information in Z. In what follows, we elaborate on the details of the local disentanglement and global aggregation modules one by one.

¹Shortcuts are decision rules optimized for benchmark performance but incapable of transferring to more challenging testing conditions due to a domain gap.

3.2 Local Disentanglement Module
In this section, we provide the key derivation of the Local Information Loss within the local disentanglement module. Given an input image x with n (n ≥ 2) associated local feature representations z_i, our Local Information Loss objective seeks to ensure two fundamental properties within the joint local representation Z = z_1 ⊕ · · · ⊕ z_n, i.e., comprehensiveness and orthogonality. Comprehensiveness mandates the inclusion of maximal task-relevant information within Z, while orthogonality necessitates that the individual local features z_i remain non-overlapping. To facilitate understanding, Fig. 3 shows the information relationship when n = 2. In the terminology of mutual information theory, the relationship between the labels y and Z is expressed as:

    I(y; Z) = I(y; z_1, · · · , z_n) = H(y) − H(y | Z),    (1)

where I(·) denotes mutual information and H(·) denotes entropy.
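For intuition, the mutual information in Eq. 1 can be evaluated in closed form for a small discrete toy distribution; the real features z_i are continuous, which is why the paper resorts to variational bounds instead of this direct summation:

```python
import math

def mutual_information(joint):
    """I(Y; Z) = sum_{y,z} p(y, z) * log( p(y, z) / (p(y) p(z)) ).

    joint is a toy probability table p(y, z) indexed as joint[y][z];
    this exhaustive sum only works for small discrete distributions.
    """
    py = [sum(row) for row in joint]        # marginal p(y)
    pz = [sum(col) for col in zip(*joint)]  # marginal p(z)
    return sum(
        p * math.log(p / (py[i] * pz[j]))
        for i, row in enumerate(joint)
        for j, p in enumerate(row)
        if p > 0
    )
```

An independent joint table gives I = 0 (Z predicts nothing about y), while a perfectly correlated one gives I = H(y) = log 2 nats for two balanced classes.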
I(y; Z) expresses the amount of predictive information (i.e., current task-related information) contained in Z. H(y | Z) and H(y) represent the required and the whole information related to the task, respectively. The comprehensiveness objective for the information in Z is given by:

    max I(y; Z).    (2)

The orthogonality condition between two probability distributions is equivalent to them having zero mutual information. As such, we can disentangle the local feature representations by minimizing the mutual information between them, i.e., min Σ_{i≠j}^n I(z_i; z_j). According to the definition of interaction information (McGill 1954), I(z_i; z_j) can be further decomposed into:

    I(z_i; z_j) = I(z_i; z_j; y) [target] + I(z_i; z_j | y) [superfluous],    (3)

where I(z_i; z_j; y) represents the amount of label information retained within both z_i and z_j, while I(z_i; z_j | y) is extraneous (superfluous) information encoded within both z_i and z_j, which is irrelevant to the task. For the orthogonality of local features, we are primarily concerned with the label-related (target) information. As for the elimination of superfluous information, we formulate an objective inspired by the information bottleneck, namely the Global Information Loss (which will be discussed in the following section). We first focus on the target term in Eq. 3, i.e.:

    min Σ_{i≠j}^n I(z_i; z_j; y).    (4)

By applying the chain rule for mutual information, I(y; Z) = Σ_{i=1}^n I(z_i; y | z_1, · · · , z_{i−1}), we can rewrite Eq. 2 as:

    max I(y; Z) ≤ max Σ_{i≠j}^n [ I(z_i; y | Z \ z_i) + I(z_i; z_j; y) ],    (5)

where Z \ z_i ≡ z_1 ⊕ · · · ⊕ z_{i−1} ⊕ z_{i+1} ⊕ · · · ⊕ z_n. Overall, the comprehensiveness and orthogonality constraints for local feature extraction can be achieved by simultaneously optimizing Eq. 5 and Eq. 4. It is worth noting that these optimization objectives are in conflict with I(z_i; z_j; y).
After resolving these conflicting constraints, the local objective is eventually:

    max Σ_{i=1}^n I(z_i; y | Z \ z_i).    (6)

Intuitively, the local objective corresponds to the red regions illustrated in Fig. 3a. By optimizing Eq. 6, our goal is to ideally cover all task-relevant information with disentangled local features. However, directly estimating Eq. 6 is intractable in general. Earlier works (Poole et al. 2019) have pointed out major difficulties in mutual information estimation, primarily due to the curse of dimensionality (the number of samples required to accurately estimate mutual information scales exponentially with the embedding dimension). In light of this, we optimize Eq. 6 via a variational approach instead of explicitly estimating the mutual information. We have the following theorem (detailed proof is in the supplementary files):

Figure 3: Information content of feature representations. (a) Optimizing local features. (b) Optimizing the global feature.

Theorem: Eq. 6 has a lower bound due to:

    Σ_{i=1}^n I(z_i; y | Z \ z_i) ≥ Σ_{i=1}^n D_KL( P_Z ∥ P_{Z\z_i} ),    (7)

where P_{Z\z_i} = p(y | Z \ z_i) and P_Z = p(y | Z) represent the predicted distributions, and D_KL denotes the Kullback-Leibler (KL) divergence. Given the above analytical derivations, we can thus formulate the Local Information Loss as:

    L_LIL = min_θ exp( − Σ_{i=1}^n D_KL( P_Z ∥ P_{Z\z_i} ) ),    (8)

where θ denotes the model parameters of our local disentanglement module. Here, since the KL divergence is not bounded above, i.e., D_KL ∈ [0, ∞), we take the exponential of its negative value to transform the objective from maximization to minimization. The transformed objective is bounded within (0, 1], which is numerically advantageous. Upon optimizing this objective, the local features are constrained to be mutually orthogonal while simultaneously approaching the maximal coverage of all task-related information.
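As a numerical illustration of Eq. 8, the loss can be evaluated on small discrete predictive distributions, toy stand-ins for the softmax outputs P_Z and P_{Z\z_i}; the function names below are ours, not the paper's:

```python
import math

def kl(p, q):
    """D_KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def local_information_loss(p_full, p_leave_one_out):
    """L_LIL = exp( - sum_i D_KL(P_Z || P_{Z \\ z_i}) ), as in Eq. 8.

    p_full          : predicted class distribution from all local features Z
    p_leave_one_out : one predicted distribution per local feature z_i left out
    """
    total = sum(kl(p_full, q) for q in p_leave_one_out)
    return math.exp(-total)  # bounded in (0, 1]; training minimizes it
```

When removing any z_i leaves the prediction unchanged (each z_i carries no unique label information), every KL term is zero and the loss attains its maximum of 1; it decreases as each local feature becomes individually informative.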
In this way, our method uncovers more forgery clues and disentangles forgery regions adaptively, thus obtaining feature representations with richer task-related information.

3.3 Global Aggregation Module
Following the local disentanglement module, the concatenated local features Z encompass comprehensive but not purified task-related information. Thus, we pass Z through our global aggregation module, which plays the role of an information bottleneck, to eliminate the superfluous information and obtain a global representation G. (The concept of the information bottleneck was proposed in (Tishby, Pereira, and Bialek 2000), which attributes the robustness of a machine learning model to its ability to discard superfluous noise while retaining only useful information.) The information bottleneck objective can be formulated as:

L_IB = H(G) − I(y; G),   (9)

where H(G) denotes the total information content in G. Once again, estimating L_IB is intractable in practice due to the curse of dimensionality. Minimizing the superfluous information is therefore delegated to the network operations and is not explicitly supervised. To ensure that G carries sufficient label information, we employ a variational approach once again. Since G is a representation learnt from Z, the task-relevant information in G is upper-bounded by that in Z, i.e., I(y; G) ≤ I(y; Z). By minimizing the label-information difference between the local features and the global feature, we optimize G for the sufficiency of label information:

min I(y; Z) − I(y; G).   (10)

We make use of the following theorem from (Tian et al. 2021):

min I(y; Z) − I(y; G) ⟺ min D_KL(P_Z ∥ P_G).   (11)

Finally, we arrive at the Global Information Loss:

L_GIL = min_ϕ E_{G∼E_ϕ(G|Z)} [D_KL(P_Z ∥ P_G)],   (12)

where ϕ denotes the model parameters of the global aggregation module.
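For a single sample, Eq. 12 reduces to a KL divergence between two predicted distributions; a minimal sketch (with P_Z and P_G taken as softmax outputs of hypothetical heads, and the expectation over the stochastic encoder E_ϕ(G|Z) omitted for brevity):

```python
import math

def softmax(logits):
    # numerically stable softmax
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    # KL(p || q) for discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def global_information_loss(logits_local, logits_global):
    """Eq. 12 (single sample): L_GIL = KL(P_Z || P_G), pulling the global
    head's prediction toward the prediction made from the concatenated
    local features Z. Both heads are hypothetical stand-ins."""
    return kl(softmax(logits_local), softmax(logits_global))

loss = global_information_loss([2.0, -1.0], [0.5, 0.5])
```

The loss is zero exactly when the global representation reproduces the label distribution predicted from Z, matching the sufficiency condition I(y; G) = I(y; Z).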
Overall Objective The overall objective for our framework consists of a cross-entropy classification loss, the Local Information Loss, and the Global Information Loss:

L = L_CE + α L_LIL + β L_GIL,   (13)

where α and β are hyperparameters of the model.

4 Evaluation
In this section, we evaluate the proposed method through extensive experiments on five large-scale deepfake datasets, covering the experimental setup, comparisons with state-of-the-art methods, ablation studies, and visualization results. See the supplementary file for more experimental results.

4.1 Experimental Setup
Datasets. Following existing deepfake detection approaches (Chen et al. 2022a; Bai et al. 2023), we evaluate our model on five public datasets, namely FaceForensics++ (FF++) (Rossler et al. 2019), two versions of Celeb-DF (Li et al. 2020b), and two versions of the DeepFake Detection Challenge (DFDC) (Dolhansky et al. 2020) dataset. FF++, the most widely used dataset, utilizes four forgery-generation methods to produce 4,000 forgery videos, i.e., DeepFakes (DF), Face2Face (FF), FaceSwap (FS), and NeuralTextures (NT). FF++ has three compression versions, and we use the high-quality (C23) one for training. Celeb-DF comes in two versions, termed Celeb-DF-V1 (CD1) and Celeb-DF-V2 (CD2). CD1 consists of 408 pristine videos and 795 manipulated videos, while CD2 contains 590 real videos and 5,639 DeepFake videos. DFDC includes DFDC-Preview (DFDC-P) and DFDC. DFDC-P, the preview version of DFDC, consists of 5,214 videos. DFDC, one of the largest face-swap datasets, contains more than 110,000 videos sourced from 3,426 actors.

Implementation details. In data pre-processing, we use the state-of-the-art face extractor RetinaFace (Deng et al. 2020) and oversample pristine videos to balance the training datasets. For the model architecture, we employ four local information blocks (LIBs) using a pre-trained ResNet-34 (He et al. 2016) as the backbone.
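The combination in Eq. 13 is a straightforward weighted sum; a minimal sketch (the loss values, α, and β below are arbitrary placeholders, not trained quantities):

```python
import math

def cross_entropy(logits, label):
    # numerically stable softmax cross-entropy for one sample
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[label]

def total_loss(l_ce, l_lil, l_gil, alpha, beta):
    # Eq. 13: L = L_CE + alpha * L_LIL + beta * L_GIL
    return l_ce + alpha * l_lil + beta * l_gil

# l_lil / l_gil are stand-in batch values for the two information losses
loss = total_loss(cross_entropy([2.0, -1.0], 0),
                  l_lil=0.4, l_gil=0.1, alpha=1.0, beta=1.0)
```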
For training, we use the method in (Liebel and Körner 2018) to determine α and β in Eq. 13, automatically balancing the weights of these loss terms.

Evaluation metrics. We utilize the Accuracy (ACC), the Area Under the Receiver Operating Characteristic Curve (AUC), and the log-loss score for empirical evaluation. (1) ACC. We employ the accuracy rate as one of the metrics in our evaluations, as is common in deepfake detection tasks. (2) AUC. Considering the imbalance of pristine and manipulated videos in the datasets, we use AUC as the predominant evaluation metric. (3) LogLoss. This is the evaluation metric designated for the Deepfake Detection Challenge; we evaluate the log-loss score to benchmark our method against the winning teams. By default, we use frame-level metrics. Since our method uses a single frame as input, we also compute video-level AUC (as in (Haliassos et al. 2021)) for a more comprehensive comparison with video-level detection methods.

4.2 Comparison with Existing Methods
In this section, we benchmark our method against state-of-the-art deepfake detection methods in in-dataset and cross-dataset settings.

In-dataset performance In in-dataset evaluations, we train and test methods on FF++ (C23), CD2, and DFDC, respectively. Tab. 1 presents the in-dataset comparison results. Since most current deepfake detection methods have not released their code, we directly cite the results reported in the corresponding original papers. From Tab. 1, we observe that our method consistently outperforms existing methods on all three benchmarks. For example, the AUC of our method is 0.939 on DFDC, while that of the state-of-the-art detection method (Chugh et al. 2020) is 0.907. Our method also outperforms the DFDC champion team in terms of LogLoss.

Cross-dataset Performance and Model Generalizability The cross-dataset setting is more challenging than the in-dataset setting for deepfake detection.
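The evaluation metrics above can be sketched in a few lines of pure Python; the functions below are illustrative stand-ins, not the paper's evaluation code (the video-level aggregation averages per-frame fake probabilities, one common way to obtain video-level scores from a single-frame detector):

```python
import math

def frame_auc(labels, scores):
    """AUC via the Mann-Whitney rank statistic (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def log_loss(labels, probs, eps=1e-15):
    """DFDC log-loss: -(1/N) * sum(y*log(p) + (1-y)*log(1-p))."""
    total = 0.0
    for y, p in zip(labels, probs):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return -total / len(labels)

def video_level(frame_scores):
    """Aggregate per-frame fake probabilities into one score per video."""
    return {vid: sum(s) / len(s) for vid, s in frame_scores.items()}
```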
To evaluate the generalization abilities of the methods on unseen datasets, we train the models on the FF++ (C23) dataset and test them on the CD1, CD2, DFDC-P, and DFDC datasets. Since our method uses a single frame as input, in addition to frame-level comparisons, we also compute the average AUC over the frames in a video for comparison with video-level methods. Tab. 2 and Tab. 3 present the cross-dataset comparison results in terms of frame-level and video-level AUC, respectively. Our first insight is that state-of-the-art deepfake detection methods still suffer from relatively low AUC on unseen datasets, which reveals that such methods are prone to overfitting the training dataset. The second insight is that our method is more robust, with significant improvements when tested on unseen datasets. This reflects that our model has a better capability for uncovering forgery clues. The improvement in generalizability can be attributed to the information bottleneck in our framework design: our model demonstrates a better capacity for identifying different forms of deepfake artifacts instead of merely the instances specific to the training dataset. Overall, our method achieves state-of-the-art frame-level and video-level generalization performance. For frame-level comparisons, our method attains AUCs of 0.818 and 0.864 on CD1 and CD2, respectively, outperforming the current state-of-the-art method ICT (0.814 and 0.857). Our method also improves the AUC on DFDC-P from 0.833 (OST) to 0.851, and on DFDC from 0.691 (RECCE) to 0.721. Compared with video-level methods, our method surpasses the current state-of-the-art technique, AUNet, with AUC scores of 0.936, 0.902, and 0.754 on CD2, DFDC-P, and DFDC, respectively.

FF++ (C23)                         AUC↑
  Xception (Rossler et al. 2019)   0.963
  Xception-ELA                     0.948
  SPSL (Masi et al. 2020)          0.943
  Face X-ray (Li et al. 2020a)     0.874
  TD-3DCNN (Zhang et al. 2021)     0.722
  F3-Net (Qian et al. 2020)        0.981
  FInfer (Hu et al. 2022)          0.957
  Ours (ResNet34)                  0.983

Celeb-DF-V2                           AUC↑
  DeepfakeUCL (Fung et al. 2021)      0.905
  SBIs (Shiohara and Yamasaki 2022)   0.937
  Agarwal et al. 2020                 0.990
  Wu et al. 2023                      0.998
  TD-3DCNN                            0.888
  Xception                            0.985
  FInfer                              0.933
  Ours (ResNet34)                     0.999

DFDC                     AUC↑    LogLoss↓
  Selim Seferbekov∗      0.882   0.4279
  NTechLab∗              0.880   0.4345
  Eighteen Years Old∗    0.886   0.4347
  WM∗                    0.883   0.4284
  TD-3DCNN               0.790   0.3670
  Chugh et al. 2020      0.907   –
  FInfer                 0.829   –
  Ours (ResNet34)        0.939   0.3379

Table 1: In-dataset comparison results on FF++, Celeb-DF-V2, and DFDC. We train and test models on the same dataset, reporting the frame-level AUC and LogLoss. ∗ marks the top-four winning teams in DFDC. Bold and underline mark the best and second-best performances, respectively.

Xception (Rossler et al. 2019)  FF++  0.750∗ 0.778∗ 0.698∗ 0.636∗
DSP-FWA (Li and Lyu 2018)  FF++  0.785∗ 0.814∗ 0.595∗
Meso4 (Afchar et al. 2018)  FF++  0.422∗ 0.536∗ 0.594∗
F3-Net (Qian et al. 2020)  FF++  0.712∗ 0.729∗ 0.646∗
Face X-ray (Li et al. 2020a)  PD  0.806 0.742∗ 0.809
Multi-Attention (Zhao et al. 2021a)  FF++  0.674 0.680∗
OST (Chen et al. 2022b)  FF++  0.748 0.833
HCIL (Gu et al. 2022a)  FF++  0.790 0.692
LiSiam (Wang, Sun, and Tang 2022)  FF++  0.811 0.782
RECCE (Cao et al. 2022)  FF++  0.687 0.691
ICT (Dong et al. 2022b)  PD  0.814 0.857
DCL (Sun et al. 2022)  FF++  0.823 0.767
IID (Huang et al. 2023)  FF++  0.838 0.812
Ours (ResNet-34)  FF++  0.818 0.864 0.851 0.721

Table 2: Cross-dataset comparison results (frame-level AUC) on Celeb-DF-V1 (CD1), Celeb-DF-V2 (CD2), DFDC-Preview (DFDC-P), and DFDC. We train our method on FF++ (C23) and test it on the other benchmark datasets. 'PD' means private data. ∗ is collected from (Dong et al. 2022b; Cao et al. 2022; Sun et al. 2022); other results are cited from the corresponding original papers. Bold and underline mark the best and second-best performances, respectively.
Remarkably, despite employing solely traditional data augmentation techniques, our approach attains state-of-the-art performance across all four benchmarks, surpassing models (such as AUNet, SBIs, and ICT) trained on private augmented datasets.

Xception (Rossler et al. 2019)  FF++  0.737∗ 0.679∗ 0.709∗
F3-Net (Qian et al. 2020)  FF++  0.757∗ 0.709∗
PCL+I2G (Zhao et al. 2021b)  PD  0.900 0.744 0.675
FST-Matching (Dong et al. 2022a)  FF++  0.894
LipForensics (Haliassos et al. 2021)  FF++  0.824 0.735
FTCN (Zheng et al. 2021)  FF++  0.869 0.740 0.710∗
Luo et al. (Luo et al. 2021)  FF++  0.797
ResNet-34 + SBIs (Shiohara and Yamasaki 2022)  PD  0.870 0.822 0.664
EFNB4 + SBIs (Shiohara and Yamasaki 2022)  PD  0.932 0.862 0.724
RATF (Gu et al. 2022b)  FF++  0.765 0.691
Li et al. (Li et al. 2022)  FF++  0.848 0.785
AltFreezing (Wang et al. 2023)  FF++  0.895
AUNet (Bai et al. 2023)  PD  0.928 0.862 0.738
Ours (ResNet-34)  FF++  0.936 0.902 0.754

Table 3: Cross-dataset comparison results (video-level AUC) on Celeb-DF-V2 (CD2), DFDC-Preview (DFDC-P), and DFDC. We train our method on FF++ (C23) and test it on the other benchmark datasets. 'PD' means private data. ∗ is collected from (Shiohara and Yamasaki 2022; Bai et al. 2023); other results are cited from the corresponding original papers. Bold and underline mark the best and second-best performances, respectively.

Figure 4: Visual examples of our method on various types of forgery methods within FF++ (C23), i.e., Deepfakes (DF), Face2Face (FF), FaceSwap (FS), and NeuralTextures (NT). Comparison between our method with and without L_LIL, Multi-Attentional, Face X-ray, and Xception.

4.3 Ablation Study
In this section, we first study the effectiveness of our two information losses, i.e., the Local Information Loss L_LIL and the Global Information Loss L_GIL. We then explore the impact of the local feature quantity within the local information block.

Ablation Study on Information Losses We study the effects of removing L_LIL and L_GIL from our method. We train models on FF++ (C23) and test them on CD2. Tab. 4 demonstrates the results of the ablation study on the two proposed information losses.

ID   L_LIL   L_GIL   FF++ (C23) ACC↑ / AUC↑   CD2 ACC↑ / AUC↑
1    ✓       –       94.26 / 0.979            78.79 / 0.840
2    –       ✓       94.28 / 0.977            76.96 / 0.827
3    –       –       93.53 / 0.966            77.29 / 0.816
4    ✓       ✓       94.98 / 0.983            80.70 / 0.864

Table 4: Ablation study of the proposed L_LIL and L_GIL for our method. We show frame-level ACC (%) and AUC, training on FF++ (C23) and testing on Celeb-DF-V2 (CD2). Bold marks the best performance.

Clearly, we see that L_LIL and L_GIL play key roles in the performance improvement in both in-dataset and cross-dataset settings. The AUC improvement from the proposed losses is more pronounced in the cross-dataset setting than in the in-dataset one. This empirical evidence suggests that incorporating the proposed losses may lead to extracting broader clues. Quantitatively, L_LIL and L_GIL make a dominant contribution to our method, with the AUC on FF++ improving from 0.966 (without both) to 0.983 (with both) and the AUC on CD2 from 0.816 to 0.864. The absence of either loss brings about a significant drop in model performance.

Figure 5: In-dataset and cross-dataset performance with different numbers of LIBs. We train models on FF++ (C23) for 10 epochs and test them on Celeb-DF-V2.
Ablation Study on the Local Information Block We first investigate the effect of varying the number of local information blocks (LIBs), i.e., the local feature quantity. We train models on FF++ (C23) and test them on CD2, reporting the frame-level AUC and ACC. The number of LIBs is varied from three to seven, while the other hyper-parameters are fixed. Fig. 5 shows the results for different numbers of LIBs. We observe that both the in-dataset and cross-dataset performance improve as the number of LIBs increases. However, when the number of LIBs becomes excessively high (n = 7), the model's generalization performance declines significantly. This aligns with our intuition: gradually augmenting the number of LIBs enlarges the number of trainable network parameters, which directly affects the in-dataset performance. Simultaneously, this expansion increases the local feature quantity, contributing to the enhancement of the model's generalization performance. Nevertheless, as the number of LIBs continues to rise, an overabundance of parameters induces overfitting, ultimately diminishing the model's capacity for generalization.

4.4 Visualization
To further assess the model's interpretability and the efficacy of the Local Information Loss L_LIL, we visualize four samples subjected to various forgery methods on FF++. We apply Grad-CAM (Selvaraju et al. 2017) for representation visualization. As shown in Fig. 4, our approach offers several noteworthy insights. First, our method excels at extracting more forgery clues: other detection techniques fixate on specific regions, disregarding subtle cues present elsewhere, which leads to confined regions of focus for detection. Second, our method focuses on different forgery regions with little overlap, providing evidence of the orthogonality within the extracted local representations.
Specifically, our method identifies manipulated cues in the nose, cheek, forehead, and mouth, corresponding to z_1 through z_4, respectively. In contrast, the results without L_LIL show that the local representations possess an imbalanced capacity to signify forgery features: while some local representations contain ample information (z′_1), others offer duplicated (z′_3) or scanty (z′_2 and z′_4) forgery-related clues.

4.5 Limitations
Our method is a purely data-driven approach relying on information-theoretic constraints to search for forgery clues. For some classes of forgeries, employing prior knowledge as guidance could be more optimal. For future work, we seek to incorporate heuristic guidance into our model, which could further boost performance and interpretability.

5 Conclusion
In this paper, we propose an information bottleneck based framework for deepfake detection, which aims to extract broader forgery clues. In this context, we derive local information losses to obtain task-related, mutually independent local features. We further theoretically analyze the global information objective to aggregate the local features into a sufficient and purified global representation for classification. Extensive experiments demonstrate that our method achieves state-of-the-art in-dataset and cross-dataset performance on five benchmark datasets, indicating its potential as a reliable solution for deepfake detection in various real-world scenarios.

Acknowledgements
This work is partially supported by the Zhejiang Provincial Natural Science Foundation of China under Grant No. LD24F020010, the National Natural Science Foundation of China (62172359, 62372402, 61972348, 62102354), the Key R&D Program of Zhejiang Province (2023C01217), the Fundamental Research Funds for the Central Universities (No. 2021FZZX001-27), and the Hangzhou Leading Innovation and Entrepreneurship Team (TD2020003).

References
Afchar, D.; Nozick, V.; Yamagishi, J.; and Echizen, I. 2018.
MesoNet: a compact facial video forgery detection network. 1–7.
Ba, Z.; Wen, Q.; Cheng, P.; Wang, Y.; Lin, F.; Lu, L.; and Liu, Z. 2023. Transferring Audio Deepfake Detection Capability across Languages. In Proceedings of the ACM Web Conference 2023, 2033–2044.
Bai, W.; Liu, Y.; Zhang, Z.; Li, B.; and Hu, W. 2023. AUNet: Learning Relations Between Action Units for Face Forgery Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 24709–24719.
Cao, J.; Ma, C.; Yao, T.; Chen, S.; Ding, S.; and Yang, X. 2022. End-to-end reconstruction-classification learning for face forgery detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4113–4122.
Chen, L.; Zhang, Y.; Song, Y.; Liu, L.; and Wang, J. 2022a. Self-supervised learning of adversarial example: Towards good generalizations for deepfake detection. 18710–18719.
Chen, L.; Zhang, Y.; Song, Y.; Wang, J.; and Liu, L. 2022b. OST: Improving generalization of deepfake detection via one-shot test-time training. Advances in Neural Information Processing Systems, 35: 24597–24610.
Chiou, M.-J.; Liu, Z.; Yin, Y.; Liu, A.-A.; and Zimmermann, R. 2020. Zero-shot multi-view indoor localization via graph location networks. In Proceedings of the 28th ACM International Conference on Multimedia, 3431–3440.
Chugh, K.; Gupta, P.; Dhall, A.; and Subramanian, R. 2020. Not made for each other: Audio-visual dissonance-based deepfake detection and localization. 439–447.
Deng, J.; Guo, J.; Ververas, E.; Kotsia, I.; and Zafeiriou, S. 2020. RetinaFace: Single-shot multi-level face localisation in the wild. 5203–5212.
Dolhansky, B.; Bitton, J.; Pflaum, B.; Lu, J.; Howes, R.; Wang, M.; and Ferrer, C. C. 2020. The DeepFake Detection Challenge (DFDC) dataset. arXiv preprint arXiv:2006.07397.
Dong, S.; Wang, J.; Liang, J.; Fan, H.; and Ji, R. 2022a. Explaining deepfake detection by analysing image matching. In European Conference on Computer Vision, 18–35. Springer.
Dong, X.; Bao, J.; Chen, D.; Zhang, T.; Zhang, W.; Yu, N.; Chen, D.; Wen, F.; and Guo, B. 2022b. Protecting celebrities from deepfake with identity consistency transformer. 9468–9478.
Frank, J.; Eisenhofer, T.; Schönherr, L.; Fischer, A.; Kolossa, D.; and Holz, T. 2020. Leveraging frequency analysis for deep fake image recognition. 3247–3258.
Fung, S.; Lu, X.; Zhang, C.; and Li, C.-T. 2021. DeepfakeUCL: Deepfake detection via unsupervised contrastive learning. 1–8.
Gu, Z.; Yao, T.; Chen, Y.; Ding, S.; and Ma, L. 2022a. Hierarchical Contrastive Inconsistency Learning for Deepfake Video Detection. 596–613.
Gu, Z.; Yao, T.; Yang, C.; Yi, R.; Ding, S.; and Ma, L. 2022b. Region-aware temporal inconsistency learning for deepfake video detection. 1.
Haliassos, A.; Vougioukas, K.; Petridis, S.; and Pantic, M. 2021. Lips don't lie: A generalisable and robust approach to face forgery detection. 5039–5049.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. 770–778.
Hu, J.; Liao, X.; Liang, J.; Zhou, W.; and Qin, Z. 2022. FInfer: Frame inference-based deepfake detection for high-visual-quality videos. 36(1): 951–959.
Hua, Y.; Shi, R.; Wang, P.; and Ge, S. 2023. Learning Patch-Channel Correspondence for Interpretable Face Forgery Detection. IEEE Transactions on Image Processing, 32: 1668–1680.
Huang, B.; Wang, Z.; Yang, J.; Ai, J.; Zou, Q.; Wang, Q.; and Ye, D. 2023. Implicit Identity Driven Deepfake Face Swapping Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4490–4499.
Li, J.; Xie, H.; Yu, L.; and Zhang, Y. 2022. Wavelet-enhanced Weakly Supervised Local Feature Learning for Face Forgery Detection. 1299–1308.
Li, L.; Bao, J.; Yang, H.; Chen, D.; and Wen, F. 2019. FaceShifter: Towards high fidelity and occlusion aware face swapping. arXiv preprint arXiv:1912.13457.
Li, L.; Bao, J.; Zhang, T.; Yang, H.; Chen, D.; Wen, F.; and Guo, B. 2020a. Face X-ray for more general face forgery detection. 5001–5010.
Li, Y.; and Lyu, S. 2018. Exposing DeepFake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656.
Li, Y.; Yang, X.; Sun, P.; Qi, H.; and Lyu, S. 2020b. Celeb-DF: A large-scale challenging dataset for DeepFake forensics. 3207–3216.
Liebel, L.; and Körner, M. 2018. Auxiliary tasks in multi-task learning. arXiv preprint arXiv:1805.06334.
Liu, H.; Li, X.; Zhou, W.; Chen, Y.; He, Y.; Xue, H.; Zhang, W.; and Yu, N. 2021. Spatial-phase shallow learning: Rethinking face forgery detection in frequency domain. 772–781.
Liu, Z.; Qian, P.; Yang, J.; Liu, L.; Xu, X.; He, Q.; and Zhang, X. 2023. Rethinking smart contract fuzzing: Fuzzing with invocation ordering and important branch revisiting. IEEE Transactions on Information Forensics and Security, 18: 1237–1251.
Liu, Z.; Wu, S.; Xu, C.; Wang, X.; Zhu, L.; Wu, S.; and Feng, F. 2022. Copy Motion From One to Another: Fake Motion Video Generation. arXiv preprint arXiv:2205.01373.
Luo, Y.; Zhang, Y.; Yan, J.; and Liu, W. 2021. Generalizing face forgery detection with high-frequency features. 16317–16326.
Masi, I.; Killekar, A.; Mascarenhas, R. M.; Gurudatt, S. P.; and AbdAlmageed, W. 2020. Two-branch recurrent network for isolating deepfakes in videos. 667–684.
McGill, W. 1954. Multivariate information transmission. Transactions of the IRE Professional Group on Information Theory, 4(4): 93–111.
Nirkin, Y.; Keller, Y.; and Hassner, T. 2019. FSGAN: Subject agnostic face swapping and reenactment. 7184–7193.
Pan, K.; Yin, Y.; Wei, Y.; Lin, F.; Ba, Z.; Liu, Z.; Wang, Z.; Cavallaro, L.; and Ren, K. 2023. DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant Forgery Clues. In Proceedings of the 31st ACM International Conference on Multimedia, 8035–8046.
Poole, B.; Ozair, S.; Van Den Oord, A.; Alemi, A.; and Tucker, G. 2019.
On variational bounds of mutual information. 5171–5180.
Qian, Y.; Yin, G.; Sheng, L.; Chen, Z.; and Shao, J. 2020. Thinking in frequency: Face forgery detection by mining frequency-aware clues. 86–103.
Rossler, A.; Cozzolino, D.; Verdoliva, L.; Riess, C.; Thies, J.; and Nießner, M. 2019. FaceForensics++: Learning to detect manipulated facial images. 1–11.
Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-CAM: Visual explanations from deep networks via gradient-based localization. 618–626.
Shiohara, K.; and Yamasaki, T. 2022. Detecting deepfakes with self-blended images. 18720–18729.
Shuai, C.; Zhong, J.; Wu, S.; Lin, F.; Wang, Z.; Ba, Z.; Liu, Z.; Cavallaro, L.; and Ren, K. 2023. Locate and Verify: A Two-Stream Network for Improved Deepfake Detection. In Proceedings of the 31st ACM International Conference on Multimedia, 7131–7142.
Song, X.; Chen, J.; and Jiang, Y.-G. 2023. Relation Triplet Construction for Cross-modal Text-to-Video Retrieval. In Proceedings of the 31st ACM International Conference on Multimedia, 4759–4767.
Sun, K.; Yao, T.; Chen, S.; Ding, S.; Li, J.; and Ji, R. 2022. Dual contrastive learning for general face forgery detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2316–2324.
Tian, X.; Zhang, Z.; Lin, S.; Qu, Y.; Xie, Y.; and Ma, L. 2021. Farewell to mutual information: Variational distillation for cross-modal person re-identification. 1522–1531.
Tishby, N.; Pereira, F. C.; and Bialek, W. 2000. The information bottleneck method. arXiv preprint physics/0004057.
Wang, J.; Sun, Y.; and Tang, J. 2022. LiSiam: Localization invariance Siamese network for deepfake detection. IEEE Transactions on Information Forensics and Security, 17: 2425–2436.
Wang, Z.; Bao, J.; Zhou, W.; Wang, W.; and Li, H. 2023. AltFreezing for More General Video Face Forgery Detection.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4129–4138.
Wei, Y.; Sun, Y.; Zheng, R.; Vemprala, S.; Bonatti, R.; Chen, S.; Madaan, R.; Ba, Z.; Kapoor, A.; and Ma, S. 2023. Is Imitation All You Need? Generalized Decision-Making with Dual-Phase Training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 16221–16231.
Wu, Y.; Song, X.; Chen, J.; and Jiang, Y.-G. 2023. Generalizing Face Forgery Detection via Uncertainty Learning. In Proceedings of the 31st ACM International Conference on Multimedia, 1759–1767.
Ying, Q.; Hu, X.; Zhou, Y.; Qian, Z.; Zeng, D.; and Ge, S. 2023. Bootstrapping multi-view representations for fake news detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 5384–5392.
Zhang, D.; Li, C.; Lin, F.; Zeng, D.; and Ge, S. 2021. Detecting Deepfake Videos with Temporal Dropout 3DCNN. 1288–1294.
Zhang, X.; Hong, H.; Hong, Y.; Huang, P.; Wang, B.; Ba, Z.; and Ren, K. 2023. Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks. In 2024 IEEE Symposium on Security and Privacy (SP), 53–53. IEEE Computer Society.
Zhao, H.; Zhou, W.; Chen, D.; Wei, T.; Zhang, W.; and Yu, N. 2021a. Multi-attentional deepfake detection. 2185–2194.
Zhao, T.; Xu, X.; Xu, M.; Ding, H.; Xiong, Y.; and Xia, W. 2021b. Learning self-consistency for deepfake detection. 15023–15033.
Zheng, Y.; Bao, J.; Chen, D.; Zeng, M.; and Wen, F. 2021. Exploring temporal coherence for more general video face forgery detection. 15044–15054.
A Computation-Aware Shape Loss Function for Point Cloud Completion
Shunran Zhang1, 2*, Xiubo Zhang1*, Tsz Nam Chan3, Shenghui Zhang1, Leong Hou U1†
1University of Macau
2Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
3Shenzhen University
{yc27968, mc25209}@um.edu.mo, edisonchan@szu.edu.cn, {yc07428, ryanlhu}@um.edu.mo

Abstract
Learning-based point cloud completion has shown potential in various critical tasks, such as object detection, classification, and registration. However, accurately and efficiently quantifying the shape error between the predicted point clouds generated by networks and the ground truth remains challenging. While EMD-based loss functions excel at capturing shape detail and perceived density distribution, existing approaches can only yield results with significant discrepancies from the actual EMD within a tolerable training time. To address these challenges, we first propose an initial price for the auction algorithm, reducing the number of iterations the algorithm requires while ensuring the correctness of the assignment results. We then introduce a method to compute the initial price through a successive shortest path and the Euclidean information between its nodes. Finally, we adopt a series of optimization strategies to speed up the algorithm and offer an EMD approximation scheme for point cloud problems that balances time cost and computational accuracy based on the characteristics of point cloud data. Our experimental results confirm that our algorithm achieves the smallest gap from the real EMD within an acceptable time range and yields the best results in end-to-end training.

Introduction
Understanding the environment is a fundamental necessity across various domains, ranging from autonomous driving and intelligent transportation systems to robotics and mixed reality applications.
The success of these cutting-edge technologies heavily depends on the capacity of a system to interact with and accurately perceive its surroundings. In this regard, point clouds have emerged as a pivotal element in environmental sensing, and their generation can be facilitated by diverse technologies like LiDAR (Light Detection and Ranging) (Reutebuch, Andersen, and McGaughey 2005). Point clouds provide a rich representation of the environment and can capture detailed geometrical and spatial information, making them invaluable in various tasks related to perception, localization, mapping, and object recognition. As a result, point cloud data has become a fundamental input for many state-of-the-art algorithms and systems in modern sensing and navigation applications.

*These authors contributed equally.
†Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

However, one challenge of using LiDAR-generated spatial point clouds is that they may fail to fully depict the surface of the observed object due to occlusions or the inability to arrange sensors at sufficient angles. As a result, the captured point cloud data may provide only a partial representation of the object. This limitation can have a significant impact on subsequent point cloud processing tasks, such as point cloud registration, object detection, and classification.

Figure 1: The illustration of the point cloud completion task.

Numerous studies (Achlioptas et al. 2018; Yuan et al. 2018; Sarmad, Lee, and Kim 2019) have successfully showcased the efficacy of employing learning-based techniques for 3D shape completion, offering significant advantages for subsequent processing tasks.
As depicted in Figure 1, dealing with irregular and unordered point clouds introduces substantial challenges in this context. Accurately quantifying the distance between point cloud pairs, accounting for the shape discrepancy between the output of a deep learning model and the ground truth, and facilitating the update of the model parameters pose significant hurdles due to the inherent complexities of point cloud data. Fan et al. (Fan, Su, and Guibas 2017) introduced two permutation-invariant functions, namely the Chamfer Distance (CD) and the Earth Mover's Distance (EMD). While CD is efficient, it does not penalize uneven distributions, leading to fluctuations in shape detail (Achlioptas et al. 2018). EMD is more sensitive to differences in shape detail and density distribution (Liu et al. 2020), which allows a more accurate comparison between a pair of point clouds. However, EMD is computationally expensive, which makes it hard to scale in deep learning models (with their large number of training iterations) over large data. As such, researchers often opt for the Chamfer Distance (CD) over the Earth Mover's Distance (EMD), which can significantly degrade the quality of point cloud completion. To address the computational challenges of EMD, Fan et al. (Fan, Su, and Guibas 2017) develop an iterative (1+ϵ)-approximation method. However, this method still cannot scale to large point cloud sizes. Liu et al. (Liu et al. 2020) propose an alternative approach, which can handle larger point clouds and takes only O(n) memory. However, our experiments (cf. Table 1, where the two existing methods are denoted as emd1 and emd2) show that both methods significantly deviate from the actual EMD, leading to estimation errors and increased gradient-update noise in the training process.
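To make the two distances concrete, here is a minimal pure-Python sketch (not the paper's implementation): Chamfer Distance computed directly, and EMD for tiny equal-size point sets obtained by solving the underlying assignment problem with a basic Bertsekas-style auction, the algorithm family that the paper's AAIP accelerates with initial prices. The warm-start `prices` argument below is only a generic hook, not the paper's initial-price construction:

```python
import math

def chamfer(A, B):
    """CD(A, B): mean squared nearest-neighbor distance, both directions."""
    d2 = lambda p, q: sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    return (sum(min(d2(a, b) for b in B) for a in A) / len(A)
            + sum(min(d2(a, b) for a in A) for b in B) / len(B))

def auction_assignment(benefit, eps, prices=None):
    """Bertsekas-style auction (maximization). With eps small relative to
    the benefit gaps the result is optimal; `prices` allows warm-starting."""
    n = len(benefit)
    prices = list(prices) if prices is not None else [0.0] * n
    owner = [None] * n      # object j -> person currently holding it
    assigned = [None] * n   # person i -> object
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [benefit[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=values.__getitem__)
        v_best = values[j_best]
        v_second = max((values[j] for j in range(n) if j != j_best),
                       default=v_best)
        prices[j_best] += v_best - v_second + eps  # bid increment
        if owner[j_best] is not None:              # evict previous owner
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned, prices

def emd_auction(A, B, eps=0.01):
    """EMD between equal-size point sets: a min-cost perfect matching,
    solved as a max-benefit assignment with benefit = -distance."""
    benefit = [[-math.dist(a, b) for b in B] for a in A]
    assigned, _ = auction_assignment(benefit, eps)
    return sum(math.dist(A[i], B[assigned[i]]) for i in range(len(A))) / len(A)
```

For identical sets both distances are zero, while for a rigid translation EMD equals the translation length; the quadratic per-bid cost here is exactly what makes large-scale EMD expensive and motivates the approximations discussed above.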
MSE     emd1     emd2     Ours
Sparse  1388.26  912.78   121.79
Dense   1014.70  1131.23  408.13

Table 1: The mean squared error (MSE) of different methods for estimating EMD. Experiments were conducted on "Sparse" point clouds (1024 points) and "Dense" point clouds (8192 points), calculating the MSE between the computed results and the actual EMD. Here, the two widely used EMD loss functions (Fan, Su, and Guibas 2017; Liu et al. 2020) are denoted as emd1 and emd2.

To tackle the challenge of accurate EMD approximation within a reasonable time frame, we propose the Adaptive Auction with Initial Price Algorithm (AAIP)1. This approach is built on the auction algorithm, using successive shortest path principles to redefine the distance relationships between specific spatial points. We have proven that initializing the auction algorithm with these initial prices converges to the assignment outcomes of the original algorithm and reduces the number of iterations in the training process. Our principal contributions can be succinctly encapsulated as follows.
• We propose a novel concept, the initial prices of the auction algorithm, to accelerate the convergence of the algorithm while ensuring the correctness of the results. We theoretically demonstrate its correctness and effectiveness.
• We propose an efficient algorithm for computing the initial prices. By utilizing the initial prices and the data features, we introduce a novel adaptive EMD approximation scheme for the shape loss function.
• We conduct extensive experiments on point cloud data of various categories and sizes. The experimental results show that the proposed approach effectively reduces the error in the shape loss function and achieves the best training results compared with the existing methods.

1The code is available at https://github.com/coldbubbletea/AAIP-Point-Cloud-Completion.

Related Work

Point cloud completion via learning-based methods.
Over recent years, many deep learning methods have been developed for the point cloud completion problem; they can be categorized into two camps, namely (1) voxel-based methods and (2) point-based methods. Voxel-based methods (Dai, Qi, and Nießner 2017; Han et al. 2017; Sharma, Grau, and Fritz 2016; Stutz and Geiger 2018; Liu et al. 2019a,b) first represent each point cloud as a set of voxels and then train learning-based models on these data. However, it is hard to tune the voxel size accurately, which can result either in huge computational and memory costs or in low accuracy during the learning process. Unlike the voxel-based methods, point-based methods (Qi et al. 2017; Li et al. 2018; Yuan et al. 2018; Sarmad, Lee, and Kim 2019; Tchapmi et al. 2019; Chen, Chen, and Mitra 2020) adopt loss functions that directly measure the distance between the data points of a pair of point clouds in order to train the learning-based models, which, to the best of our knowledge, achieves better accuracy for the point cloud completion problem than the voxel-based methods.

Loss function for point cloud completion. There are two commonly used loss functions (Fan, Su, and Guibas 2017) for point-based methods: Chamfer Distance (CD) and Earth Mover's Distance (EMD). Compared with EMD, CD is unable to penalize errors in shape details and is insensitive to differences in density distribution (Yuan et al. 2018; Achlioptas et al. 2018; Liu et al. 2020). Therefore, using EMD as the loss function in a learning-based model can provide more accurate results for the point cloud completion problem. However, EMD incurs a huge computational cost and is not scalable to large-scale point cloud data. Although numerous approximation methods have been proposed to boost the efficiency of computing EMD for image retrieval tasks (Jang et al. 2011; Cuturi 2013; Solomon et al.
2015; Altschuler, Weed, and Rigollet 2017; Chan, Yiu et al. 2019), using these methods for the point cloud completion problem requires constructing the cost matrix (with quadratic computational cost) between each pair of point clouds. Hence, these methods still suffer from a huge computational burden (especially for point clouds with a large number of data points). Recently, Fan et al. (Fan, Su, and Guibas 2017) directly proposed an EMD approximation scheme for point clouds, which has been widely adopted in many learning-based point cloud completion networks (Yuan et al. 2018; Chen, Chen, and Mitra 2019; Wu, Miao, and Fu 2021; Chang, Jung, and Xu 2021). However, this approach still takes O(n^2) memory space, limiting its applicability to large point cloud datasets. Later, Liu et al. (Liu et al. 2020) further proposed an auction-based (Bertsekas 1979) loss function estimation method (with O(n) memory space), which is scalable to large-scale point cloud datasets. However, compared with our method, all these approximation methods are still not accurate for computing EMD (cf. Table 1).

Preliminary

Point cloud completion problem statement. As shown in Figure 1, consider S′ as an ensemble of spatial points, situated on the visibly observed facets of an object, acquired through single or sequential observations via LiDAR. Concurrently, envisage T as a densely populated collection of spatial points, uniformly distributed across both the observed and unseen facets of the said entity. In this context, the shape completion problem is framed as the prediction of T using learning-based methods, given S′ as input.

Loss function of shape error. It is key to note that S′ is not necessarily contained within T due to independent sampling.
This absence of direct point-to-point correspondence between S′ and T means that the metric chosen for computing the shape-error loss, measuring the distance between S and T, should exhibit robust permutation invariance and accurately reflect the shape error. This choice directly impacts evaluative efficacy and, consequently, the quality of point cloud prediction. Fan et al. (Fan, Su, and Guibas 2017) utilized Earth Mover's Distance (EMD) as a loss function, which demonstrates greater robustness than Chamfer Distance (CD) for point cloud completion problems. The formulation of EMD in the context of point cloud distance measurement is as follows:

\mathrm{EMD}(S, T) = \min_{\phi: S \to T} \frac{1}{|S|} \sum_{x \in S} \lVert x - \phi(x) \rVert_2    (1)

where \phi: S \to T denotes a bijection that establishes a one-to-one mapping between the point clouds S and T, minimizing the average distance of corresponding points.

Auction algorithm on point cloud. The auction algorithm (Bertsekas 1985) is an elegant way to compute the bijection between the points of two point clouds, because parallel computation is straightforwardly achievable, rendering the procedure apt for calculating EMD as a loss function in deep learning models. The auction algorithm treats the two point clouds as sources S and sinks T. We will illustrate the algorithmic process using the example in Figure 2 and display it in Table 2. For each independent pairing of a source s_i ∈ S and a sink t_j ∈ T, there exists a value g_ij, which represents the source's evaluation of the sink's attractiveness, expressed as

g_{ij} = C - d_{ij},    (2)

where C is a constant that ensures positive values, and d_ij is the Euclidean distance between s_i and t_j. Assume C = 20 in the example. In the auction algorithm, each sink in the graph is given a numerical quantity, often denoted as µ_j, known as the price, which represents the cost of assigning that particular sink. Initially, the price for every sink is usually set to 0.
During each round, as illustrated in Table 2, each source s_i identifies a sink t_j that maximizes (g_ij − µ_j) in order to choose its optimal sink. As an example, s1, s2, and s3 place their bids for t2 in the first round. If the sink t_j is presently unassigned, or if s_i offers a higher bidding price than the other sources bidding on t_j, then s_i is assigned to t_j, and the price µ_j is updated to the bidding price of s_i accordingly. The bidding price β of a source is established by adding a bid increment to the existing price µ_j of a sink t_j. This bid increment lies within the range [ϵ, π+ϵ], where ϵ is a pre-established relaxation parameter and π represents the profit discrepancy between the optimal and suboptimal choices for s_i in its present state. However, if s_i does not offer the highest bidding price, it must seek another sink in the following round. In the example, the bid increment attains its maximum value (π + ϵ), with ϵ = 0.01. As s2 presents the highest bidding price 0.81 (β12 = 0.71, β22 = 0.81, β32 = 0.31), t2 is ultimately assigned to s2, with its price updated to 0.81. This process is repeated until each source is assigned a sink. As shown in Table 2, the algorithm proceeds for three more rounds, culminating in a one-to-one mapping, denoted by the red lines in Figure 2. The pseudocode for the specific algorithm is provided in the Appendix.

Figure 2: An example illustrating the method. The two point clouds are regarded as sources and sinks, with the objective of finding a one-to-one mapping of three-dimensional points that satisfies the optimization conditions. Finally, the global minimum distance EMD = 3.5 + 7.2 + 3.1 = 13.8 is determined.

Round   s1        s2        s3        t1     t2     t3
Init.   -         -         -         0      0      0
1       t2: 0.71  t2: 0.81  t2: 0.31  0      0.81   0
2       t1: 0.71  -         t1: 0.91  0.91   0.81   0
3       t2: 1.41  -         -         0.91   1.41   0
4       -         t3: 0.62  -         0.91   1.41   0.62

Table 2: The bidding (columns s1-s3) and price updates (columns t1-t3) of the auction algorithm across different rounds.

Although the auction algorithm has been widely used as the fundamental algorithm for solving the EMD in point cloud problems due to its outstanding parallel capabilities, its application still presents challenges. Firstly, its heuristic nature results in significantly varying time costs for different data characteristics, which are difficult to predict. Moreover, for some point cloud data, the algorithm requires a large number of iterations before it can stop. These factors are detrimental to the iterative computation process in deep learning with large datasets.

Methodology

Motivating Example

In the example provided in Figure 2, the auction algorithm requires at least four rounds to complete the calculation of the optimal assignment. However, if we set the initial prices to µ1 = 0.80, µ2 = 1.00, µ3 = 0, it becomes apparent in Table 3 that merely a single auction round is necessary to finalize the assignment.

Round   s1        s2        s3        t1     t2     t3
Init.   -         -         -         0.80   1.00   0
1       t2: 1.41  t3: 0.21  t1: 0.91  0.91   1.41   0.21

Table 3: The bidding and price updates of the auction algorithm when the prices are initialized.

As this example shows, the auction iterates because different source points select the same sink during the process, concurrently boosting the price of that sink. The essence of the auction algorithm is to iteratively raise the cost of selecting a sink, thereby reducing the number of sources interested in that sink. This continues until all unsuitable sources (those with better options) are eliminated, achieving a one-to-one assignment.
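The bidding procedure described above can be sketched in a few lines. The following is a minimal, sequential rendering (ours, not the paper's parallel implementation) of the ϵ-auction on a hypothetical 3×3 distance matrix — not the exact distances of Figure 2, which are not fully specified in the text — with C = 20 and ϵ = 0.01 as in the example.

```python
import numpy as np

def auction_assignment(d, C=20.0, eps=0.01):
    """Sequential eps-auction (Bertsekas): assign each source one sink.

    d[i, j] is the distance between source i and sink j; valuations are
    g[i, j] = C - d[i, j]. Returns assigned[i] = sink held by source i.
    """
    n = d.shape[0]
    g = C - d
    prices = np.zeros(n)                    # mu_j, initially 0
    owner = -np.ones(n, dtype=int)          # source currently holding sink j
    assigned = -np.ones(n, dtype=int)       # sink currently held by source i
    queue = list(range(n))                  # unassigned sources
    while queue:
        i = queue.pop(0)
        values = g[i] - prices              # net profit of each sink for i
        j = int(np.argmax(values))          # best sink
        top2 = np.partition(values, -2)[-2:]
        # maximal bid increment pi + eps: gap between best and second best
        prices[j] += (top2.max() - top2.min()) + eps
        if owner[j] != -1:                  # evict the previous holder
            assigned[owner[j]] = -1
            queue.append(owner[j])
        owner[j], assigned[i] = i, j
    return assigned

# Hypothetical 3x3 instance; the unique optimal matching is
# s1->t2, s2->t1, s3->t3 with total distance 1.0 + 2.0 + 2.5 = 5.5.
d = np.array([[4.0, 1.0, 6.0],
              [2.0, 5.0, 3.0],
              [7.0, 8.0, 2.5]])
a = auction_assignment(d)
print(a, d[np.arange(3), a].sum())
```

With ϵ chosen small enough relative to the cost gaps, the ϵ-auction returns an exactly optimal assignment; here it yields [1, 0, 2] with total distance 5.5.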
Thus, can we adopt certain strategies to initialize the prices in the auction algorithm, ensuring the same results but with a quicker elimination of unsuitable sources?

Auction Algorithm with Initial Prices

In the widely used auction algorithm, the price of each sink is often initialized to 0. Setting prices incorrectly could potentially deteriorate the assignment; thus we propose a guaranteed initial price that leaves the final optimized assignment A′ unaffected while accelerating the convergence of the algorithm.

Correctness of initial prices. Consider a local assignment A⁻ of S⁻ ⊆ S and T⁻ ⊆ T, and denote the final selling price of each assigned t_j ∈ T⁻ in A⁻ as µ⁻_j. We propose the initial prices described in Definition 1.

Definition 1 (Initial prices, ∆). We say the set of initial prices ∆ is valid if 0 ≤ δ_j ≤ µ⁻_j for all t_j ∈ T⁻.

When each initial price δ_j ∈ ∆ is employed to initialize the corresponding sink price µ_j in the auction algorithm, we demonstrate that the final assignment remains correct. This is encapsulated in our proposed Theorem 1.

Theorem 1 (Correctness of initial prices). By initializing the price of the corresponding sink with the initial prices ∆, the final assignment A′ is equivalent to the result A∗ of the auction algorithm run without initial prices, that is, A′ ≡ A∗.

The proof of Theorem 1 is provided in the Appendix. In simple terms, the auction algorithm ensures that the selling price of a sink in a locally optimal assignment does not exceed its selling price in the globally solved problem. This implies that even if we initialize the price δ_j of t_j at a value less than or equal to µ⁻_j, the sink will eventually be procured by a suitable source at a higher (correct) selling price during the auction process. This maintains the consistency of the assignment. Theorem 1 thus provides a solvable upper bound when we need to initialize the price of a sink.
As long as we do not exceed this upper bound when assigning the price, we can ensure that the results remain unchanged.

Effectiveness of initial prices. Furthermore, the initial-price strategy can expedite the convergence of the auction algorithm towards its final result by causing unsuitable sources to cease bidding for a specific sink prematurely. This is stated in our proposed Lemma 1.

Lemma 1. The initial prices reduce the upper bound on the number of iterations required by the auction algorithm.

For proof details, please refer to the Appendix. As demonstrated in Table 3, the initialization of prices leads to a scenario where the unsuitable source s2 cannot bid for t2 in the first round of the auction and instead directly selects the optimal choice t3. This results in the algorithm completing the assignment within a single iteration. Clearly, the core of the initial prices is to reconstruct the relationship between sources and sinks in the initial state, which reduces the number of unsuitable sources and thereby accelerates convergence. Thus, based on Lemma 1, we prove that the proposed initial prices, when used to initialize the sink prices, indeed accelerate the convergence of the auction algorithm.

Configuration of Initial Prices, ∆

However, this approach presents practical challenges. The upper bound µ⁻_j for the initial price δ_j of t_j must itself be obtained by running the auction algorithm to find a local assignment, and the computational cost of doing so is unpredictable (as per Definition 1). Consequently, the overall computational cost of (1) determining the initial prices ∆ and (2) solving the final assignment A′ using ∆ could surpass that of the original auction algorithm.
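The effect of Theorem 1 and Lemma 1 can be illustrated numerically. The sketch below (our illustration, not the paper's implementation) runs a sequential ϵ-auction twice on a hypothetical 3×3 instance: once with zero prices and once seeded with valid initial prices δ_j ≤ µ⁻_j. The seeded prices were read off from the vanilla run's final prices (6.51, 0.52, 3.01), with the second entry rounded down to 0.50 so that each δ_j stays within the bound; a completed full run is itself a valid local assignment in the sense of Definition 1.

```python
import numpy as np

def auction(d, C=20.0, eps=0.01, init_prices=None):
    """Sequential eps-auction; returns (assignment, number_of_bids).

    init_prices seeds the sink prices mu_j (Definition 1); None reproduces
    the vanilla algorithm with all prices starting at 0.
    """
    n = d.shape[0]
    g = C - d
    prices = np.zeros(n) if init_prices is None else np.array(init_prices, float)
    owner = -np.ones(n, dtype=int)
    assigned = -np.ones(n, dtype=int)
    queue, bids = list(range(n)), 0
    while queue:
        i = queue.pop(0)
        values = g[i] - prices
        j = int(np.argmax(values))
        top2 = np.partition(values, -2)[-2:]
        prices[j] += (top2.max() - top2.min()) + eps   # bid increment pi + eps
        bids += 1
        if owner[j] != -1:                             # evict previous holder
            assigned[owner[j]] = -1
            queue.append(owner[j])
        owner[j], assigned[i] = i, j
    return assigned, bids

# Hypothetical instance where sources 0 and 1 contend for the same sink 0.
d = np.array([[1.0, 7.0, 8.0],
              [1.5, 8.0, 9.0],
              [4.0, 6.0, 3.0]])
a0, b0 = auction(d)                                    # vanilla prices
a1, b1 = auction(d, init_prices=[6.51, 0.50, 3.01])    # delta_j <= final mu_j
print(a0, b0, a1, b1)
```

Both runs return the same optimal assignment [1, 0, 2]; the vanilla run takes 4 bids (source 0 first bids on the contended sink and is later evicted), while the seeded run finishes in 3 bids because the initial price steers source 0 away from the contended sink immediately.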
In this endeavor, our aim is to configure the initial prices in a manner that fulfills the following requirements: (1) ensuring the fulfillment of Theorem 1, which requires that for every sink t_j ∈ T⁻, δ_j is less than or equal to µ⁻_j; and (2) providing a solution cost that can be controlled, thereby reducing the overall computational burden.

Initial prices derived from a local assignment. To begin, let us delve into an initial price that not only adheres to the conditions of Theorem 1, but can also be computed from the result of a local assignment. Let A⁻ be the local assignment and d_ij denote the Euclidean distance between a point s_i ∈ S and a point t_j ∈ T. The specifics are presented in Lemma 2.

Lemma 2. Assume that (s_i, t_j) is an assigned pair in a local assignment A⁻, and that within this assignment, for each s_p ∈ (S⁻ − s_i), the sink assigned to s_p is denoted t_q ∈ T⁻. Considering the value α_j = max_{s_p ∈ (S⁻ − s_i)} {d_pq − d_pj + ϵ}, we can set δ_j = max{0, α_j}.
In the context of spatial point cloud issues, each edge e satisfies: a capacity of 1, a cost equivalent to the distance between points for e(vi, vj). Several algorithms exist to solve the MCF problem, with the Hungarian algorithm (Kuhn 1955) and the Successive Shortest Path Algorithm (SSPA) (Derigs 1981) being the most commonly used due to their low complexity. However, the Hungarian algorithm requires to construct a cost matrix makes it unsuitable for high-density point cloud problems. SSPA, on the other hand, is more suited for this task, as it operates directly on the flow graph and iteratively computes the assignment via shortest path searches. Specifically, we utilize SSPA to obtain the local assignment A− on the bipartite graph (S−, T). The outcome A−produced by SSPA successfully fulfills the prerequisites outlined in Lemma 2 for determining the initial prices. We name this calculation process as Initial Prices Algorithm (InPrA). In the next section, we shall delve into an even more streamlined approach to compute the local assignment, leveraging a simplified graph strategy. Optimization Strategies In this section, we will explore advanced optimization techniques for algorithms, enhancing computational speed, strengthening the adaptability of algorithms as loss function solvers, and finalizing the algorithm design. Simplified graph strategy. We employ SSPA for local optimal assignment and determine the initial prices. Like many others, the time complexity grows with spatial points, complicating the shortest path identification in a directed bipartite graph and increasing computation time. The Simplified Graph Incremental Algorithm (SIA) (U et al. 2010) mitigates this complexity by preserving a novel subgraph in each loop, where edges are added until a specific condition is met. This ensures the shortest path found on the subgraph also represents a shortest path in the original graph. 
SIA involves a k-nearest-neighbour search to establish the simplified graph, for which we use an R∗-Tree (Beckmann et al. 1990) to pre-process the spatial point cloud data. We believe any data structure that facilitates a k-nearest-neighbour search is appropriate for this algorithm. Upon applying SIA, the locally optimal assignment A⁻ is resolved, followed by the search for max{0, α_j} of each t_j through the established shortest-path tree, expediting the computation of the initial prices.

Algorithm 1: Source Sorted Algorithm (SSA)
Input: spatial point clouds S and T (with size n)
Output: sorted point list S∗
1: Initialize an (s, t)-pair list H, a point set CD, and a point list S∗
2: foreach s_i ∈ S do
3:     t_j∗ ← argmax_{t_j ∈ T} {g_ij}; push the pair (s_i, t_j∗) to H
4: Sort H based on t in each (s, t)-pair
5: for 1 ≤ i ≤ n do
6:     if H(i).t = H(i−1).t then
7:         insert H(i).s into CD
8: S∗ ← sort S with priority s1 > s2, where s1 ∈ CD and s2 ∈ (S − CD)

Computation order in SSPA/SIA. To meet the training cost requirements, we impose a time constraint that may restrict participation to only a subset of the source points, S⁻. Instead of randomly selecting source points, we prioritize the inclusion of those source points that bid for the same sink points in the auction algorithm. This approach (cf. Algorithm 1) has proven beneficial, as it increases the number of sink points that receive initial prices. Our experiments further validate this observation, reinforcing the effectiveness of this selective approach in the SSPA/SIA algorithm.

Algorithm 2: Auction with Initial Price Algorithm
1: S∗ ← SSA(S, T)
2: S⁻ ← getTopK(S∗)
3: A⁻ ← SIA(S⁻, T)
4: ∆ ← InPrA(A⁻)
5: Initialize µ_j to δ_j ∈ ∆ for all t_j ∈ T
6: A′ ← Auction Algorithm with the initialized prices δ_j

Main algorithm. Thus, we have finalized the workflow of the Auction with Initial Price algorithm, as described in Algorithm 2.
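A direct Python rendering of Algorithm 1 might look as follows (our sketch; it uses a slight simplification that marks every member of a contended group, whereas the sorted-H scan in the pseudocode marks the duplicates it encounters — the resulting priority order is the same intent).

```python
import numpy as np

def ssa_order(g):
    """Source Sorted Algorithm (SSA) sketch: order the sources so that
    those whose favourite sink is contended by another source come first.
    g[i, j] is the valuation g_ij = C - d_ij of sink j by source i."""
    fav = g.argmax(axis=1)                           # each source's best sink
    counts = np.bincount(fav, minlength=g.shape[1])  # bidders per sink
    contended = counts[fav] > 1                      # favourite shared? -> CD
    # Contended sources first; the stable sort keeps the original order
    # inside each group. Taking the top-k of this order gives S^-.
    return np.argsort(~contended, kind="stable")

# Example: sources 1 and 2 both favour sink 1, source 0 favours sink 0,
# so sources 1 and 2 are placed ahead of source 0.
g = np.array([[5.0, 1.0, 0.0],
              [0.0, 6.0, 1.0],
              [1.0, 7.0, 0.0]])
print(ssa_order(g))
```

Prioritizing contended sources means that, under a fixed budget for S⁻, the sinks most likely to trigger bidding wars are exactly the ones that end up with initial prices.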
It sorts the sources with SSA, selects k points as S⁻, calculates a local assignment A⁻ using SIA, computes the initial prices through InPrA, and finally initializes the auction algorithm's prices to the computed initial prices. This process culminates in the final assignment.

Adaptive iteration strategy. Existing methods frequently need to terminate the Earth Mover's Distance (EMD) calculation abruptly due to the heavy computational burden and the extensive iterations involved in deep learning. In contrast to these methods, which lack robustness, we propose an approach that adaptively adjusts the algorithm's time cost based on the data characteristics. Although S⁻ can be selected arbitrarily for the local assignments solved by SIA, fixing the number of assignment points is not robust in practical point cloud problems with diverse data features, potentially leading to insufficient solution points or prolonged solving time. Conversely, by restraining the SIA algorithm under certain conditions, the number of completed local assignments, or loops, can better reflect the data characteristics. In our method, we set a fixed total search-edge limit for the construction of the SIA subgraph. This limit is determined by the size of the point cloud data, and when it is reached, it signals the end of the local assignment computation. As the SIA carries out more loops, it needs fewer edges on average to construct the subgraph used for the shortest-path search. This efficiency translates to fewer suitable match candidates in one point cloud for a point in the other, which in turn reduces the chance of encountering competitors. The end result is a decrease in the iterations required for the auction algorithm to conclude. Conversely, fewer loops necessitate more iterations.
Based on this, an Adaptive Iteration Strategy is proposed as follows: if the spatial point cloud terminates at the k-th loop when using SIA to calculate the local assignment, then the number of iterations i of the auction algorithm with initial prices satisfies

i = \omega \lvert S \rvert k^{-1} + \lambda,    (3)

where i is determined by the provided scale factor ω and by the number of loops k that the SIA can complete within the given computational limit relative to the size |S|; the symbol λ signifies the minimum number of rounds required to complete the auction. Thus, if a point cloud is readily solvable by the auction algorithm (large k), the algorithm correspondingly decreases the number of iterations; conversely, if the problem is more complex, the algorithm increases the number of iterations as much as possible. Finally, should the auction algorithm terminate due to adaptive iteration while there remain unassigned points, then, following MSN (Liu et al. 2020), we assign these points to their nearest counterparts in the other point cloud and take those distances as their matching costs. Together with the above optimization strategies, this completes the core design and elucidation of the proposed EMD approximation algorithm for spatial point clouds. With these optimization strategies, we have expanded our Initial Price Algorithm into the Adaptive Auction with Initial Price Algorithm (AAIP).

Experiments

Experimental Setup

Dataset. In this study, the ShapeNet CAD dataset (Chang et al. 2015) was used for the spatial point cloud data. This dataset was chosen to ensure the fairness of our experiments, as it is used for training and testing by the models under investigation. We conducted experiments with point clouds uniformly selected from eight categories: watercraft, cabinet, table, airplane, car, chair, sofa, and lamp.
The complete point clouds, serving as the ground truth, were produced through uniform sampling from the models' mesh surfaces, while the partial spatial point clouds were simulated using back-projected depth images (Yuan et al. 2018). Notably, these partial point clouds were collected from eight random viewpoints of each model to more closely reflect real-world conditions. Ultimately, we generated a total of 64000 pairs of point clouds for training and 9600 pairs of point clouds for testing.

Compared methods. Two EMD loss function algorithms predominate in the point cloud completion literature. First, Fan et al. introduce an approximation scheme employed as an estimate of the EMD between point cloud pairs, thereby facilitating shape loss computations (Fan, Su, and Guibas 2017); in the experimental section of this paper, this algorithm is denoted "emd1". Second, MSN (Liu et al. 2020) presents an enhanced approximation approach based on the auction algorithm, requiring only O(n) memory; this method is referred to as "emd2". Our EMD estimation algorithm is denoted "AAIP".

Experimental Results from Point Cloud Completion Networks

We scrutinize the empirical outcomes of deep learning models for point cloud completion, utilizing the diverse EMD approximation strategies in an end-to-end fashion.

Backbone models. Our proposed algorithm AAIP is independently applied to two deep learning point cloud completion models, PCN (Yuan et al. 2018) and MSN (Liu et al. 2020). PCN is a two-stage point cloud generation model whose strong performance has instigated a cascade of subsequent methodologies; many point cloud completion models have adopted the coarse-dense network architecture proposed by PCN. Due to the limitations of the EMD approximation scheme it employs, PCN uses CD/EMD as the loss function for its coarse output (1024 points), but only CD for the loss function of its dense output (16384 points).
MSN directly generates a dense point cloud as the coarse output, which is further optimized to produce the final point cloud output. Different from PCN, MSN exclusively uses EMD as the loss function and maintains 8192 points for both the coarse output and the final output. To ensure a fair comparison of the models' capabilities, we replaced their respective EMD approximation schemes with our proposed AAIP as the loss function.

Comparison results. The experimental results on PCN and MSN are presented in Table 4. In line with the standards of other works, we employ the precise EMD as the criterion for evaluating the quality of the model output: a smaller EMD signifies less shape discrepancy between the generated point cloud and the ground truth, implying a higher quality of the generated point cloud.

(a) PCN. When acting as the loss function for the coarse output, AAIP outperforms training with CD or emd1 across all categories. However, for point clouds of complex categories like "lamp", with their intricate topological structures, CD fails to adequately penalize differences in shape detail, resulting in a larger EMD. Moreover, adopting AAIP as the loss function for the large-scale dense output also achieves superior experimental results within the framework of a PCN-based model.

(b) MSN.
Given that the shape error calculation of MSN is entirely anchored on EMD, the overall results it produces are superior to PCN, with the completed point clouds being closer to the ground truth. This is particularly noticeable in complex point cloud types such as the lamp class, where the EMD's precise reaction to local shape variations aids the completion of complex shapes beyond the intrinsic differences between the models themselves. Similarly, the results of AAIP outperform those of emd2 across all categories. Thus, the two sets of experimental results demonstrate the superior performance of AAIP as an approximation scheme for the EMD loss function in deep learning models for point cloud completion.

Methods         chair   table   sofa    cabinet  lamp    car     airplane  watercraft  average
PCN (CD+CD)     62.46   66.88   52.85   61.07    102.88  50.86   38.17     52.22       60.93
PCN (emd1+CD)   62.83   59.94   52.53   54.91    69.50   54.12   33.22     55.30       55.29
PCN (AAIP+CD)   52.73   49.77   48.21   49.66    62.95   38.15   27.22     44.13       46.60
PCN (CD+AAIP)   43.23   43.54   34.58   35.56    63.55   31.13   25.79     35.96       39.17
MSN (emd2)      33.12   31.12   31.11   36.13    36.66   32.90   18.70     25.66       30.68
MSN (AAIP)      28.99   28.25   28.48   34.18    31.53   31.45   16.58     22.58       27.71

∗For PCN, the former term inside the parentheses indicates the loss function used for the coarse output, while the latter term denotes the loss function used for the dense output.

Table 4: The training results (EMD×10^3) of the point cloud completion networks on the ShapeNet dataset.
Performance of EMD Approximation

Figure 3: The accuracy and efficiency results (in terms of mean square error and time, respectively) of different EMD estimation methods for the point cloud completion problem, where we compare our proposed methods, AIP (w/o AI) and AAIP (+AI), with the two most widely used approaches (Fan, Su, and Guibas 2017; Liu et al. 2020), emd1 and emd2.

We conduct experiments to analyze the accuracy and efficiency of the various EMD approximation schemes across point cloud data of different sizes. All experimental data are derived from the training of actual point cloud completion models, with specific details presented in the Appendix. In the bar-chart section of Figure 3, it can be observed that across datasets of different point cloud sizes, the EMD estimates of emd1 and emd2 diverge significantly from the actual EMD, whereas the MSE between the EMD estimated by AAIP (in both settings) and the actual EMD is substantially smaller. Moreover, AAIP maintains a low MSE in its EMD calculations across the various point cloud sizes. This suggests that AAIP provides superior accuracy in EMD estimation between point cloud pairs and is robust across different point cloud sizes. In the line-chart section, emd1 exhibits the swiftest computational speed, with only a marginal difference between our methods and emd2. In order to complete its estimation quickly, emd1 sacrifices the ability to correctly assign most points, leading to a significant discrepancy from the true value. These results illustrate that, compared to emd2, AAIP guarantees the performance of the EMD approximation at the expense of a minimal time cost, establishing a significant precision gap with the other algorithms.
Given that emd2 is a widely adopted approximation scheme, such a time cost is acceptable.

Ablation Study on Optimization Strategy

Adaptive iteration strategy. We conducted experiments comparing the use of the adaptive iteration strategy (denoted "AAIP (+AI)") and its absence (denoted "AIP (w/o AI)"). After computing the initial prices, AIP (w/o AI) terminates the auction algorithm after a fixed number of iterations. On point cloud data of varying sizes, the EMD estimation accuracy of AIP (w/o AI), thanks to the initial prices, already surpasses that of emd1 and emd2. However, the efficiency and accuracy of the method improve further with the adaptive iteration strategy. This improvement is particularly evident on larger point cloud datasets, which validates that our proposed optimization strategy can effectively enhance the method based on the characteristics of the data.

Conclusion

In this paper, we introduce the Adaptive Auction with Initial Price Algorithm, designed to efficiently and accurately estimate the shape loss function in point cloud completion problems. Initially, we propose the use of initial prices to expedite the convergence of the auction algorithm, coupled with a theoretical proof. We then propose an efficient calculation method for the initial prices, based on successive shortest paths. In accordance with the practical use of loss functions in deep learning models, we propose optimization strategies that adaptively align with the data characteristics. Through experiments conducted on eight categories of point cloud data and two distinct deep learning models, we demonstrate the effectiveness and superiority of the proposed method.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7293

Acknowledgments

This work was supported by the Science and Technology Development Fund Macau SAR (0052/2023/RIA1, 0031/2022/A, 0015/2019/AKP, SKL-IOTSC-2021-2023), the Research Grant of University of Macau (MYRG2022-00252-FST), the National Natural Science Foundation of China under Grant No. 62202401, and the Wuyi University Hong Kong and Macau Joint Research Fund (2021WGALH14). This work was performed in part at SICC, which is supported by SKL-IOTSC, University of Macau.

References

Achlioptas, P.; Diamanti, O.; Mitliagkas, I.; and Guibas, L. 2018. Learning representations and generative models for 3D point clouds. In International Conference on Machine Learning, 40-49. PMLR.
Altschuler, J. M.; Weed, J.; and Rigollet, P. 2017. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. In Advances in Neural Information Processing Systems 30, 1964-1974.
Beckmann, N.; Kriegel, H.; Schneider, R.; and Seeger, B. 1990. The R*-Tree: An Efficient and Robust Access Method for Points and Rectangles. In Proceedings of the 1990 ACM SIGMOD International Conference on Management of Data, 322-331. ACM Press.
Bertsekas, D. P. 1979. A distributed algorithm for the assignment problem. Lab. for Information and Decision Systems Working Paper, MIT.
Bertsekas, D. P. 1985. A distributed asynchronous relaxation algorithm for the assignment problem. In 1985 24th IEEE Conference on Decision and Control, 1703-1704. IEEE.
Chan, T. N.; Yiu, M. L.; et al. 2019. The power of bounds: Answering approximate earth mover's distance with parametric bounds.
IEEE Transactions on Knowledge and Data Engineering, 33(2): 768-781.
Chang, A. X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; et al. 2015. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012.
Chang, Y.; Jung, C.; and Xu, Y. 2021. FinerPCN: High fidelity point cloud completion network using pointwise convolution. Neurocomputing, 460: 266-276.
Chen, X.; Chen, B.; and Mitra, N. J. 2019. Unpaired point cloud completion on real scans using adversarial training. arXiv preprint arXiv:1904.00069.
Chen, X.; Chen, B.; and Mitra, N. J. 2020. Unpaired Point Cloud Completion on Real Scans using Adversarial Training. In 8th International Conference on Learning Representations (ICLR 2020). OpenReview.net.
Cuturi, M. 2013. Sinkhorn Distances: Lightspeed Computation of Optimal Transport. In Advances in Neural Information Processing Systems 26, 2292-2300.
Dai, A.; Qi, C. R.; and Nießner, M. 2017. Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 6545-6554. IEEE Computer Society.
Derigs, U. 1981. A shortest augmenting path method for solving minimal perfect matching problems. Networks, 11(4): 379-390.
Fan, H.; Su, H.; and Guibas, L. J. 2017. A Point Set Generation Network for 3D Object Reconstruction from a Single Image. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2463-2471. IEEE Computer Society.
Han, X.; Li, Z.; Huang, H.; Kalogerakis, E.; and Yu, Y. 2017.
High-Resolution Shape Completion Using Deep Neural Networks for Global Structure and Local Geometry Inference. In IEEE International Conference on Computer Vision (ICCV 2017), 85-93. IEEE Computer Society.
Jang, M.-H.; Kim, S.-W.; Faloutsos, C.; and Park, S. 2011. A linear-time approximation of the earth mover's distance. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, 505-514.
Kuhn, H. W. 1955. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2): 83-97.
Li, C.-L.; Zaheer, M.; Zhang, Y.; Poczos, B.; and Salakhutdinov, R. 2018. Point cloud GAN. arXiv preprint arXiv:1810.05795.
Liu, M.; Sheng, L.; Yang, S.; Shao, J.; and Hu, S.-M. 2020. Morphing and sampling network for dense point cloud completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 11596-11603.
Liu, Y.; Fan, B.; Xiang, S.; and Pan, C. 2019a. Relation-Shape Convolutional Neural Network for Point Cloud Analysis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019), 8895-8904. Computer Vision Foundation / IEEE.
Liu, Z.; Tang, H.; Lin, Y.; and Han, S. 2019b. Point-Voxel CNN for Efficient 3D Deep Learning. In Advances in Neural Information Processing Systems 32, 963-973.
Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 77-85. IEEE Computer Society.
Reutebuch, S. E.; Andersen, H.-E.; and McGaughey, R. J. 2005.
Light detection and ranging (LIDAR): an emerging tool for multiple resource inventory. Journal of Forestry, 103(6): 286-292.
Sarmad, M.; Lee, H. J.; and Kim, Y. M. 2019. RL-GAN-Net: A reinforcement learning agent controlled GAN network for real-time point cloud shape completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5898-5907.
Sharma, A.; Grau, O.; and Fritz, M. 2016. VConv-DAE: Deep Volumetric Shape Learning Without Object Labels. In Computer Vision - ECCV 2016 Workshops, Part III, volume 9915 of Lecture Notes in Computer Science, 236-250.
Solomon, J.; de Goes, F.; Peyré, G.; Cuturi, M.; Butscher, A.; Nguyen, A.; Du, T.; and Guibas, L. J. 2015. Convolutional Wasserstein distances: efficient optimal transportation on geometric domains. ACM Trans. Graph., 34(4): 66:1-66:11.
Stutz, D.; and Geiger, A. 2018. Learning 3D Shape Completion From Laser Scan Data With Weak Supervision. In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), 1955-1964. Computer Vision Foundation / IEEE Computer Society.
Tchapmi, L. P.; Kosaraju, V.; Rezatofighi, H.; Reid, I. D.; and Savarese, S. 2019. TopNet: Structural Point Cloud Decoder. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019), 383-392. Computer Vision Foundation / IEEE.
U, L. H.; Mouratidis, K.; Yiu, M. L.; and Mamoulis, N. 2010. Optimal matching between spatial datasets under capacity constraints. ACM Trans. Database Syst., 35(2): 9:1-9:44.
Waissi, G. R. 1994. Network flows: Theory, algorithms, and applications.
Wu, H.; Miao, Y.; and Fu, R. 2021. Point cloud completion using multiscale feature fusion and cross-regional attention. Image and Vision Computing, 111: 104193.
Yuan, W.; Khot, T.; Held, D.; Mertz, C.; and Hebert, M. 2018.
PCN: Point Completion Network. In 2018 International Conference on 3D Vision (3DV), 728-737. IEEE.
Vision-Language Pre-training with Object Contrastive Learning for 3D Scene Understanding

Taolin Zhang1*, Sunan He2*, Tao Dai3†, Zhi Wang1, Bin Chen4, Shu-Tao Xia1,5
1Tsinghua Shenzhen International Graduate School, Tsinghua University
2Department of Computer Science and Engineering, Hong Kong University of Science and Technology
3College of Computer Science and Software Engineering, Shenzhen University
4Harbin Institute of Technology, Shenzhen
5Research Center of Artificial Intelligence, Peng Cheng Laboratory
zhangtlin3@gmail.com, sunan.he@connect.ust.hk, daitao.edu@gmail.com, wangzhi@sz.tsinghua.edu.cn, chenbin2021@hit.edu.cn, xiast@sz.tsinghua.edu.cn

Abstract

In recent years, vision-language pre-training frameworks have made significant progress in natural language processing and computer vision, achieving remarkable performance improvements on various downstream tasks. However, when extended to point cloud data, existing works mainly focus on building task-specific models and fail to extract universal 3D vision-language embeddings that generalize well. We carefully investigate three common tasks in semantic 3D scene understanding and derive key insights into the development of a pre-training model. Motivated by these observations, we propose a vision-language pre-training framework, 3DVLP (3D vision-language pre-training with object contrastive learning), which transfers flexibly to 3D vision-language downstream tasks. 3DVLP takes visual grounding as the proxy task and introduces an Object-level IoU-guided Detection (OID) loss to obtain high-quality proposals in the scene. Moreover, we design an Object-level Cross-Contrastive alignment (OCC) task and an Object-level Self-Contrastive learning (OSC) task to align the objects with descriptions and distinguish different objects in the scene, respectively. Extensive experiments verify the excellent performance of 3DVLP on three 3D vision-language tasks, reflecting its superiority in semantic 3D scene understanding.
Code is available at https://github.com/iridescentttt/3DVLP.

Introduction

Semantic 3D scene understanding has recently attracted increasing research interest due to its wide applications such as automatic driving, human-machine interaction, etc. Much progress has been made in semantic 3D scene understanding, with task-specific models continuously pushing the state of the art in various downstream tasks including visual grounding (Chen, Chang, and Nießner 2020; Zhao et al. 2021; Cai et al. 2022), dense captioning (Chen et al. 2021b), and question answering (Azuma et al. 2022). While effective on their benchmarks, the task-specific representations obtained by existing approaches prevent them from generalizing well to other tasks.

*These authors contributed equally.
†Corresponding author: Tao Dai.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Relationship between 3D vision-language tasks. Firstly, all the tasks rely heavily on the object detector to locate objects in the scene. Secondly, 3D vision-language tasks require an effective fusion module to understand the connection between point cloud and language.

A common practice for extracting joint multimodal representations is to adopt the pre-training plus fine-tuning paradigm (Tan and Bansal 2019; Zhai et al. 2022; Alayrac et al. 2022; Zha et al. 2023). Existing works on semantic 3D scene understanding are still limited, which motivates us to introduce this paradigm in an appropriate way. However, 3D vision-language pre-training differs from pre-training in 2D vision-language tasks since point cloud data is introduced (Guo et al.
2020). The objectives designed in previous works cannot be directly applied to 3D vision-language pre-training due to the gap in downstream tasks. Therefore, it is essential to identify the shared nature across different tasks in semantic 3D scene understanding to determine the appropriate pre-training model. Figure 1 provides an intuitive depiction of the relationships among three 3D vision-language tasks, and two key observations emerge from the comparison. Firstly, all of these tasks rely on object detection when applying two-stage pipeline models, which is a common practice in semantic 3D scene understanding (Chen, Chang, and Nießner 2020; Chen et al. 2021b). Secondly, an effective fusion module is required to enable information interaction across modalities for a deeper understanding of the relationships between objects in the scene, such as the matching stage in visual grounding (Zhao et al. 2021; Cai et al. 2022) and the classification stage in question answering (Azuma et al. 2022). These observations in semantic 3D scene understanding pose several challenges in designing an effective training paradigm for the pre-training model. Firstly, high-quality bounding boxes are required for object detection. These boxes represent the model's ability to segment the scene at the object level, as demonstrated by works that use a detection-then-matching pipeline (Cai et al. 2022; Chen, Chang, and Nießner 2020). Secondly, object detection requires the model to distinguish between different objects in the scene, especially when there are many objects similar to the target (Chen, Chang, and Nießner 2020). This means the model needs to be able to identify what makes objects distinct in the scene, which is challenging and has not yet been fully addressed.
Thirdly, the fusion module suffers from the issue that data coming from different modalities are unaligned, similar to 2D vision-language learning (Li et al. 2021; Chen et al. 2020). Point cloud features and word token embeddings exist in different spaces, making it challenging for the fusion module to model their interactions. To this end, we propose 3DVLP: vision-language pre-training with object contrastive learning in semantic 3D scene understanding. (1) To obtain better object bounding boxes, we introduce an Object-level IoU-guided Detection (OID) loss. Specifically, we leverage visual grounding as the proxy task, as it shares the same objective of localizing high-quality bounding boxes. Additionally, we incorporate the Distance IoU (DIoU) loss (Zheng et al. 2020) and label smoothing at the object level to achieve faster convergence and better performance. (2) We further introduce the Object-level Self-Contrastive learning (OSC) task to distinguish the target object from others. The self-contrastive learning is performed at the object level, where boxes with an IoU higher than a specific threshold are considered positive samples and the others are regarded as negative ones. (3) To enable full information interaction between point cloud and language, we design the Object-level Cross-Contrastive alignment (OCC) task to align the unimodal representations across these two modalities. We use a similar IoU filter as in OSC to generate positive and negative samples, which are then fed as inputs to calculate the cross-contrastive loss. To apply the contrastive loss at the object level, we leverage the large number of proposals generated by the object detector and filter positive ones with the IoU filter. The positive proposals can be regarded as diverse data augmentations of the ground truth and share similarity to some extent.
The positive and negative sample pairs contain sufficient information for training the contrastive loss and help the model better distinguish different objects in the scene. The contributions of this study are summarized as follows: (1) A 3D vision-language pre-training framework called 3DVLP is proposed, achieving the unification of the tasks in semantic 3D scene understanding. (2) We introduce an Object-level IoU-guided Detection loss to obtain high-quality bounding boxes. We also present two proxy tasks at the object level, the Object-level Cross-Contrastive alignment task and the Object-level Self-Contrastive learning task, which facilitate cross-modal alignment and help the model distinguish objects more accurately, respectively. (3) We conduct extensive experiments and empirically demonstrate the effectiveness of 3DVLP.

Related Work

3D Visual-Language Tasks

Recently, semantic 3D scene understanding has raised great interest and has been widely explored across various tasks, including 3D visual grounding (Chen, Chang, and Nießner 2020; Zhao et al. 2021; Cai et al. 2022; Luo et al. 2022), 3D dense captioning (Chen et al. 2021b), and 3D question answering (Azuma et al. 2022). 3D visual grounding aims to locate a region of interest in a scene based on a referring description. Chen et al. (Chen, Chang, and Nießner 2020) introduce the ScanRefer dataset, while Achlioptas et al. (Achlioptas et al. 2020) collect two datasets, Nr3D and Sr3D. Most existing methods rely on a detection-then-match pipeline to tackle the grounding task. 3DVG-Transformer (Zhao et al. 2021) introduces a coordinate-guided contextual aggregation module to enhance proposal generation. HAM (Chen et al. 2022b) shifts attention to contextual information and develops both local and global attention modules, while BUTD-DETR (Jain et al. 2022) presents a DETR-like (Zhu et al.
2020) referential grounding model that incorporates guidance from language, points, and objects. 3D-SPS (Luo et al. 2022) proposes the first one-stage end-to-end framework and mines the cross-modal relationship based on points. Dense captioning in 3D scenes requires the model to derive high-quality object bounding boxes from point cloud data and generate corresponding descriptions. Scan2Cap (Chen et al. 2021b) extends the dense captioning task to 3D scenes based on ScanRefer and establishes a message-passing network. SpaCap3D (Wang et al. 2022) investigates the relative spatiality of objects and builds a spatiality-guided transformer. Importantly, it designs an object-centric decoder by using a vision token as the information carrier of the target object. 3D question answering requires the model to generate a correct answer given a point cloud and a question. ScanQA (Azuma et al. 2022) collects 41k question-answer pairs and brings the question-answering task into 3D scenes. Besides, it proposes a baseline model by casting the task as a classification problem. FE-3DGQA (Zhao et al. 2022) proposes another dataset and predicts the answer through an attention-based token encoding and fusion module.

Figure 2: Pipeline of 3DVLP in semantic 3D scene understanding. 3DVLP takes visual grounding as the proxy task and utilizes the Object-level IoU-guided Detection (OID) loss to boost the performance of the object detector.
We also introduce the Object-level Cross-Contrastive alignment task and the Object-level Self-Contrastive learning task in the pre-training stage, which facilitate cross-modal alignment and enable the model to distinguish objects more accurately, respectively.

3D Vision-Language Pre-training

Recently, some studies have focused on vision-language pre-training for point clouds. PointCLIP (Zhang et al. 2022), PointCLIP V2 (Zhu et al. 2022) and CLIP2Point (Huang et al. 2022b) utilize CLIP to align point clouds with text. CrossPoint (Afham et al. 2022) renders the point cloud into images and applies contrastive losses for intra-modal and cross-modal alignment. These works mainly focus on tasks over a single object, while our research deals with multiple objects in semantic scene understanding. The models mentioned above are incapable of tackling downstream tasks in the scene such as visual grounding and dense captioning. A similar pre-training framework in semantic scene understanding is 3D-language pre-training (3DLP) (Jin et al. 2023), which utilizes semantic-level and contextual alignment for cross-modal fusion. Moreover, it applies masked modeling to both proposals and language to gain a better understanding across modalities. In contrast, the introduction of the OID loss in 3DVLP during the pre-training phase markedly improves the performance of the object detector for scene understanding. Consequently, 3DVLP surpasses 3DLP (Jin et al. 2023) by a large margin in unique scenarios, especially in terms of Acc@0.5.

Method

As demonstrated in Figure 2, both the point cloud and linguistic data are encoded and fed into a cross-attention module for fusion. The training can be mainly divided into the pre-training stage and the fine-tuning stage. In the pre-training stage, 3DVLP utilizes visual grounding as the proxy task and employs the Object-level IoU-guided Detection loss for high-quality object detection.
Additionally, 3DVLP is pre-trained on the other designed proxy tasks, including Object-level Cross-Contrastive alignment and Object-level Self-Contrastive learning. In the fine-tuning stage, we transfer the backbone to downstream tasks with task-specific heads.

Object-level IoU-guided Detection Loss

We consider visual grounding as the proxy task since it shares the same objective of obtaining high-quality proposals. Additionally, we propose the Object-level IoU-guided Detection loss to enhance the performance of the object detector, as demonstrated in Fig. 4a. Specifically, we introduce the Distance IoU (DIoU) loss (Zheng et al. 2020) for bounding box regression. Given the predicted proposal $b_p$ and ground truth $b_{gt}$, we calculate the IoU between them and have the following regression loss:

$$\mathcal{L}_{\mathrm{DIoU}}(b_p, b_{gt}) = 1 - \mathrm{IoU} + \frac{\rho^2(b_p, b_{gt})}{c^2}, \qquad (1)$$

where $c$ is the diagonal length of the smallest enclosing box covering the two boxes and $\rho^2(\cdot,\cdot)$ is the squared Euclidean distance. However, previous approaches (Zhao et al. 2021; Cai et al. 2022) treat the matching stage as a classification problem and use the proposal with the highest IoU as a supervised label to train the fusion module. In this case, the DIoU loss can only be applied to a single proposal, which weakens its effect on optimization. Additionally, due to the large number of proposals generated by the detector, there can be multiple boxes pointing to the target object, and these boxes may share similar semantic information, making it difficult to achieve accurate matching with a one-hot label. We take inspiration from label smoothing (Müller, Kornblith, and Hinton 2019) and address such matching problems by introducing an IoU filter. As shown in Fig.
3, given a pre-defined IoU threshold $\delta$ and a weight factor $\varepsilon$, positive proposals are filtered according to their IoU with the ground truth, and weights are assigned to them based on their total count, denoted by $K$.

Figure 3: Illustration of the IoU filter in 3DVLP. To apply label smoothing and the contrastive losses at the object level, proposals with IoU higher than a threshold $\delta$ are considered positive samples while the others are regarded as negative ones.

The weight of proposal $p$ in the soft label is shown in Eq. (2):

$$y_p = \begin{cases} 1 - \varepsilon & \text{if } \mathrm{IoU}_p = \mathrm{IoU}_{\max} \\ \frac{\varepsilon}{K} & \text{if } \mathrm{IoU}_p \geq \delta \text{ and } \mathrm{IoU}_p \neq \mathrm{IoU}_{\max} \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$

We further combine the DIoU loss and label smoothing to obtain our OID loss, as demonstrated in Eq. (3):

$$\mathcal{L}_{\mathrm{OID}} = \sum_p y_p \cdot \mathcal{L}_{\mathrm{DIoU}}(b_p, b_{gt}). \qquad (3)$$

Usually, label smoothing in common classification tasks assigns weight to all negative classes and aims to prevent overfitting on the dataset. Completely different from this traditional goal, label smoothing in 3DVLP is motivated by the need to optimize multiple proposals that point toward the ground truth simultaneously. Thus, we only assign weights to the positive proposals obtained by the IoU filter during label smoothing. This distinct use of label smoothing enables faster convergence and better performance.

Object-level Cross-Contrastive Alignment

As a common practice (Zhao et al. 2021; Cai et al. 2022), a cross-modal attention module is applied for feature fusion between the language and point cloud embeddings. However, it is observed that the data distributions across modalities are not well-aligned, resulting in insufficient interaction between the embeddings of proposals and the language feature. To address this issue, contrastive learning can provide insights for embedding alignment across different distributions.
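The OID components above, Eqs. (1)-(3), can be sketched in a few lines. This is an illustrative reduction, not the paper's implementation: box geometry is replaced by precomputed IoUs, center distances, and enclosing-box diagonals, and K is taken as the number of non-maximum positives so that the soft-label weights sum to one (the paper leaves K's exact definition implicit):

```python
def soft_labels(ious, delta=0.25, eps=0.1):
    """Eq. (2): IoU-filtered label smoothing over proposals.
    The best-IoU proposal gets 1 - eps; every other positive
    (IoU >= delta) shares eps equally; negatives get 0.
    Assumption: K = number of non-maximum positives."""
    i_max = max(range(len(ious)), key=lambda i: ious[i])
    others = [i for i in range(len(ious)) if ious[i] >= delta and i != i_max]
    y = [0.0] * len(ious)
    y[i_max] = 1.0 - eps
    for i in others:
        y[i] = eps / len(others)
    return y

def oid_loss(ious, center_dists, diag_lens, delta=0.25, eps=0.1):
    """Eq. (3): soft-label-weighted sum of per-proposal DIoU terms,
    with Eq. (1)'s DIoU = 1 - IoU + rho^2 / c^2 computed from
    precomputed center distances rho and enclosing-box diagonals c."""
    y = soft_labels(ious, delta, eps)
    return sum(w * (1.0 - iou + (d * d) / (c * c))
               for w, iou, d, c in zip(y, ious, center_dists, diag_lens))
```

In contrast to the one-hot setup, every proposal that clears the IoU filter contributes a weighted DIoU gradient, which is what allows the loss to pull several near-duplicate boxes toward the ground truth at once.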
However, a naive implementation over proposals is not effective, since multiple proposals pointing at the target object might contain semantically similar information, thereby conflicting with the optimization objective of the contrastive loss. Based on these observations, we reconsider contrastive learning at the object level and introduce the Object-level Cross-Contrastive alignment (OCC) task to enhance the performance of the cross fusion module, as shown in Fig. 4b. The OCC task is proposed to align the distribution of cross-modal data. Specifically, in the training stage, we introduce the target detection boxes of real objects and select all predicted boxes with IoU greater than a pre-defined threshold as positive samples, since they semantically point to the target object and should have similar features. The remaining predicted boxes are considered negative samples, representing the proposals of other objects or the background. We then align the features of positive samples with the language embedding and push the features of negative samples away with the contrastive loss to achieve better cross-modal understanding. Formally, we have the following contrastive loss, which serves as the loss function for our OCC task:

$$\mathcal{L}_{\mathrm{OCC}} = -\frac{1}{2}\,\mathbb{E}_{(b_{gt},T)\sim D}\Big[\log \frac{\sum_{p\in P_{\mathrm{pos}}} \exp(s(H_p, T))}{\sum_{\hat{p}\in P_{\mathrm{pos}}\cup P_{\mathrm{neg}}} \exp(s(H_{\hat{p}}, T))} + \log \frac{\sum_{p\in P_{\mathrm{pos}}} \exp(s(T, H_p))}{\sum_{\hat{p}\in P_{\mathrm{pos}}\cup P_{\mathrm{neg}}} \exp(s(T, H_{\hat{p}}))}\Big], \qquad (4)$$

where $H_p$ represents the embedding of proposal $p$, and $T$ denotes the language embedding. Given $\mathbb{I}$ as the indicator function, $\mathrm{IoU}(\cdot,\cdot)$ as the IoU score between two boxes, and $\delta$ as the IoU threshold, we have $P_{\mathrm{pos}} = \{p \mid \mathrm{IoU}(b_p, b_{gt}) \geq \delta\}$ as the set of positive proposals and $P_{\mathrm{neg}} = \{p \mid \mathrm{IoU}(b_p, b_{gt}) < \delta\}$ as the set of negative ones. $s(\cdot,\cdot)$ is a similarity score function measuring the similarity between two types of features, e.g., a dot product. Note that the threshold $\delta$ determines how close positive samples should be to align with the language embedding.
Specifically, when $\delta = \mathrm{IoU}_{\max}$, Eq. (4) only considers the proposal with the highest IoU as the positive sample and reverts to the formula of the traditional pairwise contrastive loss. By incorporating the IoU filter, we utilize the large number of proposals generated by the object detector to calculate the contrastive loss. The positive proposals selected through the IoU filter can be regarded as diverse data augmentations of the ground truth. Therefore, the sufficient information contained in these positive proposals aids the model in better extracting the intrinsic characteristics of objects for alignment with the textual embedding. Such meaningful information fulfills the substantial data sample requirements of contrastive learning, significantly boosting the generalization and robustness of the cross-modal fusion module in 3DVLP.

Object-level Self-Contrastive Learning

In semantic 3D scene understanding, the presence of similar objects in the scene can significantly affect the matching performance of the model. Therefore, a well-designed pre-training model should be capable of accurately distinguishing between objects in the scene and understanding what makes them similar or different. To address this issue, we utilize a self-contrastive loss that incentivizes the model to capture features that differentiate objects. Similarly, we require an object-level self-contrastive loss instead of the pairwise loss to effectively differentiate between objects and improve the model's semantic understanding of the scene.
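Eq. (4) amounts to a symmetric InfoNCE-style objective in which the whole set of IoU-filtered positives is pulled toward the sentence embedding. A minimal sketch for one (scene, sentence) pair, assuming plain dot-product similarity and precomputed embeddings (both assumptions for illustration):

```python
import math

def occ_loss(props, text, pos_idx):
    """Eq. (4): object-level cross-contrastive loss for one pair.
    props: proposal embeddings H_p, text: sentence embedding T,
    pos_idx: indices of proposals passing the IoU filter (IoU >= delta).
    s(.,.) is a dot product here; since that makes the two directions
    of Eq. (4) identical, the 1/2-averaged sum reduces to a single
    -log(positive mass / total mass) term."""
    s = lambda u, v: sum(a * b for a, b in zip(u, v))
    exps = [math.exp(s(h, text)) for h in props]
    pos_mass = sum(exps[i] for i in pos_idx)
    return -math.log(pos_mass / sum(exps))
```

The OSC loss of the next subsection has the same structure, with the sentence embedding replaced by the proposal embeddings themselves, so the same skeleton applies with `text` swapped for another proposal.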
Figure 4: Illustration of the Object-level IoU-guided Detection (OID) loss, the Object-level Cross-Contrastive alignment (OCC) task, and the Object-level Self-Contrastive learning (OSC) task. All three modules utilize an IoU filter to select positive proposals.

Therefore, we introduce the Object-level Self-Contrastive learning (OSC) task for object detection, as shown in Fig. 4c. The OSC task is proposed for unimodal point cloud data and aims to optimize the embeddings generated by the point cloud encoder. Following the idea of the OCC task, we utilize the IoU threshold to select positive and negative samples for self-contrastive learning. By optimizing the self-contrastive loss, 3DVLP encourages the features of the proposals targeting the ground-truth object to be as dissimilar as possible from those of the other proposals, thereby enabling the fusion module to distinguish different objects easily. Following Eq. (4), we replace the language embedding with the embeddings of proposals to obtain the corresponding contrastive loss for the OSC module, as shown in Eq. (5):

$$\mathcal{L}_{\mathrm{OSC}} = -\mathbb{E}_{b_{gt}\sim D}\Big[\log \frac{\sum_{p,\hat{p}\in P_{\mathrm{pos}}} \exp(s(H_p, H_{\hat{p}}))}{\sum_{p,\hat{p}\in P_{\mathrm{pos}}\cup P_{\mathrm{neg}}} \exp(s(H_p, H_{\hat{p}}))}\Big]. \qquad (5)$$

The sufficient information within the multiple positive proposals, complemented by their contrast against the negative proposals, helps the model more effectively discern the similarities and differences between objects in the scene. Consequently, the OSC task at the object level enhances the performance of the object detector for downstream tasks.
Experiment

Datasets and Implementation Details

Visual Grounding Dataset: We select the benchmark dataset ScanRefer (Chen, Chang, and Nießner 2020) for the visual grounding task. It consists of 800 3D scenes from the ScanNet dataset (Dai et al. 2017), each annotated with object bounding boxes and corresponding text descriptions. To evaluate our results, we employ two evaluation metrics, IoU@0.25 and IoU@0.5, which measure the percentage of proposals whose IoU with the ground truth is greater than the threshold.

Dense Captioning Dataset: We conduct experiments on the Scan2Cap dataset (Chen et al. 2021b) for the dense captioning task. We jointly measure the quality of the generated captions with captioning metrics including CIDEr (Vedantam, Lawrence Zitnick, and Parikh 2015), BLEU-4 (Papineni et al. 2002), METEOR (Banerjee and Lavie 2005) and ROUGE (Lin 2004), cited as C, B-4, M and R, respectively. We combine the metrics above with an IoU threshold and adopt the m@kIoU metric:

$$m@k\mathrm{IoU} = \frac{1}{N}\sum_{i=1}^{N} m_i \cdot \mathbb{I}(\mathrm{IoU} \geq k),$$

where $m$ represents the captioning metric, $k$ is the IoU threshold and $\mathbb{I}$ stands for the indicator function.

Question Answering Dataset: We perform the question answering task on the ScanQA dataset (Azuma et al. 2022), which consists of 41,363 questions and 32,337 unique answers over 800 scenes derived from ScanNet. Following the evaluation in ScanQA, EM@1 and EM@10 are used as the evaluation metrics, where EM@K is the percentage of predictions whose top-K predicted answers exactly match any of the ground-truth answers.

Implementation Details: We first train 3DVLP on the proposed proxy tasks, including visual grounding, OCC and OSC, in the pre-training stage for 200 epochs. We then evaluate our method on the dense captioning and question answering tasks by fine-tuning with task-specific losses. Importantly, we use VoteNet (Qi et al. 2019) as our point cloud encoder and a frozen BERT (Devlin et al. 2018) as the language encoder to avoid over-fitting on short sentences.
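The m@kIoU metric defined above simply gates each per-box captioning score by an IoU indicator before averaging; a sketch, with the per-box scores (e.g., CIDEr) and IoUs assumed precomputed:

```python
def m_at_k_iou(scores, ious, k=0.5):
    """m@kIoU = (1/N) * sum_i m_i * I(IoU_i >= k): average a
    per-box captioning score m_i over all N ground-truth boxes,
    zeroing out boxes whose predicted bounding box has an IoU
    with the ground truth below the threshold k."""
    n = len(scores)
    return sum(m for m, iou in zip(scores, ious) if iou >= k) / n
```

Note that the denominator is the number of ground-truth boxes, so a poorly localized box lowers the metric even if its caption is perfect.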
For the grounding task, we model it as a classification problem and use a 3-layer MLP as the head. For the captioning task, we use a Transformer decoder with 6 layers and a hidden size of 128. For the QA task, a lightweight MLP is adopted to predict the score for each answer, and the answer with the highest score is selected as the final answer. More details of the downstream heads can be found in the appendix. We set the batch size to 8, and the initial learning rate is set to 0.002 for the detector and 5e-4 for the other modules in 3DVLP. Code is implemented in PyTorch and runs on an NVIDIA 3090 GPU.

Comparison with State-of-the-art Methods

3D Visual Grounding Task

We present the results of 3D visual grounding in Table 1. The "3D" models only utilize raw point cloud attributes as input features, while "2D+3D" models use 2D multi-view features as additional inputs. Note that the results of BUTD-DETR (Jain et al. 2022) are re-evaluated by removing the GT object labels in the text queries for a fair comparison, as provided by UniT3D (Chen et al. 2022a).

Method                                  Data      Unique          Multiple        Overall
                                                  @0.25   @0.5    @0.25   @0.5    @0.25   @0.5
InstanceRefer (Yuan et al. 2021)        3D        77.45   66.83   31.27   24.77   40.23   32.93
3DVG-Transformer (Zhao et al. 2021)     3D        77.16   58.47   38.38   28.70   45.90   34.47
3DJCG (Cai et al. 2022)                 3D        78.75   61.30   40.13   30.08   47.62   36.14
3D-SPS (Luo et al. 2022)                3D        81.63   64.77   39.48   29.61   47.65   36.43
BUTD-DETR (Jain et al. 2022)            3D        82.77   63.81   44.01   33.51   49.69   38.01
3DLP (Jin et al. 2023)                  3D        79.35   62.60   42.54   32.18   49.68   38.08
3DVG-Transformer (Zhao et al. 2021)     2D + 3D   81.93   60.64   39.30   28.42   47.57   34.67
Multi-View Trans (Huang et al. 2022a)   2D + 3D   77.67   66.45   31.92   25.26   40.80   33.26
3D-SPS (Luo et al. 2022)                2D + 3D   84.12   66.72   40.32   29.82   48.82   36.98
3DJCG (Cai et al. 2022)                 2D + 3D   83.47   64.34   41.39   30.82   49.56   37.33
D3Net (Chen et al. 2021a)               2D + 3D   -       70.35   -       30.05   -       37.87
3DLP (Jin et al. 2023)                  2D + 3D   84.23   64.61   43.51   33.41   51.41   39.46
3DVLP                                   2D + 3D   85.18   70.04   43.65   33.40   51.70   40.51

Table 1: Comparison of different methods in the 3D visual grounding task. We measure the percentage of correctly predicted bounding boxes whose IoU with the ground-truth boxes is larger than 0.25 and 0.5, respectively.

Method                       C@0.25   B-4@0.25   M@0.25   R@0.25   C@0.5   B-4@0.5   M@0.5   R@0.5
MORE (Jiao et al. 2022)      62.91    36.25      26.75    56.33    40.94   22.93     21.66   44.42
SpaCap3D (Wang et al. 2022)  63.30    36.46      26.71    55.71    44.02   25.26     22.33   45.36
3DJCG (Cai et al. 2022)      64.70    40.17      27.66    59.23    49.48   31.03     24.22   50.80
D3Net (Chen et al. 2021a)    -        -          -        -        46.07   30.29     24.35   51.67
3DLP (Jin et al. 2023)       70.73    41.03      28.14    59.72    54.94   32.31     24.83   51.51
3DVLP                        66.63    40.85      36.12    61.03    54.41   34.10     34.34   54.28

Table 2: Comparison of different methods in the 3D dense captioning task. We report the results as the percentage of predicted bounding boxes whose IoU with the ground truth is greater than 0.25 and 0.5.

Method      EM@1    EM@10   Acc@0.25   Acc@0.5
ScanQA      21.05   51.23   24.96      15.42
FE-3DGQA    22.26   54.51   26.62      18.83
3DLP        21.65   50.46   -          -
3DVLP       24.03   57.91   33.38      26.12

Table 3: Comparison of different methods in the 3D question answering task. The results are presented as the percentage of predictions where the top K predicted answers exactly match any of the ground-truth answers. We also report Acc@0.25 and Acc@0.5, similar to visual grounding.

The results indicate that 3DVLP performs remarkably well and outperforms the baselines by a large margin. In terms of unique scenes, 3DVLP achieves the highest accuracy in Acc@0.5 and ranks second in Acc@0.25, indicating the significant impact of our OID loss in developing the model's ability to identify high-quality bounding boxes. Furthermore, when comparing the multiple and unique metrics, previous works suffer from issues related to the presence of similar objects in the scene, leading to poor matching results.
However, the introduction of the OSC and OCC tasks in 3DVLP enables it to achieve competitive performance on the multiple metrics, showcasing its ability to accurately locate objects in complex scenes. In the overall metric, 3DVLP's performance surpasses the baseline by 0.29% in Acc@0.5 and 1.15% in Acc@0.25.

3D Dense Captioning Task

As presented in Table 2, it is evident that 3DVLP shows excellent transfer performance in the dense captioning task. Importantly, the point cloud encoder in 3DVLP extracts universal features that generalize well to dense captioning, enabling 3DVLP to outperform other baselines. Specifically, 3DVLP achieves an improvement of 7.98% and 1.31% in terms of M@0.25 and R@0.5, respectively. Moreover, the results show that 3DVLP outperforms the second-best baseline by 8.46% in M@0.25 and 9.51% in M@0.5. Among the various evaluation metrics, METEOR focuses on capturing the semantic similarity and fluency between the output and the ground truth, thereby indicating the generalization ability of 3DVLP.

OID   OCC   OSC   Visual Grounding       Dense Captioning
                  Acc@0.25   Acc@0.5     C@0.5   B-4@0.5   M@0.5   R@0.5
 -     -     -    50.59      37.96       53.12   31.90     33.93   52.27
 ✓     -     -    50.46      39.49       52.91   33.91     34.28   54.08
 -     ✓     -    51.15      38.44       53.24   32.79     33.98   52.99
 -     -     ✓    50.91      38.28       51.41   32.93     34.00   52.94
 ✓     ✓     ✓    51.70      40.51       54.41   34.10     34.34   54.28

Table 4: Quantitative results of the overall accuracy in visual grounding and the metrics in dense captioning.

Figure 6: Qualitative results in visual grounding. We mark the ground truth in blue, 3D-SPS in red and 3DVLP in green. Example queries: (a) "There is a tall chair pulled up to the table in the room. It is the second from the right." (b) "This is a brown ottoman in front of a brown sofa. The ottoman has a black backpack, and a black duffel bag, and a box of tissues."

3D Question Answering Task

Based on the results in Table 3, 3DVLP consistently outperforms other methods in the question answering task. For example, 3DVLP achieves approximately 1.7%-2.4% improvement in EM@1 and EM@10 compared to the baseline. Moreover, question answering benefits from the pre-training model when compared to ScanQA, as 3DVLP utilizes the same classification head. Furthermore, 3DVLP provides a boost of 6.76% and 7.23% in Acc@0.25 and Acc@0.5, respectively.

Ablation Study

Do the OID loss and the designed proxy tasks benefit downstream tasks? We investigate the contribution of each module in 3DVLP, and the results in Table 4 demonstrate that both the visual grounding and dense captioning tasks benefit from each proposed module. In visual grounding, the OID loss significantly improves the quality of the proposals, thereby enhancing Acc@0.5 to a large degree. Furthermore, the introduction of either OSC or OCC provides a boost in Acc@0.25, indicating the benefit of optimization at the object level in complex scenes. In dense captioning, the improvement of the model is consistent with that in visual grounding when combining the modules together.

Figure 5: Comparison of the performance when using different thresholds in the IoU filter. We also compare a variant of 3DVLP with only the OID loss, referred to as 3DVLP-oid.

Is the improvement in OSC and OCC sensitive to the threshold used in the IoU filter? To better understand the threshold δ used in the IoU filter, we report the overall accuracy in visual grounding with varying δ. Moreover, we include 3DVLP with only the OID loss as a base variant, referred to as 3DVLP-oid. As shown in Fig. 5, the performance improves when increasing the threshold from 0.1 to 0.25.
This is because proposals targeting other objects can be incorrectly considered positive samples and thus mislead the training optimization when using a low threshold. However, as we further increase the threshold, we observe that the improvement is not consistent. The performance drops with a large threshold since the model will regard proposals that are not good enough as negative samples, resulting in semantic divergence. This is similar to what happens with the traditional pairwise contrastive loss. Therefore, based on our results, we believe that selecting a threshold of 0.25 in the IoU filter is a reasonable tradeoff.

Qualitative Results

To further explore how 3DVLP improves performance in visual grounding, we provide comparison results with 3D-SPS, as shown in Figure 6. These examples demonstrate that 3DVLP has a better understanding of the relationship between scene and language as a result of incorporating OSC and OCC, leading to better performance.

Conclusion

In this paper, we investigate the shared nature across different tasks in semantic 3D scene understanding and propose 3DVLP, a contrastive 3D vision-language pre-training framework. 3DVLP introduces the object-level IoU-guided detection loss to obtain high-quality proposals, aligns the point cloud representation and language representation by training over the object-level cross-contrastive alignment task, and develops its ability to distinguish different objects in the scene through the object-level self-contrastive learning task. Comprehensive experiments reveal the generalization ability and superiority of 3DVLP over all downstream tasks in semantic 3D scene understanding, leading to new state-of-the-art performance. Future work should focus on the fusion of point cloud and language, ideally with full interaction of multi-level information.
Acknowledgments This work is supported in part by the National Key Research and Development Program of China, under Grant No. 2023YFF0905502, National Natural Science Foundation of China, under Grant (62302309, 621712488), Shenzhen Science and Technology Program (Grant No. RCYX20200714114523079, JCYJ20220818101014030, JCYJ20220818101012025), and the PCNL KEY project (PCL2023AS6-1), and Tencent "Rhinoceros Birds" - Scientific Research Foundation for Young Teachers of Shenzhen University. References Achlioptas, P.; Abdelreheem, A.; Xia, F.; Elhoseiny, M.; and Guibas, L. 2020. Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, 422–440. Springer. Afham, M.; Dissanayake, I.; Dissanayake, D.; Dharmasiri, A.; Thilakarathna, K.; and Rodrigo, R. 2022. Crosspoint: Self-supervised cross-modal contrastive learning for 3d point cloud understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9902–9912. Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; et al. 2022. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35: 23716–23736. Azuma, D.; Miyanishi, T.; Kurita, S.; and Kawanabe, M. 2022. ScanQA: 3D question answering for spatial scene understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19129–19139. Banerjee, S.; and Lavie, A. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, 65–72. Cai, D.; Zhao, L.; Zhang, J.; Sheng, L.; and Xu, D. 2022.
3djcg: A unified framework for joint dense captioning and visual grounding on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16464–16473. Chen, D. Z.; Chang, A. X.; and Nießner, M. 2020. Scanrefer: 3d object localization in rgb-d scans using natural language. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XX, 202–221. Springer. Chen, D. Z.; Hu, R.; Chen, X.; Nießner, M.; and Chang, A. X. 2022a. UniT3D: A Unified Transformer for 3D Dense Captioning and Visual Grounding. arXiv preprint arXiv:2212.00836. Chen, D. Z.; Wu, Q.; Nießner, M.; and Chang, A. X. 2021a. D3Net: a speaker-listener architecture for semi-supervised dense captioning and visual grounding in RGB-D scans. arXiv preprint arXiv:2112.01551. Chen, J.; Luo, W.; Wei, X.; Ma, L.; and Zhang, W. 2022b. HAM: Hierarchical Attention Model with High Performance for 3D Visual Grounding. arXiv preprint arXiv:2210.12513. Chen, Y.-C.; Li, L.; Yu, L.; El Kholy, A.; Ahmed, F.; Gan, Z.; Cheng, Y.; and Liu, J. 2020. Uniter: Universal image-text representation learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXX, 104–120. Springer. Chen, Z.; Gholami, A.; Nießner, M.; and Chang, A. X. 2021b. Scan2cap: Context-aware dense captioning in rgbd scans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3193–3203. Dai, A.; Chang, A. X.; Savva, M.; Halber, M.; Funkhouser, T.; and Nießner, M. 2017. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5828–5839. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; and Bennamoun, M. 2020. 
Deep learning for 3d point clouds: A survey. IEEE transactions on pattern analysis and machine intelligence, 43(12): 4338–4364. Huang, S.; Chen, Y.; Jia, J.; and Wang, L. 2022a. Multi-view transformer for 3d visual grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15524–15533. Huang, T.; Dong, B.; Yang, Y.; Huang, X.; Lau, R. W.; Ouyang, W.; and Zuo, W. 2022b. Clip2point: Transfer clip to point cloud classification with image-depth pre-training. arXiv preprint arXiv:2210.01055. Jain, A.; Gkanatsios, N.; Mediratta, I.; and Fragkiadaki, K. 2022. Bottom up top down detection transformers for language grounding in images and point clouds. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXVI, 417–433. Springer. Jiao, Y.; Chen, S.; Jie, Z.; Chen, J.; Ma, L.; and Jiang, Y.-G. 2022. More: Multi-order relation mining for dense captioning in 3d scenes. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXV, 528–545. Springer. Jin, Z.; Hayat, M.; Yang, Y.; Guo, Y.; and Lei, Y. 2023. Context-aware Alignment and Mutual Masking for 3D-Language Pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10984–10994. Li, J.; Selvaraju, R.; Gotmare, A.; Joty, S.; Xiong, C.; and Hoi, S. C. H. 2021. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34: 9694–9705. Lin, C.-Y. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, 74–81. Luo, J.; Fu, J.; Kong, X.; Gao, C.; Ren, H.; Shen, H.; Xia, H.; and Liu, S. 2022. 3d-sps: Single-stage 3d visual grounding via referred point progressive selection.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16454–16463. Müller, R.; Kornblith, S.; and Hinton, G. E. 2019. When does label smoothing help? Advances in neural information processing systems, 32. Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, 311–318. Qi, C. R.; Litany, O.; He, K.; and Guibas, L. J. 2019. Deep hough voting for 3d object detection in point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9277–9286. Tan, H.; and Bansal, M. 2019. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490. Vedantam, R.; Lawrence Zitnick, C.; and Parikh, D. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4566–4575. Wang, H.; Zhang, C.; Yu, J.; and Cai, W. 2022. Spatiality-guided transformer for 3d dense captioning on point clouds. arXiv preprint arXiv:2204.10688. Yuan, Z.; Yan, X.; Liao, Y.; Zhang, R.; Wang, S.; Li, Z.; and Cui, S. 2021. Instancerefer: Cooperative holistic understanding for visual grounding on point clouds through instance multi-level contextual referring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1791–1800. Zha, Y.; Wang, J.; Dai, T.; Chen, B.; Wang, Z.; and Xia, S.-T. 2023. Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models. arXiv preprint arXiv:2304.07221. Zhai, X.; Wang, X.; Mustafa, B.; Steiner, A.; Keysers, D.; Kolesnikov, A.; and Beyer, L. 2022. Lit: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18123–18133. Zhang, R.; Guo, Z.; Zhang, W.; Li, K.; Miao, X.; Cui, B.; Qiao, Y.; Gao, P.; and Li, H. 2022.
Pointclip: Point cloud understanding by clip. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8552–8562. Zhao, L.; Cai, D.; Sheng, L.; and Xu, D. 2021. 3DVG-Transformer: Relation modeling for visual grounding on point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2928–2937. Zhao, L.; Cai, D.; Zhang, J.; Sheng, L.; Xu, D.; Zheng, R.; Zhao, Y.; Wang, L.; and Fan, X. 2022. Towards Explainable 3D Grounded Visual Question Answering: A New Benchmark and Strong Baseline. IEEE Transactions on Circuits and Systems for Video Technology. Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; and Ren, D. 2020. Distance-IoU loss: Faster and better learning for bounding box regression. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 12993–13000. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159. Zhu, X.; Zhang, R.; He, B.; Zeng, Z.; Zhang, S.; and Gao, P. 2022. Pointclip v2: Adapting clip for powerful 3d open-world learning. arXiv preprint arXiv:2211.11682.
Transformer-Based Selective Super-resolution for Efficient Image Refinement

Tianyi Zhang1, Kishore Kasichainula2, Yaoxin Zhuo2, Baoxin Li2, Jae-Sun Seo3, Yu Cao1
1University of Minnesota 2Arizona State University 3Cornell Tech
zhan9167@umn.edu, {kkasicha, yzhuo6, baoxin.li}@asu.edu, js3528@cornell.edu, yucao@umn.edu

Abstract

Conventional super-resolution methods suffer from two drawbacks: substantial computational cost in upscaling an entire large image, and the introduction of extraneous or potentially detrimental information for downstream computer vision tasks during the refinement of the background. To solve these issues, we propose a novel transformer-based algorithm, Selective Super-Resolution (SSR), which partitions images into non-overlapping tiles, selects tiles of interest at various scales with a pyramid architecture, and exclusively reconstructs these selected tiles with deep features. Experimental results on three datasets demonstrate the efficiency and robust performance of our approach for super-resolution. Compared to the state-of-the-art methods, the FID score is reduced from 26.78 to 10.41 with a 40% reduction in computation cost for the BDD100K dataset.

Introduction

Super-resolution (SR) is a fundamental task aimed at enhancing image resolution by producing intricate details from low-resolution (LR) images. It supplies high-resolution (HR) images that are pivotal for downstream computer vision tasks, such as object detection and image classification, with wide-ranging applications in the real world. For instance, in the context of autonomous driving, higher-resolution images facilitate more precise and earlier object detection, particularly for diminutive objects. Although various super-resolution methods based on convolutional neural networks (CNNs) have been proposed that enhance high-frequency information through low-resolution image reconstruction, their efficacy is impeded by a lack of long-range dependency integration.
Recently, leveraging transformer-based architectures to capture extended contextual information, pioneering efforts like SwinIR (Liang et al. 2021) and HAT (Chen et al. 2023) have achieved notable advancements in super-resolution. Nevertheless, two key issues persist with these algorithms. Firstly, due to the substantial scale of transformer-based networks, the computational demand becomes exceedingly high when reconstructing entire images, particularly when input low-resolution image sizes are not small, such as 256×256. Secondly, even in state-of-the-art super-resolution approaches, the refined images fail to match the performance of ground truth HR images on typical downstream tasks. To delve into the cause of this degradation, we conduct a comparison between features generated by the Inception model from refined images and from original HR images. This analysis unveils that features derived from background pixels exhibit markedly different details compared to the ground truth feature map, as depicted in Figure 1. This divergence suggests that overemphasizing background pixel details can introduce erroneous information and impede feature generation for downstream tasks.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: The reconstruction of high-frequency background information in conventional SR methods (e.g., HAT) often results in discrepancies in features compared to ground truth high-resolution (HR) images. SSR effectively resolves this issue by exclusively enhancing foreground pixels.

This paper introduces a novel algorithm, Selective Super-Resolution (SSR), designed to address these challenges. Specifically, leveraging object location information, we partition images into non-overlapping tiles and employ a cost-efficient transformer-based network for tile selection. To ensure comprehensive coverage of objects, a pyramid structure is devised for tile selection across multiple scales. In the final layer of this selection network, we integrate a Gumbel-Softmax layer (Jang, Gu, and Poole 2016) to make hard decisions, subsequently directing positive tiles to an ensuing super-resolution (SR) module. This module comprises a convolution layer for shallow feature extraction, transformer blocks for deep feature extraction, and an image reconstruction block. In contrast, negative tiles are reconstructed directly from shallow features. This framework not only reduces computation by extracting deep features solely for positive tiles but also enhances image generation by avoiding excessive background detail addition. To validate the robustness of SSR, alongside common evaluation metrics such as Structural Similarity Index Measure (SSIM), Fréchet Inception Distance (FID), and Kernel Inception Distance (KID), we introduce a novel metric, inspired by KID and tailored to evaluate features from an object detection (OD) model, YOLO, called Kernel YOLO Distance (KYD). Our approach is experimented on three distinct image datasets, BDD100K, MSRA10K, and COCO2017, demonstrating both substantial computational reduction and image generation improvement. To summarize, our key contributions are as follows:
• We design a low-cost Tile Selection module, employing transformer blocks, to effectively extract desired object-containing tiles from images. The introduction of a pyramid structure ensures accurate selection of positive tiles.
• By seamlessly integrating two transformer-based modules, Tile Selection (TS) and Tile Refinement (TR), our SSR efficiently reconstructs the input images by exclusively adding high-frequency information for object-containing tiles, effectively mitigating computational costs and enhancing visual quality.
• Through comprehensive experiments on three diverse image datasets and various evaluation metrics, we showcase SSR's robust performance; specifically, we lower the FID from 26.78 to 10.41 for BDD100K, accompanied by a 40% reduction in computation cost.

Related Work

Super-Resolution

In this context, our primary focus revolves around single image super-resolution (SISR) techniques, without delving into methodologies that reconstruct high-resolution images using multiple input images. Previously, the SRCNN model (Dong et al. 2014), based on convolutional neural networks (CNNs), inspired many works in SR (Tai, Yang, and Liu 2017; Niu et al. 2020; Mei, Fan, and Zhou 2021). This seminal approach employed bicubic interpolation and trained a three-layer CNN for SR, achieving remarkable success. A diverse range of CNN-based methodologies has emerged to map low-resolution images to their high-resolution counterparts, leveraging distinct block designs, such as the incorporation of residual blocks (Lim et al. 2017). Moreover, harnessing the unprecedented image generation capabilities of generative adversarial networks (GANs) (Goodfellow et al. 2020), certain studies have notably advanced the quality of generated images, such as SRGAN (Ledig et al. 2017; Wang et al. 2021, 2018; Zhang et al. 2019), which introduces adversarial learning to improve perceptual quality. Recent strides have been witnessed in the adoption of the attention mechanism, which excels in capturing long-range dependencies. This mechanism has lately been adopted to further enhance SISR methodologies (Liang et al. 2021; Chen et al. 2023).

Transformer in Computer Vision

Given the remarkable success of transformers in natural language processing (NLP) (Vaswani et al. 2017), this architectural paradigm is progressively permeating diverse computer vision tasks (Chu et al. 2021; Huang et al. 2021; Dong et al. 2022; He et al. 2022; Zhang et al. 2023a,b).
For instance, Vision Transformer (ViT) divides input images into 16 × 16 patches, which are subsequently treated as tokens for the application of the attention mechanism (Dosovitskiy et al. 2020). For object detection (OD), DETR conceptualizes it as a direct set prediction problem and crafts a transformer-based network (Carion et al. 2020). DINO introduces self-supervised learning to propose a novel network rooted in the ViT architecture (Caron et al. 2021). Swin Transformer integrates window-based self-attention and shifted window-based self-attention mechanisms to reduce the computation cost by limiting the computation inside small windows (Liu et al. 2021, 2022). By capturing long-range dependencies and facilitating enhanced visual representation, transformer-based networks have exhibited superiority in various domains, including super-resolution (Liang et al. 2021; Chen et al. 2023).

Methodology

The overall architecture of our approach is illustrated in Figure 2. The Selective Super-Resolution (SSR) network comprises two fundamental modules: Tile Selection (TS) and Tile Refinement (TR). Both modules are transformer-based networks, with TS being notably smaller in scale. Upon receiving an input image, the TS module initiates the process by partitioning it into non-overlapping tiles and then selects pertinent tiles based on object location cues. The tiles containing objects are directed through a computationally intensive block for intricate refinement, while the remaining tiles traverse a cost-efficient block for straightforward upscaling. This section is dedicated to a comprehensive discussion of all modules, elucidating their specifics. We outline the precise algorithm in Algorithm 1.

Tile Selection Module

By splitting the input images into 4×4 non-overlapping tiles and utilizing the location information of objects in these images, this module classifies each tile by whether it contains objects or not.
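The 4×4 tiling described above is straightforward to sketch. The plain-Python helper below (our own illustrative code, not the paper's implementation) partitions an image into non-overlapping tiles in row-major order, assuming the image dimensions are divisible by the grid size:

```python
def split_into_tiles(image, grid=4):
    """Split an H x W image (nested lists) into grid x grid
    non-overlapping tiles, returned in row-major order.

    Assumes H and W are divisible by `grid` (a simplification for
    this sketch; real inputs would be padded first).
    """
    h, w = len(image), len(image[0])
    th, tw = h // grid, w // grid  # tile height and width
    tiles = []
    for ty in range(grid):
        for tx in range(grid):
            tile = [row[tx * tw:(tx + 1) * tw]
                    for row in image[ty * th:(ty + 1) * th]]
            tiles.append(tile)
    return tiles
```

Each of the 16 resulting tiles is then labeled positive or negative depending on whether it overlaps an annotated object, which is the classification target of the TS module.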
Specifically, as shown in Figure 2, this module consists of an image encoder and several tile classifiers. First, the embedding layer and the first transformer layer of the encoder embed the low-resolution (LR) input image $I_{LR} \in \mathbb{R}^{H_{LR} \times W_{LR} \times C_{LR}}$ ($H_{LR}$, $W_{LR}$, and $C_{LR}$ are the input LR image height, width and number of channels) into representations $r_0 \in \mathbb{R}^{\frac{H_{LR}}{p} \times \frac{W_{LR}}{p} \times C}$, where $p$ is the patch size of each token and $C$ is the number of channels for each token. After that, three transformer layers, $TL_1$, $TL_2$, and $TL_3$, generate representations of these tokens at three different scales. The transformer layer adopts the structure of the Swin Transformer (Liu et al. 2021, 2022). Basically, with the incorporation of window-based self-attention and shifted window-based self-attention mechanisms, this layer can capture global contextual information with attention computation confined to small windows, which is much cheaper than the traditional attention mechanism. Besides, with the feature merging layer, we can obtain features at various scales to enable the implementation of the pyramid structure. The whole process is as follows:

$$r_1 = TL_1(r_0), \quad r_2 = TL_2(r_1), \quad r_3 = TL_3(r_2) \quad (1)$$

where $r_1 \in \mathbb{R}^{\frac{H_{LR}}{2p} \times \frac{W_{LR}}{2p} \times 2C}$, $r_2 \in \mathbb{R}^{\frac{H_{LR}}{4p} \times \frac{W_{LR}}{4p} \times 4C}$, and $r_3 \in \mathbb{R}^{\frac{H_{LR}}{8p} \times \frac{W_{LR}}{8p} \times 8C}$ are the three generated representations for classification. Each token of these representations corresponds to tiles of size $2p \times 2p$, $4p \times 4p$, and $8p \times 8p$, respectively.

Figure 2: Overall architecture of Selective Super-Resolution (SSR). Input low-resolution (LR) images are split into non-overlapping tiles and classified into two classes by the Tile Selection (TS) module. Only positive tiles containing objects are intricately reconstructed by the Positive Tile Refinement (PTR) path with deep features. The specific Residual Transformer Block (RTB) in the feature extraction module will be introduced in detail later.
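To make the scale bookkeeping concrete, the small helper below (illustrative only, assuming the image size is divisible by 8p) reproduces the token-grid shapes of r1, r2 and r3: each transformer layer halves the spatial grid and doubles the channel count.

```python
def pyramid_shapes(h_lr, w_lr, p, c, levels=3):
    """Token-grid shapes r1..r_levels produced by the TS encoder.

    Starting from the (H_LR/p) x (W_LR/p) x C embedding r0, each
    transformer layer's feature-merging step halves the spatial grid
    and doubles the channel dimension.
    """
    h, w = h_lr // p, w_lr // p  # shape of r0
    shapes = []
    for _ in range(levels):
        h, w, c = h // 2, w // 2, c * 2
        shapes.append((h, w, c))
    return shapes
```

For example, with a 256×256 input, patch size p = 4 and C = 96 (assumed values, chosen only to illustrate the halving/doubling pattern), the three grids are 32×32×192, 16×16×384 and 8×8×768, matching the dimensions of r1, r2 and r3 given above.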
To classify these tiles, we adopt the cross-attention mechanism by introducing a learnable classification token, denoted as $c$. We obtain the query matrix by applying one linear layer to the image features $r_1$, $r_2$ or $r_3$ as $Q_i = r_i W_i^q$, $i \in \{1, 2, 3\}$, while computing the key and value matrices from the classification token as $K_i = c W_i^k$ and $V_i = c W_i^v$, $i \in \{1, 2, 3\}$. The attention computation is then expressed as follows:

$$A_i = \mathrm{softmax}\left(\frac{Q_i K_i^T}{\sqrt{d}}\right) V_i \quad \forall i \in \{1, 2, 3\} \quad (2)$$

Next, each of the three features undergoes individual processing through a multi-layer perceptron (MLP) and a Gumbel-Softmax layer. This step facilitates making definitive classifications for the tile classes, which is crucial for the subsequent refinement module to apply the corresponding network. The process is as follows:

$$s_i = \mathrm{GumbelSoftmax}(\mathrm{MLP}_i(A_i)) \quad \forall i \in \{1, 2, 3\} \quad (3)$$

where $\mathrm{MLP}_i$ denotes the output layer for the $i$-th feature embeddings. According to the network structure, after pooling the instance segmentation label of the input images at three different scales, we introduce pyramid labels $y_1 \in \mathbb{R}^{\frac{H_{LR}}{2p} \times \frac{W_{LR}}{2p} \times 1}$, $y_2 \in \mathbb{R}^{\frac{H_{LR}}{4p} \times \frac{W_{LR}}{4p} \times 1}$, and $y_3 \in \mathbb{R}^{\frac{H_{LR}}{8p} \times \frac{W_{LR}}{8p} \times 1}$ to supervise the training of the TS module by allocating positive labels to the tiles that contain objects. This hierarchical structure ensures the preservation of a larger number of tiles and minimizes the loss of positive instances. Finally, all tiles are divided into two groups to be processed by the subsequent refinement module.

Tile Refinement Module

As depicted in Figure 2, there are two Tile Refinement (TR) paths: Positive Tile Refinement (PTR) targeting object-containing tiles, and Negative Tile Refinement (NTR) for tiles with solely background pixels.

Positive Tile Refinement. For positive tiles, which contain objects, our refinement involves a transformer-based process for deep feature extraction and image reconstruction.
To be specific, for a given tile $T_{LR} \in \mathbb{R}^{8p \times 8p \times C_{LR}}$, a convolution layer first extracts the shallow feature $F_S \in \mathbb{R}^{8p \times 8p \times C_f}$, where $C_f$ is the number of feature channels.

Algorithm 1: Selective Super-Resolution (SSR)
Input: Low-resolution (LR) image data $I_{LR}$, tile class labels $y_1, y_2, y_3$, and ground-truth high-resolution (HR) image $I_{GT}$
Parameters: loss weight $\alpha$, learning rate $\eta$
1: Initialize model parameters $\theta$
2: for each epoch $t = 1, 2, \dots$ do
3:    $r_1, r_2, r_3 = f_{\theta_{IE}}(I_{LR})$  ▷ Generate representations with Image Encoder (IE)
4:    $s_i = \mathrm{GumbelSoftmax}(f_{\theta_{CL}}(r_i)) \; \forall i \in \{1, 2, 3\}$  ▷ Classify tiles with Classification Layer (CL)
5:    $L_{TS}(\theta) = \sum_{i=1}^{3} (-y_i \log(s_i) - (1 - y_i)\log(1 - s_i))$
6:    for each tile $T^1_{LR}, T^2_{LR}, \dots, T^N_{LR}$ do
7:       if $s^n_3 = 1$ then  ▷ Positive tiles
8:          $F^n_S = f_{\theta_{Conv}}(T^n_{LR})$  ▷ Shallow feature extraction with convolution layer
9:          $F^n_D = f_{\theta_{FE}}(F^n_S)$  ▷ Deep Feature Extraction (FE)
10:         $T^n_{HR} = f_{\theta_{IR}}(F^n_S + F^n_D)$  ▷ HR Image Reconstruction (IR)
11:      else  ▷ Negative tiles
12:         $F^n_S = f_{\theta_{Conv}}(T^n_{LR})$
13:         $T^n_{HR} = f_{\theta_{IR}}(F^n_S)$
14:      end if
15:   end for
16:   Group all output tiles $T_{HR}$ into entire images $I_{HR}$
17:   $L_{TR}(\theta) = \|I_{HR} - I_{GT}\|_1$
18:   $L_{SSR}(\theta) = L_{TS}(\theta) + \alpha L_{TR}(\theta)$
19:   $\theta \leftarrow \theta - \eta \nabla_\theta L_{SSR}(\theta)$  ▷ Update model
20: end for

Subsequently, a series of $K$ residual transformer blocks (RTBs), based on the Swin Transformer architecture, is employed to derive deep features $F_D \in \mathbb{R}^{8p \times 8p \times C_f}$ as follows:

$$F_i = RTB_i(F_{i-1}), \quad i = 1, 2, \dots, K \tag{4}$$

$$F_D = \mathrm{Conv}(F_K) \tag{5}$$

where $F_0$ is the input shallow feature $F_S$, $RTB_i$ denotes the $i$-th RTB, and $\mathrm{Conv}$ is the final convolution layer. The key point is the design of the RTB. Figure 3 presents its specific structure: it consists of a series of transformer layers followed by one convolution layer. Primarily, the inclusion of an additional convolutional layer at the end serves to optimize the transformer more effectively, yielding enhanced representations.
This is due to the fact that direct similarity comparisons across all tokens often introduce redundancy, as evidenced in various works (Zhang et al. 2018; Liang et al. 2021; Li et al. 2023; Wu et al. 2021; Xiao et al. 2021). Secondly, the skip connection within the RTB establishes a link between features at different levels and the image reconstruction block. This facilitates the aggregation of features from diverse levels, promoting the integration of multi-level information. Additionally, we adopt the distinct attention blocks proposed in (Chen et al. 2023) to activate more pixels for high-resolution image reconstruction.

Figure 3: Structure of RTB. The incorporation of a skip connection and convolution layer in RTB contributes to improved feature learning capabilities.

After obtaining both the shallow feature $F_S$ and the deep feature $F_D$, we merge them to reconstruct HR tiles using the following equation:

$$T_{HR} = IR(F_S + F_D) \tag{6}$$

where $IR$ is the image reconstruction block. By transmitting the shallow feature, which contains low-frequency information, and the deep feature, which highlights high-frequency details, via a long skip connection, this module effectively concentrates on reconstructing high-frequency information. In this block, the sub-pixel convolution layer (Shi et al. 2016) is employed to upsample the feature.

Negative Tile Refinement. Since intricate refinement of the background introduces irrelevant or even detrimental features that degrade the image quality for downstream tasks, we remove the transformer-based feature extraction block and directly reconstruct the negative tiles from shallow features, obtaining HR tiles with the same resolution as the refined positive tiles. After obtaining HR tiles via both PTR and NTR, we consolidate them to produce the refined HR image.

Loss Function

For the TS module, the loss is computed using cross-entropy.
With the introduction of a pyramid structure aimed at reducing false negative (FN) predictions and ensuring the selection of all positive patches, the losses from the three scales are combined to formulate the final loss as follows:

$$L_{TS} = \sum_{i=1}^{3} \left(-y_i \log(s_i) - (1 - y_i) \log(1 - s_i)\right) \tag{7}$$

For the TR module, we leverage the L1 loss to quantify the discrepancy between the generated high-resolution image $I_{HR}$ and the ground-truth high-resolution image $I_{GT}$:

$$L_{TR} = \|I_{HR} - I_{GT}\|_1 \tag{8}$$

Finally, the loss function of SSR is the weighted sum of these two modules:

$$L_{SSR} = L_{TS} + \alpha L_{TR} \tag{9}$$

where $\alpha$ is a hyper-parameter balancing the two loss terms during training.

Experimental Results

Datasets and Experimental Settings

We evaluate the effectiveness of our SSR model on three datasets: BDD100K (Yu et al. 2020), COCO 2017 (Lin et al. 2014), and MSRA10K (Cheng et al. 2015). These datasets provide segmentation or object detection labels, enabling us to generate tile classification labels for tile selection. BDD100K is the most complex of the three, focused on autonomous driving, containing approximately 100,000 high-quality images with an original resolution of 1280 × 720. Each image encompasses various objects, including vehicles, pedestrians, and eight other object categories. COCO 2017 comprises around 100,000 training images and 5,000 validation images, each containing multiple objects spanning 80 classes. MSRA10K provides 10,000 images and their segmentation labels. For all datasets, we resize the images into low-resolution (LR) images of 256 × 256 and high-resolution (HR) images of 1024 × 1024 by bicubic interpolation. The embedding dimension of the Tile Selection (TS) module is 96, while it is 180 for the Tile Refinement (TR) module. We set the learning rate to 0.00001. Each TL uses a depth of 2, a window size of 7, and 3 attention heads.
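The combined objective of Eqs. (7)-(9) can be sketched numerically (a minimal numpy version, not the authors' code; `preds`/`labels` stand for the per-scale tile scores and pyramid labels):

```python
import numpy as np

def ts_loss(preds, labels):
    # Eq. (7): binary cross-entropy accumulated over the three scales
    return sum(
        np.sum(-y * np.log(s) - (1.0 - y) * np.log(1.0 - s))
        for s, y in zip(preds, labels)
    )

def tr_loss(img_hr, img_gt):
    # Eq. (8): L1 distance between generated and ground-truth HR images
    return np.sum(np.abs(img_hr - img_gt))

def ssr_loss(preds, labels, img_hr, img_gt, alpha=1.0):
    # Eq. (9): weighted sum; alpha = 1 in the paper's experiments
    return ts_loss(preds, labels) + alpha * tr_loss(img_hr, img_gt)
```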
We employ a patch size of 2 for embedding, which corresponds to tile sizes of 16 × 16, 32 × 32, and 64 × 64, yielding tile labels of 4 × 4, 8 × 8, and 16 × 16, respectively. The weight parameter α for the loss function is set to 1, and the number of RTBs is 6. All experiments are conducted for 50 epochs on two Linux servers, each equipped with two NVIDIA A6000 GPUs.

Evaluation Metrics. Besides common image quality metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Fréchet Inception Distance (FID), and Kernel Inception Distance (KID), we introduce an additional metric named Kernel YOLO Distance (KYD) to demonstrate the robustness of our approach. To evaluate FID, we employ a pre-trained image classification model, Inception-v3, to generate features and compare the distributions of two image sets. For KYD, we focus on comparing features for object detection (OD) by utilizing a pre-trained YOLOv8 model to generate features and computing the kernel distance similarly to KID. Furthermore, we assess the performance of the TS module using the True Positive Rate (TPR) and the maximum F1 score (maxF) for accuracy, along with the average number of selected tiles for computational efficiency.

Tile Selection Results

For Tile Selection (TS), we explore various configurations to identify the optimal setup for our objectives. We adjusted the patch size of the first embedding layer, which consequently influenced the number of transformer layers (TLs) in the network. Specifically, setting the patch size to 2, 4, or 8 yields 5, 4, or 3 TLs, respectively. We also experimented with different loss functions, including pyramid labels at various scales and a single label. Additionally, we introduced a max block at the end of the TS module to consolidate the three predictions by selecting the maximum.
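The max-block consolidation just described can be sketched as follows. This is a hedged reading, not the authors' implementation: the finer decision grids are max-pooled down to the coarsest tile grid, and a tile is kept if any scale flags it, which is consistent with the max block selecting more tiles overall.

```python
import numpy as np

def block_max(grid, k):
    # max-pool a 2-D binary decision grid over non-overlapping k x k blocks
    h, w = grid.shape
    return grid.reshape(h // k, k, w // k, k).max(axis=(1, 3))

def max_block(s_fine, s_mid, s_coarse):
    # s_fine: 16x16, s_mid: 8x8, s_coarse: 4x4 binary tile decisions;
    # keep a coarse tile if any scale marks any of its sub-tiles positive
    return np.maximum(np.maximum(block_max(s_fine, 4), block_max(s_mid, 2)),
                      s_coarse)
```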
The results, presented in Table 1, highlighted several key insights. Firstly, increasing the number of transformer layers enhanced performance, albeit at the cost of increased computation. Secondly, the pyramid structure consistently yielded better outcomes. Lastly, while the max block improved selection results, it also introduced additional computational overhead for the refinement module.

Model           | TPR    | maxF   | #Tiles | #MACs
B (w/o pyramid) | 0.9137 | 0.8807 | 62%    | 563.21M
S (w/o pyramid) | 0.9086 | 0.8522 | 65%    | 155.30M
T (w/o pyramid) | 0.8973 | 0.8225 | 66%    | 53.11M
B (w/ pyramid)  | 0.9341 | 0.8967 | 62%    | 563.21M
S (w/ pyramid)  | 0.9048 | 0.8327 | 66%    | 155.30M
T (w/ pyramid)  | 0.8855 | 0.8384 | 62%    | 53.11M
B (w/ max)      | 0.9692 | 0.8893 | 77%    | 563.21M
S (w/ max)      | 0.9681 | 0.8572 | 79%    | 155.30M
T (w/ max)      | 0.9666 | 0.8400 | 81%    | 53.11M

Table 1: Results of the TS module. Images are encoded with 5, 4, or 3 transformer layers, corresponding to TS-Base (B), TS-Small (S), and TS-Tiny (T) in this table. More layers provide better performance at increasing computation. Adopting the pyramid structure for tile selection across different scales improves the network's effectiveness, and the integration of the max block further enhances the selection.
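The tile-selection accuracy metrics reported in Table 1 can be sketched directly. TPR is the recall over positive (object-containing) tiles; the paper's maxF is the maximum F1 over score thresholds, while this minimal sketch evaluates F1 at fixed binary decisions:

```python
import numpy as np

def tile_metrics(pred, label):
    # pred, label: binary arrays over tiles (1 = contains objects)
    tp = np.sum((pred == 1) & (label == 1))
    fp = np.sum((pred == 1) & (label == 0))
    fn = np.sum((pred == 0) & (label == 1))
    tpr = tp / (tp + fn)                      # recall over positive tiles
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    return tpr, f1
```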
The top example juxtaposes a foreground generated via SR with a coarse background upscaled by bicubic interpolation, while the bottom image maintains high-frequency details exclusively in the background. The top dataset exhibited superior image quality, as indicated by lower FID and higher SSIM. This quantitative assessment provides further support for our SSR design.

Figure 4: By replacing either the background or foreground pixels with bicubic-interpolated upscaled images, we effectively simulate the removal of high-frequency details from these regions. The results illustrate that the high-frequency information of the background introduced by the SR module impairs the generated image quality.

Quantitative Results. Table 2 shows the quantitative comparison of our approach with the state-of-the-art methods (Liang et al. 2021; Chen et al. 2023) on three different datasets, considering a 4× upscale of the input images.

           |               BDD100K                  |       COCO 2017       |       MSRA10K
Method     | PSNR↑ | SSIM↑  | FID↓  | KID↓   | KYD↓  | SSIM↑  | FID↓  | KYD↓  | SSIM↑  | FID↓  | KYD↓
SwinIR     | 30.08 | 0.8757 | 24.87 | 0.0104 | 0.0092 | 0.8420 | 8.483 | 0.0036 | 0.9441 | 9.502 | 0.0035
HAT        | 30.09 | 0.8750 | 26.78 | 0.0115 | 0.0096 | 0.8391 | 10.03 | 0.0037 | 0.9473 | 9.577 | 0.0030
SSR (ours) | 29.91 | 0.8833 | 10.41 | 0.0017 | 0.0041 | 0.8512 | 2.844 | 0.0006 | 0.9653 | 1.905 | 0.0002

Table 2: Comparison of SR results for a 4× upscaling factor. For SwinIR and HAT, we utilize the official tile-based implementation of HAT for an equitable assessment. Both SwinIR and HAT demonstrate results inferior to our SSR in terms of image quality across all three datasets. Images generated by SSR also perform better for downstream tasks, with lower FID and KYD.

The results clearly demonstrate the consistent superiority of our proposed method, SSR, in terms of various evaluation metrics. This robust performance underlines SSR's potential to generate images with improved features for downstream
tasks such as image classification and object detection, as indicated by its lower FID and KYD. In Table 3, we extend the comparison to 2× and 3× magnification factors on BDD100K. Given SwinIR's challenges on this dataset, we focus on contrasting HAT with our method. The results showcased in this table demonstrate SSR's proficiency across different scales.

Method     | Scale | SSIM   | FID   | KID    | KYD
HAT        | ×2    | 0.8764 | 27.58 | 0.0123 | 0.0092
SSR (ours) | ×2    | 0.8890 | 10.28 | 0.0016 | 0.0037
HAT        | ×3    | 0.8768 | 30.88 | 0.0150 | 0.0096
SSR (ours) | ×3    | 0.8918 | 10.80 | 0.0017 | 0.0027

Table 3: Quantitative comparison with HAT for 2× and 3× up-sampling. SSR consistently outperforms HAT across all evaluation metrics, demonstrating its superiority in refining images at various scales.

Visual Comparison. Figure 5 illustrates the visual comparison across the three datasets. Besides the ground truth (GT) images and the images generated by HAT and our SSR approach, we also present the features extracted from these images. The features derived from SSR-generated images exhibit a closer resemblance to the GT images, emphasizing the fidelity of our approach.

Method  | #Tiles | SSIM   | FID   | KID    | KYD
w/ max  | 62%    | 0.8833 | 10.41 | 0.0017 | 0.0041
w/o max | 77%    | 0.8835 | 10.62 | 0.0017 | 0.0041

Table 4: With the max block, the visual quality of images generated by SSR can be slightly improved with an additional 15% computation overhead.

Dataset   | Method | #Tiles | #Params (M) | #MACs (G)
BDD100K   | HAT    |        | 1675.99     | 191.06
          | SSR    | 62%    | 1233.13     | 119.40
COCO 2017 | HAT    |        | 1675.99     | 191.06
          | SSR    | 65%    | 1233.13     | 125.15
MSRA10K   | HAT    |        | 1675.99     | 191.06
          | SSR    | 57%    | 1233.13     | 109.82

Table 5: Computation cost comparison. Compared to HAT, SSR achieves about a 40% computation reduction on all three datasets. All metrics are evaluated for a 256 × 256 input.

Efficiency. Actually, with more tiles selected by the max block, the generated image quality remains quite similar, as shown in Table 4. Therefore, we opt not to include this block in our final algorithm.
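The KYD metric reported above can be sketched as a kernel distance between feature sets. This is a hedged approximation: the paper computes KYD "similarly to KID" on YOLOv8 features, and KID's standard kernel is the cubic polynomial $k(x, y) = (x \cdot y / d + 1)^3$; the exact estimator (biased vs. unbiased, subset averaging) is an assumption here.

```python
import numpy as np

def poly_kernel(x, y):
    # cubic polynomial kernel used by KID; d is the feature dimension
    d = x.shape[1]
    return (x @ y.T / d + 1.0) ** 3

def kernel_distance(feats_a, feats_b):
    # biased squared-MMD estimate: always >= 0, zero for identical sets
    k_aa = poly_kernel(feats_a, feats_a).mean()
    k_bb = poly_kernel(feats_b, feats_b).mean()
    k_ab = poly_kernel(feats_a, feats_b).mean()
    return k_aa + k_bb - 2.0 * k_ab
```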
For our SSR, only positive tiles are directed through the expensive SR module. In Table 5, we summarize the total number of parameters and the computation cost for the three datasets. It underscores the efficiency of SSR, with about a 40% computation reduction.

Figure 5: Visual comparison with the state-of-the-art methods for 4× upscaling. We generate features with the Inception-v3 model and visualize them. The features extracted from SSR images exhibit a higher degree of similarity to the GT features.

Ablation Study

Pre-training with ImageNet. Similar to other computer vision work, SSR can substantially benefit from a pre-training strategy employing a large dataset such as ImageNet. A comparison of results obtained with and without pre-training highlights the enhancement in image quality, with PSNR scores improving from 29.56 to 29.91. Figure 6 presents one example for visual comparison.

Figure 6: With pre-training, SSR can generate better images compared to the outcomes without pre-training.

Negative Tile Refinement. For background pixels, high-frequency information recovery is unnecessary. We experimented with two different designs: the first employs a single upsample layer, while the second retains the convolution layers from the Positive Tile Refinement (PTR) path and omits the transformer blocks. The results are shown in Table 6. Adding convolution layers in NTR marginally enhances image quality. This enhancement can be attributed to compensation for the few positive tiles that may be missed by the selection module. However, it comes at the cost of a 19.2% increase in computational load, so it is a trade-off between computation and image generation performance.
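The "about 40% computation reduction" claim follows directly from the Table 5 MACs figures for a 256 × 256 input; a quick arithmetic check:

```python
# Reduction in MACs relative to the full-image baseline (HAT: 191.06 GMACs).

def macs_reduction(full_macs, ssr_macs):
    return 1.0 - ssr_macs / full_macs

bdd100k = macs_reduction(191.06, 119.40)   # BDD100K, 62% tiles refined
coco = macs_reduction(191.06, 125.15)      # COCO 2017, 65% tiles
msra10k = macs_reduction(191.06, 109.82)   # MSRA10K, 57% tiles
```

The reductions come out near 37.5%, 34.5%, and 42.5%, i.e., roughly 40% across the three datasets.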
Method   | PSNR  | SSIM   | FID   | KID    | KYD
w/o conv | 29.91 | 0.8833 | 10.41 | 0.0017 | 0.0041
w/ conv  | 30.03 | 0.8863 | 9.56  | 0.0013 | 0.0042

Table 6: With convolution layers in the NTR path, SSR showcases an improvement in image generation.

Conclusion

In this paper, we delve into the cause behind the image quality gap between images generated by conventional super-resolution (SR) techniques and high-resolution (HR) ground-truth images. Our investigation reveals that high-frequency refinement of background pixels undermines the overall image feature generation for downstream tasks. To address this challenge and concurrently reduce computational overhead, we introduce a novel algorithm termed Selective Super-Resolution (SSR). By leveraging a cost-efficient transformer-based network, we partition the input low-resolution (LR) image into non-overlapping tiles and assess the presence of objects within each tile. By exclusively refining the high-frequency characteristics of object-containing tiles, we improve visual quality at a lower computation cost. Our approach's superior performance is substantiated across three distinct datasets employing diverse evaluation metrics. Specifically, experiments on BDD100K exhibit an improvement in FID from 26.78 to 10.41 with a 40% computation reduction.

Acknowledgments

This work is supported by the Center for the Co-Design of Cognitive Systems (CoCoSys), one of seven centers in Joint University Microelectronics Program 2.0 (JUMP 2.0), a Semiconductor Research Corporation (SRC) program sponsored by the Defense Advanced Research Projects Agency (DARPA).

References

Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In European Conference on Computer Vision, 213–229. Springer.

Caron, M.; Touvron, H.; Misra, I.; Jégou, H.; Mairal, J.; Bojanowski, P.; and Joulin, A. 2021.
Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, 9650–9660. Chen, X.; Wang, X.; Zhou, J.; Qiao, Y.; and Dong, C. 2023. Activating more pixels in image super-resolution transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22367–22377. Cheng, M.-M.; Mitra, N. J.; Huang, X.; Torr, P. H. S.; and Hu, S.-M. 2015. Global Contrast based Salient Region Detection. IEEE TPAMI, 37(3): 569–582. Chu, X.; Tian, Z.; Wang, Y.; Zhang, B.; Ren, H.; Wei, X.; Xia, H.; and Shen, C. 2021. Twins: Revisiting the design of spatial attention in vision transformers. Advances in Neural Information Processing Systems, 34: 9355–9366. Dong, C.; Loy, C. C.; He, K.; and Tang, X. 2014. Learning a deep convolutional network for image super-resolution. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part IV 13, 184–199. Springer. Dong, X.; Bao, J.; Chen, D.; Zhang, W.; Yu, N.; Yuan, L.; Chen, D.; and Guo, B. 2022. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12124–12134. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative adversarial networks. Communications of the ACM, 63(11): 139–144. He, K.; Chen, X.; Xie, S.; Li, Y.; Doll´ar, P.; and Girshick, R. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 16000–16009. 
Huang, Z.; Ben, Y.; Luo, G.; Cheng, P.; Yu, G.; and Fu, B. 2021. Shuffle transformer: Rethinking spatial shuffle for vision transformer. arXiv preprint arXiv:2106.03650. Jang, E.; Gu, S.; and Poole, B. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144. Ledig, C.; Theis, L.; Husz´ar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. 2017. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4681–4690. Li, K.; Wang, Y.; Zhang, J.; Gao, P.; Song, G.; Liu, Y.; Li, H.; and Qiao, Y. 2023. Uniformer: Unifying convolution and self-attention for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; and Timofte, R. 2021. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF international conference on computer vision, 1833–1844. Lim, B.; Son, S.; Kim, H.; Nah, S.; and Mu Lee, K. 2017. Enhanced deep residual networks for single image superresolution. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 136–144. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Doll´ar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, 740–755. Springer. Liu, Z.; Hu, H.; Lin, Y.; Yao, Z.; Xie, Z.; Wei, Y.; Ning, J.; Cao, Y.; Zhang, Z.; Dong, L.; et al. 2022. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 12009–12019. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, 10012–10022. 
Mei, Y.; Fan, Y.; and Zhou, Y. 2021. Image super-resolution with non-local sparse attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3517–3526.

Niu, B.; Wen, W.; Ren, W.; Zhang, X.; Yang, L.; Wang, S.; Zhang, K.; Cao, X.; and Shen, H. 2020. Single image super-resolution via a holistic attention network. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XII, 191–207. Springer.

Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A. P.; Bishop, R.; Rueckert, D.; and Wang, Z. 2016. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1874–1883.

Tai, Y.; Yang, J.; and Liu, X. 2017. Image super-resolution via deep recursive residual network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3147–3155.

Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Wang, X.; Xie, L.; Dong, C.; and Shan, Y. 2021. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1905–1914.

Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; and Change Loy, C. 2018. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops.

Wu, H.; Xiao, B.; Codella, N.; Liu, M.; Dai, X.; Yuan, L.; and Zhang, L. 2021. CvT: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 22–31.

Xiao, T.; Singh, M.; Mintun, E.; Darrell, T.; Dollár, P.; and Girshick, R. 2021. Early convolutions help transformers see better. Advances in Neural Information Processing Systems, 34: 30392–30400.

Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; and Darrell, T. 2020. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Zhang, T.; Kasichainula, K.; Jee, D.-W.; Yeo, I.; Zhuo, Y.; Li, B.; Seo, J.-S.; and Cao, Y. 2023a. Improving the efficiency of CMOS image sensors through in-sensor selective attention. In 2023 IEEE International Symposium on Circuits and Systems (ISCAS), 1–4. IEEE.

Zhang, T.; Kasichainula, K.; Zhuo, Y.; Li, B.; Seo, J.-S.; and Cao, Y. 2023b. Patch-based selection and refinement for early object detection. arXiv preprint arXiv:2311.02274.

Zhang, W.; Liu, Y.; Dong, C.; and Qiao, Y. 2019. RankSRGAN: Generative adversarial networks with ranker for image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3096–3105.

Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; and Fu, Y. 2018. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), 286–301.
Exploring Base-Class Suppression with Prior Guidance for Bias-Free One-Shot Object Detection

Wenwen Zhang1, Yun Hu2, Hangguan Shan1, Eryun Liu1*
1College of Information Science and Electronic Engineering, Zhejiang University, China
2School of Information Science and Technology, ShanghaiTech University, China
{wenwenzhang, hshan, eryunliu}@zju.edu.cn, huyun@shanghaitech.edu.cn

Abstract

One-shot object detection (OSOD) aims to detect all object instances of the given category specified by a query image. Most existing studies in OSOD endeavor to establish effective cross-image correlation with limited query information, however, ignoring the problems of model bias towards the base classes and generalization degradation on the novel classes. Observing this, we propose a novel algorithm, namely the Base-class Suppression with Prior Guidance (BSPG) network, to achieve bias-free OSOD. Specifically, objects of base categories can be detected by a base-class predictor and eliminated by a base-class suppression (BcS) module. Moreover, a prior guidance (PG) module is designed to calculate the correlation of high-level features in a non-parametric manner, producing a class-agnostic prior map with unbiased semantic information to guide the subsequent detection process. Equipped with the two proposed modules, we endow the model with a strong discriminative ability to distinguish the target objects from distractors belonging to the base classes. Extensive experiments show that our method outperforms previous techniques by a large margin and achieves new state-of-the-art performance under various evaluation settings.

Introduction

Benefiting from the flourishing of deep convolutional neural networks, object detection has made tremendous progress over the past few years (Ren et al. 2015; Redmon et al. 2016; He et al. 2017). Nevertheless, most of the advanced methods heavily rely on large-scale labeled datasets (Deng et al. 2009; Lin et al.
2014), and they may struggle with new applications where novel-class objects are not witnessed during the training phase. In light of the powerful cognitive ability of humans to recognize new objects with only a few examples, few-shot learning (FSL) has emerged as a promising technique (Vinyals et al. 2016; Sung et al. 2018). FSL constructs models that can generalize to new classes with limited annotated data, offering a potential solution for object detection in scenarios involving novel-class objects. In this paper, we undertake the application of FSL in the field of object detection, termed one-shot object detection (OSOD), where the model aims to detect all instances of the given category specified by only one query image patch.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Comparison between existing frameworks and ours. Existing models mostly exhibit a preference for familiar base classes (e.g., person) rather than novel objects specified by the query image (e.g., horse). We propose a base-class suppression module to eliminate base-class distractors, and a non-parametric prior guidance module to generate a class-agnostic prior map to guide the detection process.

In recent years, the field of few-shot object detection has thrived (Fan et al. 2020a; Chen et al. 2021; Han et al. 2022), in which prevalent approaches often incorporate transfer learning, meta learning, and metric learning to deal with the task.
Most existing works in OSOD adopt the metric-learning paradigm and recognize new objects based on similarity metrics between image pairs without fine-tuning (Hsieh et al. 2019; Zhang et al. 2022b; Yang et al. 2022). However, they are generally dedicated to exploring effective cross-image feature correlation to better use the limited information, neglecting the phenomenon of model bias towards the base classes and generalization degradation on the novel classes (Fan et al. 2021; Lang et al. 2022). Due to the extremely unbalanced distribution of base and novel-class datasets, the learned model will inevitably fit the feature distribution of the abundant training data and tend to exhibit a preference for the base classes over the given novel category. As illustrated in Fig. 1(a), the conventional OSOD model easily suffers from false positive detections on base-class objects, leading to a decrease in performance for novel categories. To address the aforementioned problems, we tackle the OSOD task from a new perspective, improving the model by suppressing the distractors belonging to base classes. Specifically, a complementary branch named the base-class predictor is introduced to detect objects of base classes; it is pre-trained on the base-class dataset following a traditional paradigm. Then, we rectify the coarse detection results derived from the general OSOD network (novel-class predictor) with a Base-class Suppression (BcS) module. As shown in Fig. 1(b), equipped with the BcS module, the falsely detected objects are obviously suppressed, thus realizing accurate detection for the specified novel category and explicitly mitigating the model bias problem.
Rather than pursuing well-designed interaction, which may involve numerous learnable parameters and potentially lead to a bias towards the base classes, we propose a non-parametric prior guidance module to facilitate the recognition of novel classes. In this module, we calculate the semantic relation between high-level features to generate a class-agnostic prior map, in which the regions of target features belonging to the co-existing objects can be activated, as illustrated in Fig. 1(b). The prior map, with its unbiased semantic cues, can guide the subsequent detection procedure and help the model distinguish target objects from the background. Since the prior generation is non-parametric, the model can learn more general patterns and retain generalization ability, thereby implicitly alleviating the bias problem. By integrating these two proposed modules, we establish a novel OSOD framework, termed the Base-class Suppression with Prior Guidance (BSPG) network. These two components complement each other to promote bias-free OSOD and enhance generalization for novel classes.

In summary, our main contributions can be summarized as follows:

• We propose a BSPG network to address the problem of model bias towards base classes, which has been neglected in previous works on OSOD.
• We introduce a base-class predictor to detect objects of base classes, and a base-class suppression module to eliminate them, facilitating the detection of novel objects.
• We design a non-parametric prior guidance module to generate a class-agnostic prior map with unbiased semantic cues and guide the detection procedure.
• Extensive experiments illustrate that our proposed approach yields new state-of-the-art results, which validate its effectiveness and robustness.

Related Work

General Object Detection. Object detection aims to locate objects of seen classes and assign a category label to each object instance. General detectors can be broadly divided into two streams: one-stage methods (Liu et al.
2016; Redmon and Farhadi 2017; Duan et al. 2019) and two-stage ones (Girshick 2015; Ren et al. 2015). Currently, OSOD is still in its early stage of research. To pursue high accuracy, our model is developed based on the two-stage detector Faster R-CNN (Ren et al. 2015).

Few-Shot Object Detection. Few-shot object detection (FSOD) performs object detection on a target image conditioned on a limited number of query images. Existing FSOD methods can generally be categorized into three directions: transfer-learning, meta-learning, and metric-learning methods. Transfer-learning based models are initially pre-trained on abundant base data and subsequently fine-tuned on a limited set of novel examples; DeFRCN (Qiao et al. 2021) performs decoupling among multiple modules of Faster R-CNN to boost performance. Meta-learning based methods are dedicated to learning efficient meta knowledge and fostering adaptation to novel categories; Meta R-CNN (Yan et al. 2019) extends Faster R-CNN by applying channel-wise attention to reweight the RoI features. Metric-learning based methods focus on exploring cross-image correlations to directly detect novel objects without fine-tuning; BHRL (Yang et al. 2022) proposes a multi-level relation module to establish semantic relations. OSOD, as an extreme case of FSOD, involves the localization and classification of novel objects given only one sample. Recent studies (Sun et al. 2021; Yang et al. 2021) suggest that the box regressor is capable of accurately localizing objects. Owing to the severely unbalanced data distribution, the primary source of generalization degradation is misclassifying instances of base classes as objects of interest. Therefore, we employ a base-class predictor to explicitly detect base-class objects and further suppress them, and perform prior guidance in a non-parametric manner to generate unbiased prior knowledge.
Method
Problem Definition
Following the common configuration in previous literature (Yang et al. 2022), the object classes of the dataset are partitioned into two disjoint parts, $C_b \cap C_n = \emptyset$. Here, $C_b$ denotes the base classes with abundant annotated data for training, and $C_n$ represents the novel classes with only one instance per category. We train our model with the episodic paradigm, where each episode contains a target-query pair. The one-shot object detector is expected to identify all instances of the category specified by the query image q in the target image I. The model is continuously optimized using the abundant available data of the base classes $C_b$ during the training phase. Once training is completed, the detector can directly predict objects of the novel classes $C_n$ in the target image conditioned on one query image, without fine-tuning.

Overview
Fig. 2 sketches the overall architecture of our BSPG network. It comprises four essential components: a base-class predictor, a novel-class predictor, a base-class suppression module, and a prior guidance module. We apply a two-stage training strategy to train the base and novel predictors, respectively.
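As a concrete illustration of this episodic setup, the following sketch uses hypothetical class IDs and annotation structures (it is not part of any released implementation) to show the disjoint base/novel split and base-class-only episode sampling:

```python
import random

def split_classes(all_classes, novel_classes):
    """Partition the label set into disjoint base (Cb) and novel (Cn) parts."""
    novel = set(novel_classes)
    base = [c for c in all_classes if c not in novel]
    return base, sorted(novel)

def sample_episode(annotations, base_classes, rng):
    """One training episode: a target image paired with a query class that
    is among the base classes present in that target image."""
    image_id, classes_in_image = rng.choice(annotations)
    candidates = [c for c in classes_in_image if c in base_classes]
    query_class = rng.choice(candidates)
    return image_id, query_class

# Toy example: 6 classes, 2 of them held out as novel.
base, novel = split_classes(list(range(6)), [4, 5])
assert set(base).isdisjoint(novel)  # Cb ∩ Cn = ∅

rng = random.Random(0)
anns = [("img_0", [0, 4]), ("img_1", [1, 2])]  # hypothetical annotations
img, qc = sample_episode(anns, set(base), rng)
assert qc in base and qc not in novel  # training queries come from Cb only
```

At test time the only change is that the query image depicts a novel class, while the trained weights stay fixed.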
Figure 2: The overall architecture of the proposed BSPG, which contains four key components: a base-class predictor, a novel-class predictor, a base-class suppression, and a prior guidance module. Based on the base-class predictions, we refine the coarse novel-class predictions by base-class suppression to yield the final prediction results. Meanwhile, the prior guidance module generates a class-agnostic prior map, providing unbiased semantic cues to effectively guide the subsequent detection procedure.

In the first stage, we optimize the base-class predictor on the base-class dataset following the traditional Faster R-CNN paradigm. In the second stage, we fix the parameters of the base predictor and only optimize the novel predictor and the BcS module. Concretely, the two predictors are deployed to identify the objects of base and novel classes, respectively. Then, we rectify the coarse detection results obtained from the novel predictor by base-class suppression to yield the final results. Meanwhile, the non-parametric PG module calculates the cross-image correlation between high-level features and generates a class-agnostic prior map.
The prior map serves as an indicator, providing unbiased semantic hints to guide the detection process effectively.

Base-class Suppression
Base-class Predictor. To alleviate the performance degradation caused by misclassifying base-class instances as target objects, we introduce a base-class predictor. This branch is specifically designed to explicitly detect objects belonging to the base categories. However, we noticed that the proposals generated by the base-class predictor are mostly active on the base categories, which may burden the subsequent classification task for novel-class detection. For simplicity, we only focus on handling the misclassifications produced by the novel-class predictor. Therefore, the RPN of the base predictor is removed in the second training phase, and the proposals generated by the novel predictor are delivered to the base predictor for further proposal feature retrieval and classification. The base-class classifier $F_B^{cls}$ yields the base-class classification results for the proposal features $P_B$:

$Y_B = F_B^{cls}(P_B) \in \mathbb{R}^{K \times (1+S)}$,  (1)

where K is the number of proposals, S denotes the number of base categories (60 for the COCO dataset (Lin et al. 2014) and 16 for the VOC dataset (Everingham et al. 2010) under the 1-shot setting), and $Y_B$ denotes the base-class prediction probability over the S categories.

Novel-class Predictor. Given a query image q and a target image I, the novel-class predictor aims to detect the object instances in I belonging to the category specified by q. Specifically, the features $\phi(q)$ and $\phi(I)$, extracted by a siamese ResNet-50 (He et al. 2016) with a feature pyramid network (FPN) (Lin et al. 2017), are fed into the matching module (Michaelis et al. 2018) to accomplish feature matching.
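Under hypothetical tensor shapes, the matching step just described can be sketched in NumPy; the learnable 1x1 convolution is stood in for by a plain matrix multiplication, so this is an illustration of the computation pattern rather than the trained module itself:

```python
import numpy as np

def matching_features(target_feat, query_feat, w):
    """Sketch of the matching module: GAP the query feature, take the
    point-wise L1 difference with the target feature, concatenate along
    channels, and project back with a 1x1 conv (here a matmul `w`)."""
    # target_feat: (C, H, W), query_feat: (C, Hq, Wq), w: (C, 2C)
    gap_q = query_feat.mean(axis=(1, 2), keepdims=True)    # global average pooling
    f_diff = np.abs(target_feat - gap_q)                   # broadcasts over H, W
    f_cat = np.concatenate([f_diff, target_feat], axis=0)  # (2C, H, W)
    return np.einsum('oc,chw->ohw', w, f_cat)              # 1x1 conv as matmul

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
t = rng.standard_normal((C, H, W))       # hypothetical target feature
q = rng.standard_normal((C, 5, 5))       # hypothetical query feature
w = rng.standard_normal((C, 2 * C))      # stand-in for Conv weights
out = matching_features(t, q, w)
assert out.shape == (C, H, W)
```

The output plays the role of the discriminative similarity feature that is propagated to the RPN.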
$F_{diff} = |\phi(I) - \mathrm{GAP}(\phi(q))|$,  (2)
$F_{match} = \mathrm{Conv}([F_{diff}, \phi(I)])$,  (3)

where $|\cdot|$ denotes the absolute value operator, GAP denotes the global average pooling operation, Conv denotes convolutional layers, $F_{diff}$ denotes the point-wise L1 difference, and $[\cdot, \cdot]$ denotes the concatenation operation. By propagating the discriminative similarity feature $F_{match}$ to the RPN, it can generate more of the expected proposals with high potential of including target objects, and filter out negative objects not belonging to the query category. Then we use the RoI Align layer to obtain proposal features $P_N$ based on the proposals. To distinguish whether the proposals belong to the target category or not, we follow BHRL (Yang et al. 2022) and introduce a hierarchical relation module A consisting of contrastive-level, salient-level, and attention-level correlations, whose goal is to comprehensively measure the semantic relation and re-score proposals. Specifically, for the contrastive-level relation $A_c$, we compute the absolute difference between the query vector and each position of the proposal feature $P_N$. For the salient-level relation $A_s$, the query feature $Q_N$ is regarded as a convolution kernel and correlated with the proposal feature $P_N$ in a depth-wise manner (Fan et al. 2020a). For the attention-level relation $A_a$, we calculate a spatial attention matrix $W_s$ to reweight the query feature and further compute the absolute difference.

$A(P_N, Q_N) = [A_c(P_N, Q_N), A_s(P_N, Q_N), A_a(P_N, Q_N), P_N]$,  (4)
$A_c(P_N, Q_N) = \mathrm{Conv}(|\mathrm{GAP}(Q_N) - P_N|)$,  (5)
$A_s(P_N, Q_N) = \mathrm{Conv}(\mathrm{GAP}(Q_N) \otimes P_N)$,  (6)
$A_a(P_N, Q_N) = \mathrm{Conv}(|W_s Q_N - P_N|)$,  (7)
$W_s = \mathrm{softmax}((\mathrm{Conv}(P_N))^T \mathrm{Conv}(Q_N))$,  (8)

where $\otimes$ represents feature correlation in a depth-wise manner. Subsequently, we deliver the relation features to the novel-class classifier $F_N^{cls}$ and obtain the coarse novel-class classification results $Y_N$:

$Y_N = F_N^{cls}(A(P_N, Q_N)) \in \mathbb{R}^{K \times 2}$.  (9)

Base-class Suppression.
The BcS module is designed to eliminate false predictions on the base classes. For each proposal, if the highest base-class prediction probability over the S categories (excluding the background) exceeds the threshold α, the base-class result B equals that highest score; otherwise, B is set to 0:

$B = \begin{cases} \max_{i \in \{1,2,\dots,S\}} Y_B^i, & \text{if } \max_{i \in \{1,2,\dots,S\}} Y_B^i > \alpha \\ 0, & \text{otherwise}, \end{cases}$  (10)

where α is set to 0.3 for the COCO dataset and 0.7 for the VOC dataset. Finally, the coarse detection results $Y_N$ derived from the novel-class predictor are rectified by the base-class suppression, resulting in the final results $Y_F$:

$Y_F = [Y_F^f, Y_F^b] = [\Psi_f(Y_N^f, B), \Psi_b(Y_N^b, B)]$,  (11)
$Y_F^f = W_N^f \cdot Y_N^f + W_B^f \cdot B$,  (12)
$Y_F^b = W_N^b \cdot Y_N^b + W_B^b \cdot B$,  (13)

where $Y_N^f$ and $Y_N^b$ denote the foreground and background probabilities predicted by the novel-class predictor, respectively, and $\Psi_f$ and $\Psi_b$ are learnable fully connected layers deployed to refine the coarse novel-class scores by suppressing false predictions on base-class objects. As expected in our experiments, when B is greater than 0, the learnable weight $W_N^f$ is positive and $W_B^f$ is negative, which decreases the final foreground score $Y_F^f$; conversely, both $W_N^b$ and $W_B^b$ are positive, which increases the background score, thereby suppressing base-class objects. The BcS module adaptively learns to assign the weights to the base and novel-class prediction results: the higher the base-class confidence score, the more strongly the proposal is suppressed.

Loss Function. For the second stage, the overall loss for training our model is:

$L = L_N^{RPN} + L_N^{ROI}$,  (14)
$L_N^{ROI} = L_N^{Reg} + \lambda_1 L_N^{RP}(Y_N, G) + \lambda_2 L_F^{RP}(Y_F, G)$,  (15)

where $L_N^{RPN}$ is the RPN loss of Faster R-CNN, and $L_N^{ROI}$ contains the classification and regression losses in the ROI head of the novel predictor. $L_N^{Reg}$ is the regression loss; $L_N^{RP}$ and $L_F^{RP}$ are the ratio-preserving losses defined in BHRL (Yang et al. 2022) for evaluating the coarse novel-class and final classification results, respectively. G denotes the ground-truth label, and $\lambda_1$ and $\lambda_2$ are both set to 0.5.

Prior Guidance
We observe that the well-designed modules with many learnable parameters in current prevalent OSOD models may inevitably introduce a bias towards base classes, hindering the accurate detection of novel classes. Inspired by (Tian et al. 2022), we propose a non-parametric prior guidance module to mine semantic correlations without sacrificing generalization ability. Concretely, our network adopts ResNet-50 (He et al. 2016) as a frozen backbone Φ to encode high-level features $\Phi(q)$ and $\Phi(I)$ from the raw image pair, where the parameters are pre-trained on the reduced ImageNet following the OSOD settings (Yang et al. 2022). Next, we calculate the element-wise relation map $R \in \mathbb{R}^{H_I^h W_I^h \times H_q^h W_q^h}$ between $\Phi(I) \in \mathbb{R}^{C^h \times H_I^h \times W_I^h}$ and $\Phi(q) \in \mathbb{R}^{C^h \times H_q^h \times W_q^h}$ using the cosine similarity function:

$R = \Theta(\Phi(I))^T \Theta(\Phi(q))$,  (16)

where Θ denotes L2 normalization. For each element in the target feature $\Phi(I)$, we select the maximum similarity among all elements of the query feature as the relation value, generating the prior map $P \in \mathbb{R}^{H_I^h W_I^h \times 1}$:

$P(i) = \max_{j \in \{1,2,\dots,H_q^h W_q^h\}} R(i, j)$.  (17)

A high activation value in P for an element of the target feature indicates that this element has an intense correlation with at least one element of the query feature (Tian et al. 2022).

Figure 3: Visualizations of the prior map generated by our prior guidance module via high-level feature correlation. We show the query image and prior map from left to right.

Method                          | Base class: split-1 split-2 split-3 split-4 | Avg  | Novel class: split-1 split-2 split-3 split-4 | Avg
CoAE (Hsieh et al. 2019)        | 42.2 40.2 39.9 41.3 | 40.9 | 23.4 23.6 20.5 20.4 | 22.0
AIT (Chen, Hsieh, and Liu 2021) | 50.1 47.2 45.8 46.9 | 47.5 | 26.0 26.4 22.3 22.6 | 24.3
SaFT (Zhao, Guo, and Lu 2022)   | 49.2 47.2 47.9 49.0 | 48.3 | 27.8 27.6 21.0 23.0 | 24.9
BHRL (Yang et al. 2022)         | 56.0 52.1 52.6 53.4 | 53.5 | 26.1 29.0 22.7 24.5 | 25.6
BHRL* (Yang et al. 2022)        | 56.3 52.3 52.5 53.2 | 53.6 | 26.2 28.5 22.3 24.6 | 25.4
Ours                            | 57.1 54.1 54.0 54.6 | 55.0 | 27.7 30.7 24.6 26.3 | 27.3
Table 1: Performance comparison with OSOD methods on the COCO dataset in terms of AP50 score (%). * represents our re-implemented results with the code released by the authors.

Method | Base classes: plant sofa tv car bottle boat chair person bus train horse bike dog bird mbike table | mAP  | Novel classes: cow sheep cat aero | mAP
CoAE   | 24.9 50.1 58.8 64.3 32.9 48.9 14.2 53.2 71.5 74.7 74.0 66.3 75.7 61.5 68.5 42.7 | 55.1 | 78.0 61.9 72.0 43.5 | 63.8
AIT    | 46.4 60.5 68.0 73.6 49.0 65.1 26.6 68.2 82.6 85.4 82.9 77.1 82.7 71.8 75.1 60.0 | 67.2 | 85.5 72.8 80.4 50.2 | 72.2
BHRL   | 57.5 49.4 76.8 80.4 61.2 58.4 48.1 83.3 74.3 87.3 80.1 81.0 87.2 73.0 78.8 38.8 | 69.7 | 81.0 67.9 86.9 59.3 | 73.8
BHRL*  | 57.0 53.1 77.9 82.4 61.2 59.1 48.0 83.7 75.0 86.9 80.8 81.3 85.7 70.2 76.1 41.2 | 70.0 | 81.0 65.9 85.4 58.1 | 72.6
Ours   | 55.0 55.6 78.3 81.4 62.5 59.5 50.9 81.7 74.7 87.5 82.1 81.0 85.2 73.9 79.1 39.5 | 70.5 | 80.6 67.4 84.6 61.4 | 73.5
Table 2: Performance comparison with OSOD methods on the PASCAL VOC dataset in terms of AP50 score (%). * represents our re-implemented results with the code released by the authors.

As illustrated in Fig. 3, for groups (a-d), given different query images for the same target image, the corresponding target regions in the prior maps are respectively activated, which validates the strong generalization ability. The target images in groups (e-f) contain multiple small objects and large objects, respectively.
Our prior map can serve as an indicator that coarsely locates the objects of interest despite complex scenarios containing scale variations and appearance changes, which demonstrates the effectiveness and robustness of our module. Notably, the above steps are independent of the training process, which maintains generalization ability and mitigates the bias problem. Then, we normalize the values of the prior map to between 0 and 1, and reshape the map to match the shape of the target feature $\phi(I)$ using an interpolation operation. Finally, the prior map is treated as guidance to reweight the target feature in the ROI head, facilitating the subsequent proposal feature retrieval and novel-class classification:

$\hat{\phi}(I) = F_{reshape}(F_{norm}(P)) \odot \phi(I)$,  (18)

where $\odot$ stands for element-wise multiplication.

Experiments
Datasets and Settings
Datasets and Evaluation Metrics. For a fair comparison, we follow the protocol in previous works (Hsieh et al. 2019; Chen, Hsieh, and Liu 2021; Zhao, Guo, and Lu 2022; Yang et al. 2022) to implement our method. Our model is trained and tested on two widely used datasets, namely COCO (Lin et al. 2014) and PASCAL VOC (Everingham et al. 2010). Following previous works, we report AP50 to evaluate performance on these two datasets.

Implementation Details. Our training process consists of two phases, base-training and novel-training. Specifically, for the base-training phase, we adopt Faster R-CNN (Ren et al. 2015) with FPN (Lin et al. 2017) as our base framework and ResNet-50 as our backbone. In line with previous works (Hsieh et al. 2019; Yang et al. 2022), the backbone is pre-trained on the reduced ImageNet (Deng et al. 2009), from which we remove all COCO-related classes to guarantee that the model does not foresee the novel-class objects. Following the general object detection paradigm, we train the base-class predictor on each group of base classes for 15 epochs.
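The prior-map computation of the PG module (the cosine relation map, the max over query positions, and the normalize-and-reweight step) can be sketched in NumPy as follows. For simplicity, this sketch assumes the prior is built and applied at the same spatial resolution, so the interpolation-based reshaping is omitted:

```python
import numpy as np

def prior_guided_feature(target_feat, query_feat, eps=1e-8):
    """Non-parametric prior sketch: cosine similarity between every target
    and query position, max-pooled over query positions, then min-max
    normalised and used to reweight the target feature element-wise."""
    C, Ht, Wt = target_feat.shape
    t = target_feat.reshape(C, -1)                              # (C, Ht*Wt)
    q = query_feat.reshape(C, -1)                               # (C, Hq*Wq)
    t = t / (np.linalg.norm(t, axis=0, keepdims=True) + eps)    # L2 normalise
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + eps)
    relation = t.T @ q                                          # cosine map R
    prior = relation.max(axis=1)                                # best match per position
    prior = (prior - prior.min()) / (prior.max() - prior.min() + eps)
    prior = prior.reshape(1, Ht, Wt)                            # spatial prior map P
    return target_feat * prior                                  # reweighted feature

rng = np.random.default_rng(0)
t = rng.standard_normal((16, 7, 7))   # hypothetical target feature
q = rng.standard_normal((16, 4, 4))   # hypothetical query feature
out = prior_guided_feature(t, q)
assert out.shape == t.shape
```

Because the whole computation is parameter-free, nothing here is updated during training, which matches the module's stated goal of preserving generalization.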
For the second phase, the parameters of the base predictor and the PG module are fixed, and the backbone weights of the novel predictor are initialized from the base predictor. The parameters of the novel predictor and the BcS module are then optimized for 8 epochs. For both phases, we adopt SGD as the optimizer with a batch size of 16 and a momentum of 0.9. The learning rate starts at 0.02 and decays by a factor of 10 at the 7th epoch.

Target-query Pairs. We apply the same strategy as (Hsieh et al. 2019; Yang et al. 2022) to generate the target-query image pairs. In the novel-training stage, given a target image from the datasets, we randomly choose one query patch containing an instance of any base class that appears in the target image. In the testing stage, for each novel class in a target image, the query images of the same class are shuffled with the target image ID as the random seed, and the first five query images are each paired with the target image. We evaluate the model on these image pairs and take the average of the scores as the stable result.

Figure 4: Comparison of qualitative one-shot object detection results for novel classes between BHRL (Yang et al. 2022) and our proposed approach. Each row from top to bottom denotes the query image, the detection results of BHRL and our model (yellow box), and the ground truth (green box). Groups (a-d) are from the COCO dataset, and group (e) is from the PASCAL VOC dataset.

Comparison with State-of-the-art Methods
Comparison with OSOD Methods. Our approach is mainly oriented towards complex scenarios involving multiple categories. The challenging COCO dataset, which typically contains diverse objects, is suitable for validating our ideas. Following the common practice (Yang et al.
2022), we equally divide the 80 classes into four groups, and in turn take three groups as base classes and one group as novel classes. The results are presented in Tab. 1: our model significantly outperforms the state-of-the-art BHRL by 1.4% AP50 on base classes and 1.9% AP50 on novel classes, demonstrating its remarkable ability to handle complex scenarios. For the PASCAL VOC dataset, the 20 classes are divided into 16 base and 4 novel classes (Yang et al. 2022). As shown in Tab. 2, the proposed method achieves 0.5% and 0.9% AP50 improvements over the best model BHRL on the base and novel classes, respectively. We believe that the gains mainly stem from the explicit suppression of base-class distractors, which is especially effective in scenes containing both base and novel classes. Since most images in the PASCAL VOC dataset exhibit relatively simple scenarios with no distractors from base classes, as shown in group (e) of Fig. 4, the improvements are, as expected, not as significant as those on the COCO dataset.

Qualitative Results. To better understand our proposed model, we visualize the detection results in Fig. 4. In groups (a-c), the baseline method is easily distracted by base-class objects and tends to misclassify the distractors as objects of interest. In contrast, our model exhibits great superiority in distinguishing target objects from distractors, which is attributed to the base-class suppression module. Besides, crowded scenes and appearance variations are also challenges for the OSOD task. As shown in groups (d-e), the baseline method neglects some objects in crowded scenes, while our model successfully identifies multiple objects and accurately localizes them, despite significant variations in appearance, scale, and shape between the query and target images.

Comparison with FSOD Methods. Additionally, our model can be easily extended to few-shot settings.
When several query images are available, we simply take the average of multiple query features before interacting with the target features. We compare our BSPG with other advanced FSOD methods on the COCO dataset and strictly adopt the same protocol to ensure a fair comparison. Note that some FSOD algorithms (Xiao and Marlet 2020; Zhang et al. 2022a; Qiao et al. 2021) treat the task as a multi-classification and localization problem: they first train the model on abundant base data and then fine-tune it on limited novel data. The dependence on this fine-tuning process somewhat limits the application scenarios. In contrast, we treat the task as a two-classification (binary) and localization problem (Yang et al. 2022), similar to (Fan et al. 2020a; Chen et al. 2021), to better simulate real-world application. Our approach only uses the base-class data for training and directly predicts novel-class objects guided by a query image, without fine-tuning. The experiments are implemented with the same data split as in (Xiao and Marlet 2020; Zhang et al. 2022a; Qiao et al. 2021; Fan et al. 2020b,a; Chen et al. 2021), where the 20 categories overlapping with PASCAL VOC are treated as novel classes. As shown in Tab. 3, the proposed approach achieves state-of-the-art results among all settings and surpasses the previous best competitor DAnA (Chen et al. 2021) by 3.5%, 4.6%, and 5.0% AP50 under the 1-shot, 3-shot, and 5-shot settings, respectively, validating the superiority of our approach. The consistent gains in both the OSOD and FSOD fields indicate that our model possesses a superior detection capability for identifying novel objects, benefiting from the effective base-class suppression and unbiased prior guidance. Thus, resolving the model bias towards base classes is a promising direction worth exploring.

Method                            | Category of classification | 1-shot (AP / AP50 / AP75) | 3-shot (AP / AP50 / AP75) | 5-shot (AP / AP50 / AP75)
FSDetView (Xiao and Marlet 2020)  | Multi-classification | 4.5 / 12.4 / 2.2   | 7.2 / 18.7 / 3.7    | 10.7 / 24.5 / 6.7
Meta-DETR (Zhang et al. 2022a)    | Multi-classification | 7.5 / 12.5 / 7.7   | 13.5 / 21.7 / 14.0  | 15.4 / 25.0 / 15.8
DeFRCN (Qiao et al. 2021)         | Multi-classification | 9.3 / - / -        | 14.8 / - / -        | 16.1 / - / -
FGN† (Fan et al. 2020b)           | Two-classification   | 8.0 / 17.3 / 6.9   | 10.5 / 22.5 / 8.8   | 10.9 / 24.0 / 9.0
Attention RPN† (Fan et al. 2020a) | Two-classification   | 8.7 / 19.8 / 7.0   | 10.1 / 23.0 / 8.2   | 10.6 / 24.4 / 8.3
DAnA (Chen et al. 2021)           | Two-classification   | 11.9 / 25.6 / 10.4 | 14.0 / 28.9 / 12.3  | 14.4 / 30.4 / 13.0
Ours                              | Two-classification   | 15.5 / 29.1 / 14.8 | 18.3 / 33.5 / 17.9  | 19.1 / 35.4 / 18.2
Table 3: Performance comparison with FSOD methods on the COCO dataset for novel classes in terms of AP, AP50 and AP75 (%). † represents the results reported in DAnA (Chen et al. 2021).

BcS | PG | AP50
-   | -  | 28.5
✓   | -  | 30.0
-   | ✓  | 29.4
✓   | ✓  | 30.7
Table 4: Ablation study for each module in our model.

λ1  | λ2  | AP50
1   | 0   | 27.8
0.5 | 0.5 | 30.7
0   | 1   | 29.9
1   | 1   | 30.1
Table 5: Ablation study for different values of λ1 and λ2 in the loss function.

Ablation Study
We conduct a series of ablation studies on split-2 of the COCO dataset for novel classes under the 1-shot setting, following previous works (Hsieh et al. 2019; Yang et al. 2022).

Impact of Each Module. Tab. 4 shows the impact of the proposed Base-class Suppression (BcS) and Prior Guidance (PG) modules on overall performance. Compared with the baseline, the BcS module brings a decent performance gain of 1.5% AP50, demonstrating its effectiveness in suppressing false detections on base-class objects. The comparison of rows 1 and 3 shows that the PG module contributes a 0.9% AP50 improvement over the baseline. By combining all the modules, we achieve the best results and exceed the baseline by 2.2% AP50, providing convincing proof that our proposed modules indeed enhance the ability to detect novel objects.
Specifically, the BcS module can effectively suppress the distractors to explicitly resolve the model bias towards base classes, while the non-parametric PG module generates unbiased prior knowledge to implicitly mitigate the bias problem.

Impact of Parameters λ1 and λ2 in the Loss Function. In the loss function defined by Eq. (15), λ1 and λ2 are the weights assigned to $L_N^{RP}$ and $L_F^{RP}$, respectively. We investigate their impact on the final performance in Tab. 5. The comparison of rows 1 and 4 reveals the indispensable role of supervision on the final results. Moreover, imposing supervision on the intermediate results predicted by the novel predictor facilitates further refinement and boosts the final performance. Based on our experiments, we conclude that when λ1 and λ2 are both set to 0.5, our model yields the best performance.

Impact of the BcS Module on False Predictions. The BcS module is a core component of our model, serving to eliminate false positives (FPs) and explicitly address the bias problem. To further analyze its impact, we report the number of FPs with respect to base classes both before and after applying the BcS module in Tab. 6. Precision, formulated as TP/(TP + FP), is also used to assess the performance concerning the suppression of FPs. Note that these experiments are conducted on split-2 of the COCO dataset, which contains 3309 test images. The results clearly indicate that the number of FPs is substantially reduced and the precision is remarkably improved after applying the BcS module, verifying its effectiveness.

Method     | IoU>0.5, score>0.5: FPs, Precision (%) | IoU>0.75, score>0.75: FPs, Precision (%)
Before BcS | 3303, 41.6                             | 725, 50.0
After BcS  | 1894, 52.6                             | 413, 66.8
Table 6: The number of FPs with respect to base classes and precision before and after the BcS module, where IoU denotes the intersection over union between the final predicted results and the ground truth, and the score denotes the predicted score.
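As a minimal sketch of the suppression rule analyzed above, the thresholding of Eq. (10) can be written directly in NumPy; the proposal scores below are toy values, not model outputs:

```python
import numpy as np

def base_class_score(y_base, alpha):
    """Eq. (10) sketch: per proposal, keep the highest base-class probability
    only if it exceeds the threshold alpha; otherwise output 0."""
    # y_base: (K, 1 + S) scores, column 0 assumed to be the background class
    top = y_base[:, 1:].max(axis=1)      # best base-class score per proposal
    return np.where(top > alpha, top, 0.0)

y = np.array([[0.1, 0.8, 0.1],    # proposal confidently matching a base class
              [0.7, 0.2, 0.1]])   # background-dominated proposal
b = base_class_score(y, alpha=0.3)
assert b[0] == 0.8 and b[1] == 0.0
```

A non-zero B then raises the background score and lowers the foreground score of that proposal through the learnable weights of Eqs. (12)-(13), which is exactly the suppression behavior measured in Tab. 6.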
Conclusion
In this paper, targeting the phenomenon of model bias towards base classes and the resulting generalization degradation on novel classes, we rethink one-shot object detection from a new perspective and propose a BSPG network to achieve bias-free OSOD. We design a base-class predictor and a base-class suppression module to explicitly recognize and suppress base-class objects. Furthermore, the PG module is proposed to generate a class-agnostic prior map with unbiased semantic hints to guide the detection procedure and enhance overall performance. Extensive experiments demonstrate that our approach reaches new state-of-the-art performance under all settings. We believe that our work offers valuable insights into alleviating the bias problem in the OSOD field and can inspire future research in this area.

Acknowledgments
This work was supported in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LGF20F010006, and in part by the National Natural Science Foundation of China under Grant U21B2029 and Grant U21A20456.

References
Chen, D.-J.; Hsieh, H.-Y.; and Liu, T.-L. 2021. Adaptive Image Transformer for One-Shot Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12247–12256.
Chen, T.-I.; Liu, Y.-C.; Su, H.-T.; Chang, Y.-C.; Lin, Y.-H.; Yeh, J.-F.; Chen, W.-C.; and Hsu, W. 2021. Dual-Awareness Attention for Few-Shot Object Detection. IEEE Transactions on Multimedia, 1–1.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255. IEEE.
Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; and Tian, Q. 2019. CenterNet: Keypoint Triplets for Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
Everingham, M.; Van Gool, L.; Williams, C. K.; Winn, J.; and Zisserman, A. 2010. The PASCAL Visual Object Classes (VOC) Challenge. International Journal of Computer Vision, 88(2): 303–338.
Fan, Q.; Zhuo, W.; Tang, C.-K.; and Tai, Y.-W. 2020a. Few-Shot Object Detection With Attention-RPN and Multi-Relation Detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Fan, Z.; Ma, Y.; Li, Z.; and Sun, J. 2021. Generalized Few-Shot Object Detection Without Forgetting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4527–4536.
Fan, Z.; Yu, J.-G.; Liang, Z.; Ou, J.; Gao, C.; Xia, G.-S.; and Li, Y. 2020b. FGN: Fully Guided Network for Few-Shot Instance Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Girshick, R. 2015. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Han, G.; Huang, S.; Ma, J.; He, Y.; and Chang, S.-F. 2022. Meta Faster R-CNN: Towards accurate few-shot object detection with attentive feature alignment. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 780–789.
He, K.; Gkioxari, G.; Dollár, P.; and Girshick, R. 2017. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
Hsieh, T.-I.; Lo, Y.-C.; Chen, H.-T.; and Liu, T.-L. 2019. One-Shot Object Detection with Co-Attention and Co-Excitation. In Advances in Neural Information Processing Systems, volume 32.
Lang, C.; Cheng, G.; Tu, B.; and Han, J. 2022. Learning What Not To Segment: A New Perspective on Few-Shot Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8057–8067.
Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; and Belongie, S. 2017. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2117–2125.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision, 740–755. Springer.
Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; and Berg, A. C. 2016. SSD: Single Shot MultiBox Detector. In European Conference on Computer Vision, 21–37. Springer.
Michaelis, C.; Ustyuzhaninov, I.; Bethge, M.; and Ecker, A. S. 2018. One-shot instance segmentation. arXiv preprint arXiv:1811.11507.
Qiao, L.; Zhao, Y.; Li, Z.; Qiu, X.; Wu, J.; and Zhang, C. 2021. DeFRCN: Decoupled Faster R-CNN for Few-Shot Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 8681–8690.
Redmon, J.; Divvala, S.; Girshick, R.; and Farhadi, A. 2016. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Redmon, J.; and Farhadi, A. 2017. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Advances in Neural Information Processing Systems, volume 28.
Sun, B.; Li, B.; Cai, S.; Yuan, Y.; and Zhang, C. 2021. FSCE: Few-Shot Object Detection via Contrastive Proposal Encoding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 7352–7362.
Sung, F.; Yang, Y.; Zhang, L.; Xiang, T.; Torr, P. H.; and Hospedales, T. M. 2018. Learning to compare: Relation network for few-shot learning.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1199–1208.
Tian, Z.; Zhao, H.; Shu, M.; Yang, Z.; Li, R.; and Jia, J. 2022. Prior Guided Feature Enrichment Network for Few-Shot Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(2): 1050–1065.
Vinyals, O.; Blundell, C.; Lillicrap, T.; Wierstra, D.; et al. 2016. Matching networks for one shot learning. Advances in Neural Information Processing Systems, 29: 3630–3638.
Xiao, Y.; and Marlet, R. 2020. Few-shot object detection and viewpoint estimation for objects in the wild. In European Conference on Computer Vision, 192–210. Springer.
Yan, X.; Chen, Z.; Xu, A.; Wang, X.; Liang, X.; and Lin, L. 2019. Meta R-CNN: Towards General Solver for Instance-Level Low-Shot Learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
Yang, H.; Cai, S.; Sheng, H.; Deng, B.; Huang, J.; Hua, X.-S.; Tang, Y.; and Zhang, Y. 2022. Balanced and hierarchical relation learning for one-shot object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7591–7600.
Yang, H.; Lin, Y.; Zhang, H.; Zhang, Y.; and Xu, B. 2021. Towards improving classification power for one-shot object detection. Neurocomputing, 455: 390–400.
Zhang, G.; Luo, Z.; Cui, K.; Lu, S.; and Xing, E. P. 2022a. Meta-DETR: Image-Level Few-Shot Detection with Inter-Class Correlation Exploitation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–12.
Zhang, W.; Dong, C.; Zhang, J.; Shan, H.; and Liu, E. 2022b. Adaptive context- and scale-aware aggregation with feature alignment for one-shot object detection. Neurocomputing, 514: 216–230.
Zhao, Y.; Guo, X.; and Lu, Y. 2022. Semantic-Aligned Fusion Transformer for One-Shot Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 7601–7611.
HEAP: Unsupervised Object Discovery and Localization with Contrastive Grouping
Xin Zhang1, Jinheng Xie1, Yuan Yuan2, Michael Bi Mi2, Robby T. Tan1
1National University of Singapore 2Huawei International Pte Ltd
{x.zhang, jinheng}@u.nus.edu, yuanyuan10@huawei.com, michaelbimi@yahoo.com, robby.tan@nus.edu.sg
Abstract
Unsupervised object discovery and localization aims to detect or segment objects in an image without any supervision. Recent efforts have demonstrated a notable potential to identify salient foreground objects by utilizing self-supervised transformer features. However, their scope builds only upon patch-level features within an image, neglecting region/image-level and cross-image relationships at a broader scale. Moreover, these methods cannot differentiate various semantics from multiple instances. To address these problems, we introduce the Hierarchical mErging framework via contrAstive grouPing (HEAP). Specifically, a novel lightweight head with a cross-attention mechanism is designed to adaptively group intra-image patches into semantically coherent regions based on correlations among self-supervised features. Further, to ensure distinguishability among various regions, we introduce a region-level contrastive clustering loss to pull closer similar regions across images. An image-level contrastive loss is also introduced to push foreground and background representations apart, with which foreground objects and background are accordingly discovered. HEAP facilitates efficient hierarchical image decomposition, which contributes to more accurate object discovery while also enabling differentiation among objects of various classes. Extensive experimental results on semantic segmentation retrieval, unsupervised object discovery, and saliency detection tasks demonstrate that HEAP achieves state-of-the-art performance.
Introduction
Unsupervised object discovery and localization aims to detect or segment salient objects in images without any human supervision (Vo, Pérez, and Ponce 2020). It is a fundamental problem since it provides a valuable source of weak supervision beneficial for various downstream tasks (Zhang and Chen 2023; Jin, Yang, and Tan 2022), such as object detection (Shin, Albanie, and Xie 2022; Xie et al. 2021b) and semantic segmentation (Xie et al. 2022b; Li et al. 2023; Xie et al. 2022a), etc. Due to the absence of prior knowledge regarding visual characteristics or expected class membership, localizing possible objects within a scene presents significant challenges.

[Figure 1: (a)-(b): Existing methods, such as FOUND (Siméoni et al. 2023) and TokenCut (Wang et al. 2022b), consider patch-level similarities for each image separately. (a)-(c): HEAP jointly explores hierarchical supervisions (i.e., patch/region/image-level) across images, enabling more accurate object discovery and discrimination among objects of various classes. Note that colors only represent different objects, not specific classes.]

Early efforts (Zitnick and Dollár 2014; Uijlings et al. 2013; Vo et al. 2019, 2021) attempted to extract an excessive number of candidate region proposals from the entire image set, which is computationally prohibitive to scale to large datasets. More recent works (Melas-Kyriazi et al. 2022; Wang et al. 2022b) turn to leveraging self-supervised models such as DINO (Caron et al. 2021), which show great potential for object segmentation. (Melas-Kyriazi et al. 2022; Wang et al.
2022b) construct a similarity graph for each image separately and exploit spectral clustering to localize objects. (Shin, Albanie, and Xie 2022; Wang et al. 2022a, 2023) further utilize the identified objects as pseudo masks to train a detection/segmentation model. (Siméoni et al. 2021, 2023) first identify foreground/background seeds from the self-attention map and then expand them to cover the whole object. While yielding promising outcomes, existing methods treat the problem as binary patch-wise classification, ignoring semantic relationships between objects. The investigated similarity is restricted within a single image and among individual patches, which might not adequately encapsulate the holistic representation of an entire object (Fig. 1 (a)-(b)). Incorporating feature relationships at a broader scale (e.g., region/image-level) across images would be beneficial. To this end, we propose a novel Hierarchical mErging framework via contrAstive grouPing (HEAP) to gradually establish patch-level, region-level, and image-level ties. The core idea is that an image can be decomposed into semantically consistent regions (e.g., sky, land, and bird), and objects are typically identified as salient foreground regions (Fig. 1 (a)-(c)). It has been shown that visual features extracted by recent self-supervised models highlight salient foreground objects while remaining semantically discriminative among various objects (Caron et al. 2021). This motivates us to develop a lightweight head that takes in self-supervised patch features. The head then undertakes two learning tasks for object discovery: (1) grouping patches to form semantically coherent regions; (2) discriminating foreground objects from background regions. While the concept of parsing images through grouping has been explored in prior representation learning studies (Wen et al.
2022), it often requires learning a huge number of group prototypes that represent various concepts shared across the whole dataset. Such an approach has proven less flexible and hard to scale. Instead, we introduce a set of learnable group tokens, which adaptively interact with patch embeddings and become image-specific cluster centers for grouping patches. As such, we do not need to estimate the number of classes that a training set may contain, but only the maximum number of regions that an image can be partitioned into. This significantly alleviates the associated complexity. It should also be pointed out that although GroupViT (Xu et al. 2022) takes a similar idea of leveraging group tokens in designing a segmentation model, it has to rely on text supervision for training the model from scratch. Conversely, HEAP is built upon a pre-trained encoder and can be efficiently trained without any external sources of supervision. Since the main purpose of this work is to design a universal head that can manipulate any pre-trained features, the second-stage self-training commonly employed in some previous studies (Shin, Albanie, and Xie 2022; Wang et al. 2022a, 2023) is not considered. To achieve accurate grouping and foreground-background disentanglement, we introduce three unsupervised losses, i.e., intra-image grouping, foreground-background disentanglement, and inter-image clustering losses. The intra-image grouping loss enhances discrimination within each image by encouraging the grouping of similar patches. Consequently, semantically coherent content is grouped into distinct regions. This by nature helps distinguish objects of multiple classes within a single image, which cannot be handled by previous methods. After grouping, patch embeddings with the same group token are merged to create region-level features. These region-level features are then aggregated to determine foreground objects from background regions.
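Concretely, the merge step described above boils down to assigning each patch to its most similar group token and averaging the members of each group. A minimal numpy sketch (the hard argmax assignment and all names here are our illustrative simplification, not the paper's exact procedure):

```python
import numpy as np

def merge_patches(P, g):
    """Assign each patch embedding to its most similar group token and
    average the members of each group into a region-level feature."""
    sim = P @ g.T                        # [N, M] patch-to-token similarity
    assign = sim.argmax(axis=1)          # hard assignment per patch
    regions = np.stack([
        P[assign == j].mean(axis=0) if (assign == j).any()
        else np.zeros(P.shape[1])        # empty group -> zero vector
        for j in range(len(g))
    ])
    return regions, assign

P = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])  # 3 patches
g = np.array([[1.0, 0.0], [0.0, 1.0]])              # 2 group tokens
regions, assign = merge_patches(P, g)
# → assign = [0, 0, 1]; regions[0] is the mean of the first two patches
```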
Our intuition is that foreground objects across images share class similarities, and that background regions are more similar to backgrounds in other images than to foreground objects (Xie et al. 2022b; Siméoni et al. 2021). To achieve this, we propose a cross-image foreground-background contrastive loss to differentiate foreground objects from background regions. An additional inter-image clustering loss is also introduced to promote consistent grouping across different images by encouraging patches with similar semantics to be grouped together across the dataset. Following FOUND (Siméoni et al. 2023), we extensively evaluate the proposed HEAP on semantic segmentation retrieval on VOC12 (Everingham et al. 2012), unsupervised saliency detection on DUT-OMRON (Yang et al. 2013), DUTS-TE (Wang et al. 2017), and ECSSD (Shi et al. 2015), and unsupervised object discovery on VOC07 & VOC12 (Everingham et al. 2007, 2012) and COCO20k (Lin et al. 2014). HEAP demonstrates the best performance while keeping the training cost low. Main contributions are:
• We propose a hierarchical contrastive grouping framework, HEAP, for unsupervised object discovery and localization. HEAP explores representation learning at multiple levels, which enables more accurate discovery and has the capacity to distinguish objects of multiple classes in a single image.
• By optimizing the proposed head with the multi-level losses, this paper provides an efficient way to parse images hierarchically with no supervision.
• Extensive experimental results on semantic segmentation retrieval, unsupervised object discovery, and saliency detection tasks demonstrate that HEAP achieves state-of-the-art performance.
Related Work
Self-Supervised Methods Self-supervised methods have emerged as powerful approaches for leveraging unlabeled data to learn meaningful representations (He et al. 2020; Tian et al. 2020).
Recently, there has been a surge of research focused on vision transformer (ViT)-based self-supervised approaches (Caron et al. 2021; He et al. 2022; Zhou et al. 2022). MoCo-v3 (Chen, Xie, and He 2021) extends MoCo (He et al. 2020) to incorporate ViTs, demonstrating effective self-supervised learning of ViTs on large-scale datasets. DINO (Caron et al. 2021) employs instance discrimination in a teacher-student setup. The teacher network assigns pseudo-labels to the student network by comparing image patches, facilitating the learning of meaningful features. iBOT (Zhou et al. 2022) incorporates an online tokenizer for masked prediction, utilizing self-distillation on masked patch tokens with the teacher network as the tokenizer. MAE (He et al. 2022) employs a high masking ratio on input images and an asymmetric encoder-decoder architecture for pixel-wise reconstruction.
Vision Transformer Vision Transformers (ViTs) have gained significant attention in recent years as a powerful alternative to convolutional neural networks (CNNs). The seminal work of (Dosovitskiy et al. 2021) introduced the ViT architecture, demonstrating its efficacy in image classification by leveraging self-attention mechanisms. Since then, numerous studies have explored and extended the vision transformer framework for image classification (Liu et al. 2021; Touvron et al. 2021), object detection (Li et al. 2022b; Carion et al. 2020; Zhang et al. 2021, 2022; Li et al. 2022a), and segmentation (Xie et al. 2021a; Strudel et al. 2021; Cheng et al. 2022). GroupViT (Xu et al. 2022) tackles semantic segmentation with text guidance. It breaks down images into meaningful regions, then employs contrastive learning between text and fine-grained region embeddings. Our approach also uses a grouping mechanism, but unlike GroupViT, we don't rely on text supervision.
We exclusively use contrastive learning with images, enabling the discovery of meaningful regions and objects in an unsupervised manner.
Unsupervised Object Discovery and Localization In the field of unsupervised object discovery and localization, objects are usually detected using binary masks (Li, Sun, and Yu 2015; Vicente, Rother, and Kolmogorov 2011; Joulin, Bach, and Ponce 2012) or bounding boxes (Zhu et al. 2014). Early methods used saliency or intra- and inter-image similarity for region proposals (Zitnick and Dollár 2014; Uijlings et al. 2013; Wei et al. 2019; Vo et al. 2019, 2021), employing combinatorial optimization for object bounding box selection. However, their computational complexity hindered scalability to large datasets. Recent works use pre-trained ViT models to extract features and construct a weighted graph with patches as nodes and pairwise similarities as edges. LOST (Siméoni et al. 2021) incorporates a heuristic seed expansion strategy based on the graph. FOUND (Siméoni et al. 2023) proceeds in a similar way to LOST but instead expands background patches. Some works (Shin, Albanie, and Xie 2022; Melas-Kyriazi et al. 2022; Wang et al. 2022b) apply spectral clustering or normalized graph-cut to find highly connected patches as objects. These methods ignore the relationships among images, leading to less optimal results. To tackle this, we introduce a hierarchical framework that groups patches into semantically consistent regions for object discovery. This approach identifies distinct foreground objects with varied semantics across regions.
Proposed Method
Architecture As shown in Fig. 2, HEAP is built upon a frozen pre-trained encoder. It comprises a grouping block with multiple cross-attention layers and a linear aggregator. The cross-attention layers take trainable group tokens and patch embeddings as inputs. The resulting adapted group tokens are used to assign individual patches to each group.
Then region-level features are obtained by averaging the patch embeddings of each group, which are then input into the linear aggregator for foreground-background disentanglement. Given a batch of $n$ images $I_{1:n} = \{I_i\}_{i=1}^{n}$, where $I_i \in \mathbb{R}^{H \times W \times 3}$ and $H \times W$ denotes the spatial dimension, we first extract patch embeddings using a self-supervised pre-trained ViT. The image $I_i$ is divided into $N$ non-overlapping image patches with patch resolution $K \times K$, i.e., $N = HW/K^2$. The $N$ patches are represented as $N$ patch tokens, along with an additional class token CLS. We extract patch embeddings $P_{1:N} = \{P_i\}_{i=1}^{N}$, in which $P_i \in \mathbb{R}^{1 \times D}$, from the last self-attention layer of the ViT by concatenating multiple attention heads, resulting in a feature dimension $D$ for each patch embedding. Note that the CLS token is not used. As such, each image $I$ is represented as a collection of patch embeddings $\{P_i\}_{i=1}^{N}$. A set of $M$ learnable group tokens $g_{1:M} = \{g_i\}_{i=1}^{M}$ is initialized, in which $g_i \in \mathbb{R}^{1 \times D}$. Cross-attention is applied where group tokens serve as the query, and the concatenation of group tokens and patch embeddings serves as key and value. It involves interactions among group tokens and between group tokens and patch embeddings. The softmax-based attention mechanism inhibits one group token's response to others and fosters competition among group tokens for information aggregation. Let $P \in \mathbb{R}^{N \times D}$ denote all patch embeddings of an image and $g \in \mathbb{R}^{M \times D}$ denote all group tokens. The calculation of cross-attention follows the standard self-attention mechanism:

$$g = \mathrm{softmax}\!\left(\frac{gW_q\,([g; P]W_k)^{T}}{\sqrt{D}}\right)([g; P]W_v), \quad (1)$$

$$g = g + gW_o, \quad (2)$$

where $W_q$, $W_k$, $W_v$, $W_o$ are the projection matrices of the cross-attention layer. Through cross-attention, group tokens globally interact with other group tokens and patch embeddings. Consequently, the learned group tokens adaptively capture distinct semantic information. Then, each patch is assigned to a group based on similarity in the embedding space.
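A single-head numpy sketch of the cross-attention update in Eqs. (1)-(2); the actual model presumably stacks several such layers (and heads), and all names here are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def group_token_update(g, P, Wq, Wk, Wv, Wo):
    """One cross-attention step (Eqs. 1-2): group tokens query the
    concatenation [g; P] of group tokens and patch embeddings."""
    D = g.shape[1]
    kv = np.concatenate([g, P], axis=0)                   # [M+N, D]
    attn = softmax((g @ Wq) @ (kv @ Wk).T / np.sqrt(D))   # [M, M+N]
    g_new = attn @ (kv @ Wv)                              # Eq. 1
    return g_new + g_new @ Wo                             # Eq. 2

rng = np.random.default_rng(0)
M, N, D = 4, 9, 8
g, P = rng.normal(size=(M, D)), rng.normal(size=(N, D))
Wq, Wk, Wv, Wo = (rng.normal(size=(D, D)) * 0.1 for _ in range(4))
g_out = group_token_update(g, P, Wq, Wk, Wv, Wo)          # [M, D]
```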
We compute the assignment matrix $Z$ between the patch embeddings $\{P_i\}$ and the group tokens $\{g_j\}$ using Gumbel-Softmax:

$$Z_{i,j} = \frac{\exp(P_i g_j^{T} + \gamma_j)}{\sum_{k=1}^{M} \exp(P_i g_k^{T} + \gamma_k)}, \quad (3)$$

where $\gamma_j, \gamma_k$ are samples randomly drawn from the Gumbel(0, 1) distribution. Then the assignment of patches to groups is determined by the maximum similarity:

$$a_i = \arg\max_{j} (Z_{i,j}). \quad (4)$$

The region-level embeddings $G_{1:M} = \{G_j\}_{j=1}^{M}$ are then calculated by averaging all patch embeddings belonging to a specific group:

$$G_j = \frac{\sum_{i}^{N} \mathbb{I}(a_i = j)\, P_i}{\sum_{i}^{N} \mathbb{I}(a_i = j)}, \quad (5)$$

where $\mathbb{I}$ is the indicator function. The aggregator $\phi(\cdot)$ is a linear layer that takes the region-level embedding $G$ and outputs the probability of a group belonging to the foreground.

[Figure 2: HEAP Overview. A pre-trained encoder (e.g., with DINO (Caron et al. 2021)) processes input images for patch embeddings. The cross-attention layers take in learnable group tokens and patch embeddings, and then adaptively aggregate image representations with each group token capturing distinct characteristics. Learned tokens cluster patches into regions based on embedding similarity. Patches belonging to the same regions are merged to form region-level embeddings, further aggregated for image-level foreground and background embeddings. HEAP is trained with three losses. Intra-image grouping: grouping patches based on similarities. Foreground-background contrasting: pushing foreground and background embeddings apart. Inter-image clustering: pulling similar regions closer, with the strength weighted by the similarity ranking. Note that $F^f_i$ represents the foreground embedding and $F^b_i$ represents the background embedding. $G_i$ can be $G^f_i$ or $G^b_i$, representing the region-level foreground or background embedding. We omit the process of obtaining $G^f_i$ and $G^b_i$ from region-level embeddings, which simplifies the illustration and does not affect understanding.]

Intra-Image Grouping HEAP takes an image as input and employs a region-splitting mechanism to partition it into multiple distinct regions, akin to superpixels. To achieve cohesive and perceptually meaningful grouping, we construct a fully connected undirected graph $\mathcal{G} = (V, E)$, where each node in $V$ represents a patch embedding $P_i$, and each patch is linked to all other patches by edges $E$ whose weights are similarity scores based on the cosine similarity of the two patch embeddings. As such, we build an affinity matrix $A \in \mathbb{R}^{N \times N}$ of the $N$ patches:

$$A_{ij} = \max(0, \cos(P_i, P_j)), \quad (6)$$

where $\cos$ calculates the cosine similarity.
Self-Distillation Loss Intuitively, we expect similar patches to be grouped together. Inspired by (Hamilton et al. 2022; Wang et al. 2022b), we introduce a self-distillation loss that constrains the assignment probability of patches to groups, $Z$, with the affinity matrix of patch embeddings. Specifically, we define an affinity matrix $\delta_{ij}$ of the assignment probability of each patch to groups as:

$$\delta_{ij} = \cos(Z_{i\cdot}, Z_{j\cdot}). \quad (7)$$

Then, the goal of intra-image grouping is to distill the patch embedding affinity matrix $A_{ij}$ into the assignment probability affinity matrix $\delta_{ij}$. Taking the form of modularity from the field of community detection (Newman and Girvan 2004), the self-distillation loss is as follows:

$$\mathcal{L}_{intra} = -\frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta_{ij}, \quad (8)$$

where $2m = \sum_{i,j} A_{ij}$ and $k_i = \sum_{j} A_{ij}$; $m$ is used for normalization, and $k_i$ represents the degree of node $i$. Note that HEAP performs class-agnostic clustering, where group tokens lack specific semantics.
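The modularity-style self-distillation of Eqs. (6)-(8) can be sketched as follows; we substitute a plain softmax for the Gumbel-Softmax of Eq. (3) to keep the example deterministic, and all names are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cos_matrix(X, Y):
    """Pairwise cosine similarities between rows of X and Y."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return Xn @ Yn.T

def intra_grouping_loss(P, g):
    """Distill the patch affinity A (Eq. 6) into the assignment
    affinity delta (Eq. 7) via the modularity objective (Eq. 8)."""
    Z = softmax(P @ g.T)                    # soft patch-to-group assignment
    A = np.clip(cos_matrix(P, P), 0, None)  # Eq. 6
    delta = cos_matrix(Z, Z)                # Eq. 7
    k = A.sum(axis=1)                       # node degrees
    two_m = A.sum()
    B = A - np.outer(k, k) / two_m          # modularity matrix
    return -(B * delta).sum() / two_m       # Eq. 8

rng = np.random.default_rng(0)
c1 = rng.normal(1.0, 0.05, (8, 8))          # two synthetic patch clusters
c2 = rng.normal(-1.0, 0.05, (8, 8))
P = np.vstack([c1, c2])
g = np.stack([c1.mean(0), c2.mean(0)])      # tokens aligned with the clusters
loss = intra_grouping_loss(P, g)            # well-aligned grouping -> loss < 0
```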
Therefore, the self-distillation loss is computed image-wise. To prevent dominant groups and ensure balanced assignments, we add an entropy regularization, maintaining reasonable average patch probabilities per group token. The final form of the self-distillation loss is:

$$\mathcal{L}_{intra} = -\frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta_{ij} + \lambda \left( \frac{1}{M} \sum_{j} \sum_{i} Z_{ij} \log \sum_{i} Z_{ij} \right), \quad (9)$$

where $\lambda$ is the weight of the entropy regularization.
Foreground-Background Disentanglement Foreground objects are detected under the assumption that foreground and background regions have distinct representations. This leads to foreground features being more similar to other foreground representations across images than to any background representations. Given the region-level embeddings $G$, the linear aggregator produces the probability of each region belonging to the foreground: $H = \sigma(\phi(G))$, where $\sigma$ is the sigmoid function and $H \in \mathbb{R}^{M \times 1}$. Based on $H$, we can compute the foreground and background representations, $F^f \in \mathbb{R}^{1 \times D}$ and $F^b \in \mathbb{R}^{1 \times D}$, as:

$$F^f = H^{T} \otimes G, \quad F^b = (1 - H^{T}) \otimes G, \quad (10)$$
We introduce a crossimage similarity measure to quantify region correlations. We aim to separately pull together similar foreground and background region representations. Specifically, for a given batch of n images, the grouping block produces n × M image regions. For each of them, the aggregator outputs the probabilities for them to be the foreground, resulting in H ∈RnM×1. We have the region-level foreground and background representations expressed as: H1:nM = σ(ϕ(G1:nM)), (12) where G1:nM are the region-level embeddings of n images, each image with M regions. H1:nM are the corresponding probabilities to be foreground. Then, we can have regionlevel foreground and background representations written as: Gf i = Hi ⊙Gi, Gb i = (1 −Hi) ⊙Gi, (13) where ⊙represents element-wise multiplication and i ∈ {1, ..., nM}. Subsequently, we introduce a ranking-based inter-image clustering loss that draws closer the similar region-level foreground/background representations according to the pairwise similarities. For each pair of regionlevel foreground/background representations, the similarity is defined as: sf i,j = max(0, cos(Gf i , Gf j )), sb i,j = max(0, cos(Gb i, Gb j)), where i, j ∈ {1, ..., nM}. The weight wij is determined by the ranking on all possible pairs, i.e., j ∈{1, ..., nM}\{i}, and ranges from 0 to 1: wf i,j = exp(−α · rank(sf i,j)), wb i,j = exp(−α · rank(sb i,j)), where α is a hyper-parameter controlling the weights. The inter-image clustering loss is defined as: Linter = Linter_fg+ Linter_bg, where: Linter_fg = − 1 nM(nM −1) X i,j,i̸=j(wf i,j · log(sf i,j)), Linter_bg = − 1 nM(nM −1) X i,j,i̸=j(wb i,j · log(sb i,j)). The overall objective L is formulated as the summation of the three proposed losses: L = Lintra + Lneg + Linter. (14) Experiments Implementation details are provided in the supplementary material. Dense CRF (dCRF) (Krähenbühl and Koltun 2011) is utilized to refine the boundary of objects. 
Semantic Segmentation Retrieval We evaluate HEAP on unsupervised semantic segmentation retrieval on VOC12 (Everingham et al. 2012). We follow the common protocol in MaskContrast (Van Gansbeke et al. 2021) and compare with FreeSOLO (Wang et al. 2022a), C2AM (Xie et al. 2022b), TokenCut (Wang et al. 2022b), SelfMask (Shin, Albanie, and Xie 2022), and FOUND (Siméoni et al. 2023). We consider both single-object and multiple-object retrieval setups. For single-object retrieval, all foreground objects are merged into one object, while for multiple-object retrieval, each foreground object is treated as a single object separately. Note that we also discard foreground objects smaller than 1% of the input image size. We build the feature bank on the train split and find the nearest neighbors of each object in the val set by retrieving from the feature bank and assigning them the corresponding ground-truth labels. We obtain the object-wise binary mask, followed by aggregating patch-level features within the masked region to generate object-level features. Mean Intersection-over-Union (mIoU) is used for evaluation. Table 1 summarizes the results. We borrow most results of existing methods from (Siméoni et al. 2023). Both cases where 7 classes (bus, airplane, car, person, cat, cow, and bottle) or all 21 classes of VOC12 are considered, following (Van Gansbeke et al. 2021). It can be observed that HEAP consistently outperforms previous methods by a large margin on both single-object and multiple-object retrieval tasks. Notably, HEAP surpasses FOUND by 3.3% for multiple-object retrieval when all 21 classes are considered. This indicates the superiority of HEAP in discovering objects of multiple categories in a single image. We also show and compare visualizations of the predicted saliency masks by HEAP, FOUND, and TokenCut in Fig. 3. Note that post-processings (bilateral solver or dense CRF) are applied.
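The retrieval protocol described above amounts to nearest-neighbor lookup of object-level features against a labeled feature bank. A minimal sketch with cosine similarity (function and variable names are illustrative):

```python
import numpy as np

def retrieve_labels(bank_feats, bank_labels, query_feats):
    """Assign each query object the ground-truth label of its most
    similar entry in the train-split feature bank."""
    def l2n(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = l2n(query_feats) @ l2n(bank_feats).T   # [Q, B] cosine sims
    return bank_labels[sims.argmax(axis=1)]

bank = np.array([[1.0, 0.0], [0.0, 1.0]])         # two object-level features
labels = np.array([3, 7])                          # their class labels
pred = retrieve_labels(bank, labels, np.array([[0.9, 0.1], [0.2, 0.8]]))
# → array([3, 7])
```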
It can be observed that TokenCut is prone to focusing on a wrong region in an image (the 1st and 2nd rows) and missing some objects (the 3rd row). FOUND tends to under-segment objects of small size (the 2nd and 3rd rows). In contrast, HEAP yields more accurate masks. Moreover, objects of different classes are also successfully distinguished.

[Figure 3: Qualitative results of unsupervised object discovery and localization obtained by HEAP, FOUND (Siméoni et al. 2023), and TokenCut (Wang et al. 2022b) on VOC12. Panels: (a) Raw image, (b) Ground truth, (c) HEAP (Ours), (d) FOUND, (e) TokenCut. Our method can distinguish objects of multiple classes in a single image while recent works cannot. Note that colors only represent different objects in an image, not specific classes.]

Table 1: Semantic segmentation retrieval on VOC12 (mIoU). Both single-object (2nd block) and multiple-object (3rd block) retrievals are considered. MaskContrast uses its own trained feature extractor. Best results in the original are bold; second best underlined.

Method                      7cls   21cls
MaskContrast (unsup. sal.)  53.4   43.3
-- Single saliency mask --
FreeSOLO                    19.7   17.0
FreeSOLO (largest inst.)    20.6   20.6
C2AM (ResNet50)             45.4   35.4
TokenCut (ViT-S/8)          46.7   37.6
TokenCut (ViT-S/16)         49.7   39.9
SelfMask                    56.6   40.7
FOUND (ViT-S/8)             56.1   42.9
HEAP (ours)                 59.1   45.6
-- Multiple saliency masks --
FreeSOLO                    23.9   25.7
SelfMask                    56.2   40.8
FOUND (ViT-S/8)             58.0   42.7
HEAP (ours)                 59.1   46.0

Unsupervised Object Discovery We follow the common practice of (Siméoni et al. 2023; Wang et al. 2022b) and evaluate HEAP on VOC07 (Everingham et al. 2007), VOC12 (Everingham et al. 2012), and COCO20K (Lin et al. 2014; Vo, Pérez, and Ponce 2020). The CorLoc metric is used to report results; it measures the proportion of images on which at least one object is correctly localized. Calculation of CorLoc is detailed in the supplementary materials. Table 2 summarizes the quantitative results. The results of most existing methods are from (Siméoni et al. 2023). We list comparisons with more recent self-supervised representation-based methods, e.g., LOST (Siméoni et al. 2021), TokenCut, SelfMask, and FOUND. The full table including more baselines is provided in the supplementary materials. We observe that HEAP surpasses all existing methods on VOC07 and VOC12. Although DINOSAUR obtains higher performance on COCO20K, its training is much heavier, requiring 8 times more GPU memory and hundreds of times more training steps. Conversely, we aim to introduce an efficient and lightweight head that can also generalize to other pre-trained backbones effortlessly.

Table 2: Comparison of HEAP to state-of-the-art object discovery methods on VOC07, VOC12, and COCO20K using the CorLoc metric. * refers to a class-agnostic detector trained with unsupervised "pseudo-boxes" labels in a second stage. 'ViT' denotes different ViT architectures pre-trained with DINO. Best results in the original are bold; second best underlined. We provide the full table in the supplementary materials.

Method                  VOC07   VOC12   COCO20K
LOD                     53.6    55.1    48.5
DINO-seg (ViT-S/16)     45.8    46.2    42.0
LOST (ViT-S/8)          55.5    57.0    49.5
LOST (ViT-S/16)         61.9    64.0    50.7
DSS (ViT-S/16)          62.7    66.4    52.2
TokenCut (ViT-S/8)      67.3    71.6    60.7
TokenCut (ViT-S/16)     68.8    72.1    58.8
FreeSolo                44.0    49.7    35.2
LOST* (ViT-S/16)        65.7    70.4    57.5
TokenCut* (ViT-S/16)    71.4    75.3    62.6
SelfMask                72.3    75.3    62.7
DINOSAUR                —       70.4    67.2
FOUND (ViT-S/8)         72.5    76.1    62.9
HEAP (ours) (ViT-S/8)   73.2    77.1    63.4

Unsupervised Saliency Detection We evaluate HEAP on three commonly used datasets for unsupervised saliency detection: ECSSD (Shi et al. 2015), DUTS-TE (Wang et al. 2017), and DUT-OMRON (Yang et al. 2013).
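The evaluation metrics used above and in the saliency experiments can be sketched as follows; note that the reported "max Fβ" additionally sweeps the binarization threshold, which this minimal version omits, and all names are illustrative:

```python
import numpy as np

def iou_box(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def corloc(preds, gts, thr=0.5):
    """CorLoc: fraction of images whose predicted box overlaps at
    least one ground-truth box with IoU >= thr."""
    hits = sum(any(iou_box(p, g) >= thr for g in gt)
               for p, gt in zip(preds, gts))
    return hits / len(preds)

def mask_metrics(pred, gt, beta2=0.3):
    """Pixel accuracy, IoU, and F_beta (beta^2 = 0.3) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = (pred & gt).sum()
    acc = (pred == gt).mean()
    iou = inter / max((pred | gt).sum(), 1)
    prec, rec = inter / max(pred.sum(), 1), inter / max(gt.sum(), 1)
    fbeta = (1 + beta2) * prec * rec / max(beta2 * prec + rec, 1e-8)
    return acc, iou, fbeta

score = corloc([(0, 0, 10, 10), (0, 0, 10, 10)],
               [[(0, 0, 10, 10)], [(20, 20, 30, 30)]])
# → 0.5 (first prediction hits, second misses)

gt = np.zeros((4, 4)); gt[:2] = 1       # ground-truth mask: top two rows
pred = np.zeros((4, 4)); pred[:2, :2] = 1
acc, iou, fb = mask_metrics(pred, gt)   # acc = 0.75, iou = 0.5
```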
Following (Siméoni et al. 2023), intersection-over-union (IoU), pixel accuracy (Acc), and the maximal Fβ score (max Fβ) with β² = 0.3 are used as metrics. Calculations are detailed in the supplementary materials. Similar to (Siméoni et al. 2023), the mask used for evaluation contains all identified objects, and dCRF also processes all connected components.

Table 3: Comparison of HEAP to state-of-the-art methods on unsupervised saliency detection on DUT-OMRON, DUTS-TE, and ECSSD. '+BS' refers to applying the bilateral solver post-processing on the generated masks; '+dCRF' refers to applying dense CRF. Best results in the original are bold; second best underlined. Columns per dataset are Acc, IoU, max Fβ.

Method                        | DUT-OMRON        | DUTS-TE          | ECSSD
DSS                           | —    .567 —      | —    .514 —      | —    .733 —
LOST (ViT-S/16)               | .797 .410 .473   | .871 .518 .611   | .895 .654 .758
SelfMask                      | .901 .582 —      | .923 .626 —      | .944 .781 —
TokenCut (ViT-S/16)           | .880 .533 .600   | .903 .576 .672   | .918 .712 .803
FOUND (ViT-S/8)               | .912 .578 .663   | .938 .645 .715   | .949 .807 .955
HEAP (ours) (ViT-S/8)         | .920 .596 .690   | .940 .644 .757   | .945 .811 .930
LOST (ViT-S/16) + BS          | .818 .489 .578   | .887 .572 .697   | .916 .723 .837
SelfMask + BS                 | .919 .655 —      | .933 .660 —      | .955 .818 —
TokenCut (ViT-S/16) + BS      | .897 .618 .697   | .914 .624 .755   | .934 .772 .874
FOUND (ViT-S/8) + BS          | .922 .613 .708   | .942 .663 .763   | .951 .813 .935
HEAP (ours) (ViT-S/8) + dCRF  | .929 .646 .724   | .949 .687 .777   | .962 .823 .948

Table 4: Effect of the number of group tokens M (left) and of the inter-image contrastive clustering loss weight α (right). Experiments are evaluated on unsupervised saliency detection on DUTS-TE.

Num.  Acc   IoU   max Fβ      α     Acc   IoU   max Fβ
3     .913  .604  .736        0.1   .949  .687  .777
8     .949  .687  .777        0.25  .925  .659  .756
15    .933  .654  .758        0.5   .931  .656  .762

L*_intra  ER  L_neg  L_inter  |  Acc   IoU   max Fβ
✓             ✓               |  .930  .642  .700
✓         ✓   ✓               |  .937  .662  .731
✓         ✓   ✓      ✓        |  .949  .687  .777

Table 5: The effect of each loss.
Experiments are evaluated on unsupervised saliency detection on DUTS-TE. Note that $\mathcal{L}^{*}_{intra}$ refers to $\mathcal{L}_{intra}$ without the entropy regularization, corresponding to Eq. 8; ER refers to the entropy regularization, corresponding to the second term in Eq. 9.
The results are shown in Table 3. The full table including more baselines is provided in the supplementary materials. Compared with LOST, SelfMask, TokenCut, and FOUND, HEAP achieves better or comparable performance on all three datasets, indicating that HEAP effectively discovers objects through grouping and foreground-background representation contrasting. Similar to those methods, our approach does not involve training segmentation decoders. The patch-level predictions result in coarse segmentation. We improve this by using dCRF to refine object boundaries and reduce false positives, thereby improving performance.
Ablation Studies
All ablation experiments are conducted on unsupervised saliency detection on DUTS-TE.
Effect of Each Loss We verify the effectiveness of each proposed loss. We start with the simplest version, which only implements the intra-image grouping loss and the foreground-background contrastive loss. The other losses are then added one by one. Results are shown in Table 5. We can observe the performance gain brought by each loss.
Effect of the Number of Group Tokens M determines the maximum number of regions (i.e., clusters) that an image can be divided into. The actual number of regions is adaptively determined for each image by an argmax operation. Theoretically, a small M results in coarse segmentation, while a too-large M may lead to over-clustering and segment objects into finer parts. As shown in the left part of Table 4, HEAP maintains good performance under a range of M and achieves the best results when M = 8.
Effect of α The parameter α controls the impact of neighboring samples in inter-image region-level clustering. A higher α prioritizes nearest neighbors, while a smaller α considers a broader range of neighbors.
From the right part of Table 4, we can observe that α = 0.1 results in the best performance. Since the region-level embedding space across images is much more diverse than that within a single image, it is natural to rely on more neighboring regions when clustering.

Conclusion
We propose HEAP, a novel approach that addresses unsupervised object discovery and localization. HEAP achieves this by hierarchically merging self-supervised local patterns to construct global representations of foreground objects and backgrounds. Through adaptive grouping and contrastive clustering, discriminative regions are formed, with which objects are identified by cross-image foreground-background contrasting. HEAP's hierarchical property benefits the discovery of more complete objects and the distinction of objects belonging to multiple categories. As a lightweight and general head, HEAP achieves state-of-the-art results on semantic segmentation retrieval, unsupervised object discovery and unsupervised saliency detection.

References
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, 213–229. Springer.
Caron, M.; Touvron, H.; Misra, I.; Jégou, H.; Mairal, J.; Bojanowski, P.; and Joulin, A. 2021. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9650–9660.
Chen, X.; Xie, S.; and He, K. 2021. An empirical study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9640–9649.
Cheng, B.; Misra, I.; Schwing, A. G.; Kirillov, A.; and Girdhar, R. 2022. Masked-attention mask transformer for universal image segmentation.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1290–1299.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR.
Everingham, M.; Van Gool, L.; Williams, C. K. I.; Winn, J.; and Zisserman, A. 2007. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html.
Everingham, M.; Van Gool, L.; Williams, C. K. I.; Winn, J.; and Zisserman, A. 2012. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascalnetwork.org/challenges/VOC/voc2012/workshop/index.html.
Hamilton, M.; Zhang, Z.; Hariharan, B.; Snavely, N.; and Freeman, W. T. 2022. Unsupervised Semantic Segmentation by Distilling Feature Correspondences. In International Conference on Learning Representations.
He, K.; Chen, X.; Xie, S.; Li, Y.; Dollár, P.; and Girshick, R. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16000–16009.
He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729–9738.
Jin, Y.; Yang, W.; and Tan, R. T. 2022. Unsupervised night image enhancement: When layer decomposition meets light-effects suppression. In European Conference on Computer Vision, 404–421. Springer.
Joulin, A.; Bach, F.; and Ponce, J. 2012. Multi-class cosegmentation. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, 542–549. IEEE.
Krähenbühl, P.; and Koltun, V. 2011. Efficient inference in fully connected CRFs with Gaussian edge potentials. Advances in Neural Information Processing Systems, 24.
Li, F.; Zhang, H.; Liu, S.; Zhang, L.; Ni, L. M.; Shum, H.-Y.; et al. 2022a. Mask DINO: Towards a unified transformer-based framework for object detection and segmentation. arXiv preprint arXiv:2206.02777.
Li, K.; Wang, Z.; Cheng, Z.; Yu, R.; Zhao, Y.; Song, G.; Liu, C.; Yuan, L.; and Chen, J. 2023. ACSeg: Adaptive Conceptualization for Unsupervised Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7162–7172.
Li, N.; Sun, B.; and Yu, J. 2015. A weighted sparse coding framework for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5216–5223.
Li, Y.; Mao, H.; Girshick, R.; and He, K. 2022b. Exploring plain vision transformer backbones for object detection. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IX, 280–296. Springer.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 740–755. Springer.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10012–10022.
Melas-Kyriazi, L.; Rupprecht, C.; Laina, I.; and Vedaldi, A. 2022. Deep spectral methods: A surprisingly strong baseline for unsupervised semantic segmentation and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8364–8375.
Newman, M. E.; and Girvan, M. 2004. Finding and evaluating community structure in networks. Physical Review E, 69(2): 026113.
Shi, J.; Yan, Q.; Xu, L.; and Jia, J. 2015. Hierarchical image saliency detection on extended CSSD.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(4): 717–729.
Shin, G.; Albanie, S.; and Xie, W. 2022. Unsupervised salient object detection with spectral cluster voting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3971–3980.
Siméoni, O.; Puy, G.; Vo, H. V.; Roburin, S.; Gidaris, S.; Bursuc, A.; Pérez, P.; Marlet, R.; and Ponce, J. 2021. Localizing Objects with Self-Supervised Transformers and no Labels. In BMVC–British Machine Vision Conference.
Siméoni, O.; Sekkat, C.; Puy, G.; Vobecky, A.; Zablocki, É.; and Pérez, P. 2023. Unsupervised Object Localization: Observing the Background to Discover Objects. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR.
Strudel, R.; Garcia, R.; Laptev, I.; and Schmid, C. 2021. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7262–7272.
Tian, Y.; Sun, C.; Poole, B.; Krishnan, D.; Schmid, C.; and Isola, P. 2020. What makes for good views for contrastive learning? Advances in Neural Information Processing Systems, 33: 6827–6839.
Touvron, H.; Cord, M.; Sablayrolles, A.; Synnaeve, G.; and Jégou, H. 2021. Going deeper with image transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 32–42.
Uijlings, J. R.; Van De Sande, K. E.; Gevers, T.; and Smeulders, A. W. 2013. Selective search for object recognition. International Journal of Computer Vision, 104: 154–171.
Van Gansbeke, W.; Vandenhende, S.; Georgoulis, S.; and Van Gool, L. 2021. Unsupervised semantic segmentation by contrasting object mask proposals. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10052–10062.
Vicente, S.; Rother, C.; and Kolmogorov, V. 2011. Object cosegmentation. In CVPR 2011, 2217–2224. IEEE.
Vo, H. V.; Bach, F.; Cho, M.; Han, K.; LeCun, Y.; Pérez, P.; and Ponce, J.
2019. Unsupervised image matching and object discovery as optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8287–8296.
Vo, H. V.; Pérez, P.; and Ponce, J. 2020. Toward unsupervised, multi-object discovery in large-scale image collections. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII 16, 779–795. Springer.
Vo, V. H.; Sizikova, E.; Schmid, C.; Pérez, P.; and Ponce, J. 2021. Large-scale unsupervised object discovery. Advances in Neural Information Processing Systems, 34: 16764–16778.
Wang, L.; Lu, H.; Wang, Y.; Feng, M.; Wang, D.; Yin, B.; and Ruan, X. 2017. Learning to detect salient objects with image-level supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 136–145.
Wang, X.; Girdhar, R.; Yu, S. X.; and Misra, I. 2023. Cut and learn for unsupervised object detection and instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3124–3134.
Wang, X.; Yu, Z.; De Mello, S.; Kautz, J.; Anandkumar, A.; Shen, C.; and Alvarez, J. M. 2022a. FreeSOLO: Learning to segment objects without annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14176–14186.
Wang, Y.; Shen, X.; Hu, S. X.; Yuan, Y.; Crowley, J. L.; and Vaufreydaz, D. 2022b. Self-supervised transformers for unsupervised object discovery using normalized cut. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14543–14553.
Wei, X.-S.; Zhang, C.-L.; Wu, J.; Shen, C.; and Zhou, Z.-H. 2019. Unsupervised object discovery and co-localization by deep descriptor transformation. Pattern Recognition, 88: 113–126.
Wen, X.; Zhao, B.; Zheng, A.; Zhang, X.; and Qi, X. 2022. Self-Supervised Visual Representation Learning with Semantic Grouping. In Advances in Neural Information Processing Systems.
Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021a. SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34: 12077–12090.
Xie, J.; Hou, X.; Ye, K.; and Shen, L. 2022a. CLIMS: Cross Language Image Matching for Weakly Supervised Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4483–4492.
Xie, J.; Luo, C.; Zhu, X.; Jin, Z.; Lu, W.; and Shen, L. 2021b. Online Refinement of Low-Level Feature Based Activation Map for Weakly Supervised Object Localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 132–141.
Xie, J.; Xiang, J.; Chen, J.; Hou, X.; Zhao, X.; and Shen, L. 2022b. C2AM: Contrastive learning of class-agnostic activation map for weakly supervised object localization and semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 989–998.
Xu, J.; De Mello, S.; Liu, S.; Byeon, W.; Breuel, T.; Kautz, J.; and Wang, X. 2022. GroupViT: Semantic segmentation emerges from text supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18134–18144.
Yang, C.; Zhang, L.; Lu, H.; Ruan, X.; and Yang, M.-H. 2013. Saliency detection via graph-based manifold ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3166–3173.
Zhang, H.; Li, F.; Liu, S.; Zhang, L.; Su, H.; Zhu, J.; Ni, L. M.; and Shum, H.-Y. 2022. DINO: DETR with improved denoising anchor boxes for end-to-end object detection. arXiv preprint arXiv:2203.03605.
Zhang, X.; and Chen, Y.-C. 2023. Adaptive domain generalization via online disagreement minimization. IEEE Transactions on Image Processing.
Zhang, Z.; Lu, X.; Cao, G.; Yang, Y.; Jiao, L.; and Liu, F. 2021. ViT-YOLO: Transformer-based YOLO for object detection.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2799–2808.
Zhou, J.; Wei, C.; Wang, H.; Shen, W.; Xie, C.; Yuille, A.; and Kong, T. 2022. iBOT: Image BERT Pre-Training with Online Tokenizer. International Conference on Learning Representations (ICLR).
Zhu, W.; Liang, S.; Wei, Y.; and Sun, J. 2014. Saliency optimization from robust background detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2814–2821.
Zitnick, C. L.; and Dollár, P. 2014. Edge boxes: Locating object proposals from edges. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 391–405. Springer.
Scribble Hides Class: Promoting Scribble-Based Weakly-Supervised Semantic Segmentation with Its Class Label

Xinliang Zhang1,3*, Lei Zhu1-4*, Hangzhou He1-3, Lujia Jin1-4, Yanye Lu1,3,4†
1 Institute of Medical University, Peking University, Beijing, China
2 Department of Biomedical Engineering, Peking University, Beijing, China
3 National Biomedical Imaging Center, Peking University, Beijing, China
4 Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Beijing, China
zhangxinliang@tju.edu.cn, {zhulei, zhuang}@stu.pku.edu.cn, {jinlujia, yanye.lu}@pku.edu.cn

Abstract
Scribble-based weakly-supervised semantic segmentation, which uses sparse scribble supervision, is gaining traction as it reduces annotation costs compared to fully annotated alternatives. Existing methods primarily generate pseudo-labels for supervision by diffusing labeled pixels to unlabeled ones with local cues. However, this diffusion process fails to exploit global semantics and class-specific cues, which are important for semantic segmentation. In this study, we propose a class-driven scribble promotion network, which utilizes both scribble annotations and pseudo-labels informed by image-level classes and global semantics for supervision. Directly adopting pseudo-labels might misguide the segmentation model, so we design a localization rectification module to correct foreground representations in the feature space. To further combine the advantages of both supervisions, we also introduce a distance entropy loss for uncertainty reduction, which adapts per-pixel confidence weights according to the reliable region determined by the scribble and the pseudo-label's boundary. Experiments on the ScribbleSup dataset with different qualities of scribble annotations show that our method outperforms all previous methods, demonstrating its superiority and robustness. The code is available at https://github.com/Zxl19990529/Class-driven-Scribble-Promotion-Network.
Introduction
Primarily driven by the availability of extensive pixel-level annotated datasets, the field of semantic segmentation has made remarkable strides in the last decade. However, the laborious and time-consuming process of collecting and manually annotating such datasets hinders real-world applications of semantic segmentation. Weakly-supervised semantic segmentation (WSSS) methods utilizing sparse labels have emerged as a prominent trend to overcome this limitation. These methods use annotations at the image, scribble, or bounding-box level as supervision to train the semantic segmentation model. Among them, image-level annotations offer limited spatial supervision, while bounding boxes may lead to overlapping issues when objects are nearby. In comparison, the use of scribble annotations strikes an optimal balance between supervision effectiveness and labor cost (Lin et al. 2016). Consequently, scribble-based WSSS has garnered increasing attention in recent years (Liang et al. 2022; Wu et al. 2023).

*These authors contributed equally.
†Corresponding author, yanye.lu@pku.edu.cn
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Schematic diagrams of different scribble-based WSSS methods: (a) regularization loss, (b) consistency learning, (c) label diffusion, (d) ours. Existing approaches (a-c) overlooked the class label in scribbles, which provides image-level supervision. "P" represents the model prediction. The red dashed line represents the supervision relationship.

The intrinsic challenge in scribble-based WSSS lies in the partial supervision provided by sparse labels. Existing approaches have attempted to address this issue from three perspectives, namely, regularization loss (Tang et al.
2018a,b), consistency learning (Pan et al. 2021; Wang et al. 2022), and label diffusion (Lin et al. 2016; Wu et al. 2023), as illustrated in Figure 1(a-c). Specifically, regularization loss-based methods design specific loss functions to improve the stability of the models. Consistency learning-based approaches aim to capture invariant features to boost fine-grained segmentation performance through a consistency loss. However, both kinds of methods fail to address the deficiency of pixel-level supervision, leading to limited performance. In contrast, label diffusion-based methods generate pixel-level pseudo-labels by diffusing labeled pixels to unlabeled ones, i.e., constructing a graph model on the scribble to generate pseudo-labels for training. However, the diffusion process predominantly relies on local pixel information and fails to exploit the global semantics and class-specific cues of images, which are important for semantic segmentation. In addition, such pseudo-label generation approaches depend heavily on the quantity and quality of the scribbles, and model performance is undermined when the scribbles are shrunk or dropped, as shown in Figure 7. In fact, sparse scribbles inherently possess class information, which can offer valuable global semantic clues while enriching scribble-based WSSS supervision. However, this advantage has not been extensively explored in existing scribble-based WSSS research.

In light of this, the present paper is dedicated to promoting the performance of scribble-based WSSS with a globally considered pseudo-label. The image-level class labels can be easily obtained from the scribbles, making it feasible to acquire the globally considered pseudo-label via image-level WSSS methods.
Previous image-level WSSS methods have demonstrated that image-level class labels prompt models to focus on discriminative areas within an image, which can compensate for the limitations of the local cues provided by scribbles. Drawing inspiration from this, we propose a class-driven scribble promotion (CDSP) network for scribble-based semantic segmentation, which utilizes image-level class labels to generate pseudo-labels. An overview of our method is depicted in Figure 1(d). We begin by extracting image-level class labels from the scribbles and employing them to train a classification model, subsequently generating the globally considered pseudo-label. We then train a semantic segmentation model with both the scribble and the pseudo-label for supervision. By doing so, the inclusion of the image-level class label facilitates the acquisition of global semantic information for pseudo-label generation and further benefits scribble-based WSSS training. Nevertheless, the noisy supervision in pseudo-labels may mislead the model, so we specifically devise a localization rectification module (LoRM) to address this issue, which corrects foreground representations in the latent feature space by referencing other pixels. To further leverage the advantages of both supervisions, we also introduce a distance entropy loss (DEL) for model uncertainty reduction, where the model prediction is assigned per-pixel confidence based on the reliable region determined by the scribble and the boundary of the pseudo-label. With these integrated components, our method achieves state-of-the-art (SOTA) performance in scribble-based WSSS.

Our contributions can be concluded as:
• We present a class-driven scribble promotion network for scribble-based WSSS that utilizes image class information to generate a globally considered pseudo-label. Notably, this is the first approach to exploit image-level class information in the scribble-based WSSS problem.
• A localization rectification module is proposed to correct the foreground representations in the latent feature space that are misled by the noisy pseudo-labels, and a distance entropy loss is proposed to excavate the reliable areas based on proximity to scribbles and pseudo-labels.
• The proposed method outperforms existing state-of-the-art methods. Extensive experiments on different qualities of scribbles demonstrate the extraordinary robustness of our method.

Related Works
Image-level WSSS
The remarkable achievements of early deep learning-based methods in image classification (Simonyan and Zisserman 2014) have spurred numerous works on feature visualization. Zhou et al. (2016) first introduced the class activation map (CAM) technique, which employs global average pooling on deep features to visualize discriminative localization. This technology subsequently catalyzed various efforts to generate semantic pseudo-labels from CAM, facilitating the training of segmentation networks (Kolesnikov and Lampert 2016; Zhang et al. 2021b; Zhu et al. 2023b, 2022). Recently, SEAM (Wang et al. 2020) presented a pixel correlation module that refines current pixel predictions using information from the pixel's similar neighbors. From another perspective, AFA (Ru et al. 2022) addressed this problem with transformers, leveraging multi-head self-attention for effective long-range modeling. Additionally, (Ru et al. 2023) developed patch token contrast and class token contrast modules to capture high-level semantics. The intrinsic capability of image-level supervised semantic segmentation to capture global information makes it a promising approach to promote scribble-supervised semantic segmentation.

Scribble-based WSSS
Early methods can be traced back to traditional interactive segmentation (Rother, Kolmogorov, and Blake 2004; Grady 2006), which employs graphical models to expand the scribble area and extract foreground regions.
These methods typically require multiple continuous interactions to extract foreground masks and generate semantic segmentation results. Recent scribble-based WSSS approaches can be categorized into three main groups: regularization loss-based methods (Tang et al. 2018a,b), consistency learning-based methods (Pan et al. 2021; Wang et al. 2022), and label diffusion-based methods (Lin et al. 2016; Vernaza and Chandraker 2017; Xu et al. 2021), as depicted in Figure 1. Regularization loss-based methods aim to enhance network robustness by preventing the network from being overconfident. Consistency learning-based methods leverage self-supervised learning strategies to acquire invariant features. While both kinds of methods contribute to enhanced network robustness, they still struggle to address the issue of lacking supervision. Specifically, BPG (Wang et al. 2019) utilizes extra boundary data with edge information to improve segmentation performance. Label diffusion-based approaches utilize scribbles to generate pseudo-labels using unsupervised models, such as graph models, and subsequently employ these pseudo-labels to train semantic segmentation models. However, such a diffusion process fails to effectively exploit the global semantic information lurking in the image. More recent works (Liang et al. 2022; Wu et al. 2023) adaptively generate pseudo-labels using a tree filter and a learnable probabilistic model with a Gaussian prior, respectively. Despite their advancements, both of these methods still lack image-level supervision, thereby limiting their ability to model global semantic information effectively.

Figure 2: The overview of our method (CDSP).
In the first stage, we train a classification model with the image-level class labels extracted from the scribbles to generate the globally considered pseudo-label. Then we train a semantic segmentation model with the globally considered pseudo-label and the scribble label jointly in the second stage. We propose a localization rectification module (LoRM) and a distance entropy loss to assist the training process.

Other WSSS Methods
Points (Bearman et al. 2016; Chen et al. 2021; Wu et al. 2022, 2023; Liang et al. 2022) and bounding boxes (Dai, He, and Sun 2015; Papandreou et al. 2015; Khoreva et al. 2017; Zhang et al. 2021a) are also common annotations in weakly-supervised semantic segmentation. However, neither achieves a balance between training supervision and labor cost. Point-level annotation requires less labor, but it provides very limited supervision, so training a high-accuracy semantic segmentation model is difficult. Bounding boxes overlap with each other when encountering multiple objects and provide redundant supervision, which may confuse the model. In comparison, scribbles achieve the best balance between labor cost and supervision accuracy.

Method
In this part, we first revisit the general problem formulation of label diffusion-based methods and their limitations. Then we introduce CDSP with the pseudo-label generation, basic supervision, LoRM, and DEL sequentially in detail.

General Problem Formulation
Denoting $\Omega = \{y_i \mid i = 1, \dots, n\}$ as the ground-truth label set and $\Omega_s$ as the sparse scribble label, where $\Omega_s \subset \Omega$ and $|\Omega_s| \ll |\Omega|$, the objective function of scribble-based WSSS can be formulated as:

$$\min \; c(P_{\Omega_s}, \Omega_s), \tag{1}$$

where $c(\cdot, \cdot)$ denotes the criterion function, which is usually cross-entropy, and $P_{\Omega_s}$ denotes the model predictions corresponding to the sparse scribble label. Such sparse supervision limits the model's performance and decreases the certainty of the model.
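The scribble criterion of Eq. 1 is a partial cross-entropy: the loss is averaged only over scribble-annotated pixels, with all unlabeled pixels ignored. A minimal numpy sketch (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def partial_cross_entropy(probs, scribble):
    """Partial cross-entropy over scribble-labeled pixels.

    probs:    (K, H, W) softmax class probabilities.
    scribble: (H, W) integer class ids, with -1 marking unlabeled pixels.
    Returns the mean of -log p_y over labeled pixels only.
    """
    mask = scribble >= 0                      # labeled-pixel mask
    if not mask.any():
        return 0.0
    labels = scribble[mask]                   # (N,) class ids of labeled pixels
    picked = probs[:, mask][labels, np.arange(labels.size)]
    return float(-np.log(picked + 1e-12).mean())
```

With a fully unlabeled scribble the loss is defined as 0, so the pseudo-label term carries all the gradient in that case.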
Most existing label diffusion-based methods devise a graphical diffusion model or a learnable probabilistic model with low-level cues $\phi$ to generate the pseudo-label $\tilde{\Omega} = \{\tilde{y}_i \mid i = 1, \dots, n\}$ by diffusing the labeled pixels to unlabeled ones:

$$\tilde{\Omega} = \phi(\Omega_s). \tag{2}$$

Combined with Eq. 1, a complete objective function for scribble-based WSSS can be obtained:

$$\min \left( c(P_{\Omega_s}, \Omega_s) + c(P_{\tilde{\Omega}}, \tilde{\Omega}) \right). \tag{3}$$

As shown in Eq. 2, because only scribble-annotated pixels are considered, it is hard for the diffusion methods to capture global information from the scribbles, so the diffused labels $\tilde{y}$ provide only locally considered supervision. Moreover, the diffused pseudo-label depends heavily on the scribble, and its quality may be undermined by a shrunk or dropped version of the scribble.

Class-driven Scribble Promotion
To solve the problems mentioned above, we naturally think of utilizing the class label derived from the sparse scribble to provide global cues for image-supervised segmentation when generating the pseudo-label. Denoting $\tilde{\phi}$ as the classification model with a fully connected layer, the pseudo-label $\tilde{\Omega}$ can be obtained from the image $I \in \mathbb{R}^{3 \times H \times W}$ with multi-class label $k \in \mathbb{R}^{1 \times K}$:

$$\tilde{\Omega} = \tilde{\phi}(I, k), \tag{4}$$

where all the pixels are taken into account to generate the pseudo-label. After that, we further introduce the LoRM and DEL to exploit the advantages of both supervisions, as shown in Figure 2. In general, the overall loss function for supervision can be formulated as:

$$L = L_{seg} + L_{lorm} + L_{de}, \tag{5}$$

where $L_{seg}$ represents the basic supervision from the scribble and pseudo-label, $L_{lorm}$ represents the supervision from the LoRM, and $L_{de}$ is the supervision from the DEL. The details of each component will be introduced sequentially in the following parts.

Pseudo-label Generation and Basic Supervision
To obtain the pseudo-label with Eq.
4, we first train a multi-label classification model $C(\cdot)$ followed by a $K$-class classifier (e.g., a ResNet (He et al. 2016) with an FC layer) on the image-level classes extracted from the scribbles. After the model converges, the image $I$ is fed into the model to generate the class activation map of the $k$-th class:

$$\mathrm{CAM}_k(I) = \mathrm{ReLU}\left( \sum_{i=1}^{C} W_{i,k} F_i \right), \tag{6}$$

where $F = C(I)$, $F \in \mathbb{R}^{C \times HW}$ is the feature map of the last layer and $W$ is the weight matrix of the classifier. We follow existing image-supervised semantic segmentation methods to threshold the CAM into binary masks and integrate them into a single-channel multi-class mask (Wang et al. 2020; Chen et al. 2022) to generate the pseudo-label $\tilde{\Omega}$. It is also possible to adopt one-stage image-supervised WSSS methods (Ru et al. 2022; Zhu et al. 2023a) as $\tilde{\phi}$ to generate the pseudo-label. With both the pseudo-label and the scribble, the basic supervision can be summarized as:

$$L_{seg} = L_{seg_s} + L_{seg_c}. \tag{7}$$

In detail, $L_{seg_s}$ denotes the sparse supervision from the scribble label in the form of a partial cross-entropy:

$$L_{seg_s} = \frac{1}{|\Omega_s|} \sum_{y_i \in \Omega_s} c(y_i, p_i), \tag{8}$$

where $c(y_i, p_i) = -\sum_{k=1}^{K} y_{i,k} \log(p_{i,k})$, $K$ is the number of classes, $p_i$ is the prediction from the model, and $y_i$ is the one-hot label. $L_{seg_c}$ denotes the supervision from the pseudo-label, which can be formulated as a smoothed cross-entropy:

$$L_{seg_c} = \frac{1}{|\tilde{\Omega}|} \sum_{y_i \in \tilde{\Omega}} \left[ (1 - \epsilon)\, c(y_i, p_i) + \epsilon\, c\!\left(\tfrac{1}{K}, p_i\right) \right], \tag{9}$$

where $\epsilon = 0.1$ is a regularization term of label smoothing (Müller, Kornblith, and Hinton 2019) to prevent the model from being overconfident.

Localization Rectification Module
Adopting the pseudo-label directly for supervision can lead to absurd predictions (Wang et al. 2018), particularly when foreground objects are nearby, as illustrated in Figure 3(c). Rather than correcting the pseudo-label itself, we are motivated to refine the feature representations of the model so that the model can adopt pseudo-labels of different qualities. To achieve this goal, we propose a novel module, namely the LoRM.
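Returning to the pseudo-label generation, the CAM extraction of Eq. 6 can be sketched as follows. This is a simplified sketch: the normalize-then-threshold step mirrors common CAM practice, and the threshold value is a hyperparameter, not a value stated in the paper.

```python
import numpy as np

def class_activation_map(feats, weights, k, thresh=0.5):
    """CAM for class k (Eq. 6), plus a thresholded binary mask.

    feats:   (C, H, W) features from the last conv layer, F = C(I).
    weights: (C, K) classifier weight matrix W.
    """
    # Weighted sum over channels followed by ReLU, as in Eq. 6.
    cam = np.maximum((weights[:, k][:, None, None] * feats).sum(axis=0), 0.0)
    cam = cam / (cam.max() + 1e-12)   # normalize to [0, 1] before thresholding
    return cam, cam > thresh
```

In the paper's pipeline the per-class binary masks are then merged into a single-channel multi-class pseudo-label following (Wang et al. 2020; Chen et al. 2022).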
The primary concept behind the LoRM is to leverage the inherent similarity of representations among foreground pixels belonging to the same semantic class. By doing so, mispredicted pixels can be refined through a weighted combination of representations from other pixels. Let $F \in \mathbb{R}^{C \times H \times W}$ denote the feature map generated by the last layer of the segmentation backbone $S(\cdot)$, and $M \in \mathbb{R}^{H \times W}$ denote the pseudo mask, as depicted in Figure 2. The LoRM takes $F$ and $M$ as inputs and operates accordingly to rectify the representations.

Figure 3: Visualization results employing a ResNet-50 backbone and DeepLabV2 segmentor. (a) is the original image with the scribble label, (b) is the pseudo-label for training, (c) is the prediction trained with $L_{seg}$, (d) is the prediction trained with $L_{seg} + L_{lorm}$, and (e) is the ground-truth label.

Figure 4: The illustration of the LoRM.

As detailed in Figure 4, the feature map $F$ is first linearly projected into $F_Q \in \mathbb{R}^{C \times H \times W}$ and $F_K \in \mathbb{R}^{C \times H \times W}$ with a single convolution, then flattened along the row axis into $Q \in \mathbb{R}^{C \times HW}$ and $K \in \mathbb{R}^{C \times HW}$. Taking $K$ as the key set to be refined and $Q$ as the query set for similarity matching, we calculate the weighted similarity matrix $A$ by:

$$A = \mathrm{softmax}\left( \frac{Q^T K}{\|Q^T\|_2^C \, \|K\|_2^C} \right), \tag{10}$$

where $A \in \mathbb{R}^{HW \times HW}$, the softmax is implemented along the row axis, and the L2-norm $\|\cdot\|_2^C$ of $Q^T$ and $K$ is computed along the channel dimension. Each row $A_i$ of the matrix $A$ describes the similarity between the $i$-th feature vector in $K$ and all the $HW$ feature vectors in $Q$. With the help of Eq. 10, the $i$-th feature vector can be refined by referencing the feature vectors at other locations. It is worth noting that the background vectors vary largely and contribute little to the foreground rectification.
Therefore, we extract the foreground mask $M \in \mathbb{R}^{H \times W}$ from the pseudo-label, flatten it along the row axis, and element-wise multiply it with A via broadcasting:

$A' = \mathrm{flatten}(M) * A$,   (11)

so that the background features in each row $A_i$ are largely suppressed in its masked counterpart $A'_i$. The original feature map F is then flattened along the row axis and matrix-multiplied with the masked similarity matrix $A'$:

$\hat{F} = \delta * \mathrm{flatten}(F)\, A'$,   (12)

where δ is a learnable parameter initialized to 1 that controls the rectification degree, and $\hat{F} \in \mathbb{R}^{C \times HW}$ is the refined feature, which is finally reshaped back to $\mathbb{R}^{C \times H \times W}$. The mean square error (MSE) loss is applied between the original feature F and the refined feature $\hat{F}$:

$\mathcal{L}_{lorm} = \mathrm{MSE}(F, \hat{F})$.   (13)

The whole process is realized by efficient matrix operations. With the supervision of Eq. 13, the LoRM rectifies the misled foreground representations by referencing the representations at other foreground locations.

Distance Entropy Loss

The LoRM effectively addresses the misalignment of foreground features, but the model remains susceptible to being misled by noisy labels near object boundaries during later training steps. This could undermine the efforts of LoRM and reduce the model's certainty. To overcome this challenge, it becomes crucial to identify reliable predictions. We argue that discriminative areas, such as the surroundings of the scribble, are more reliable and should be assigned higher confidence. Conversely, indiscriminative areas, such as the boundary of the pseudo-label generated by global class supervision, are less reliable and should be assigned lower confidence.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7335
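Putting Eqs. 10-13 together, the LoRM reduces to a handful of matrix operations. In the sketch below, plain projection matrices `Wq` and `Wk` stand in for the two 1x1 convolutions, and `delta` is a scalar; both are simplifying assumptions for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lorm(F, M, Wq, Wk, delta=1.0):
    """Sketch of LoRM (Eqs. 10-13).

    F      : (C, H, W) feature map from the segmentation backbone.
    M      : (H, W) binary foreground mask taken from the pseudo-label.
    Wq, Wk : (C, C) projection matrices standing in for the 1x1 convs.
    delta  : the rectification degree (learnable in the paper).
    """
    C, H, W = F.shape
    flatF = F.reshape(C, H * W)
    Q, K = Wq @ flatF, Wk @ flatF                       # linear projections
    Qn = Q / np.linalg.norm(Q, axis=0, keepdims=True)   # L2-normalise channels
    Kn = K / np.linalg.norm(K, axis=0, keepdims=True)
    A = softmax(Qn.T @ Kn, axis=1)                      # Eq. 10: (HW, HW)
    A_masked = A * M.reshape(1, H * W)                  # Eq. 11: suppress background
    F_hat = delta * (flatF @ A_masked)                  # Eq. 12: refined features
    loss = np.mean((flatF - F_hat) ** 2)                # Eq. 13: L_lorm = MSE(F, F_hat)
    return F_hat.reshape(C, H, W), loss
```

Minimizing the returned loss pulls each foreground feature toward a mask-weighted mixture of the other foreground features, which is the rectification effect the section describes.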
Based on this concept, we introduce a distance-map strategy that assigns predictions different confidence levels according to their distance from the scribble and from the pseudo-label boundary, respectively; we call the resulting objective the distance entropy loss. By doing so, we can better leverage the advantages of both forms of supervision during model training. For the pseudo-label, the pixels around its boundary are indiscriminative, and such an area is likely to provide uncertain supervision. Denoting the coordinates of the i-th point in the image as (m, n), and the coordinates of the j-th point on the foreground pseudo-label boundary as (m', n'), the distance map of the pseudo-label is designed as:

$d_c(i) = \min_{\forall j}\left(\frac{\left\lfloor\sqrt{\lambda_c\left[(m-m')^2+(n-n')^2\right]}\right\rfloor_{255}}{255}\right)$,   (14)

where $d_c$ is a probability ranging in [0, 1] that describes the minimum Euclidean distance between a point and the set of pseudo-label boundary points, with the distance value truncated to 255 for normalization and efficient data storage. $\lambda_c$ is a coefficient controlling the scope of the pseudo-label distance map, as shown in Figure 5(b-d). Denoting $N_c$ as the number of non-zero elements in $d_c$, the distance entropy of the pseudo-label is formulated as:

$\mathcal{L}_{d_c} = \frac{1}{N_c}\sum_{i=1}^{N_c} d_c(i)\, p_i \log(p_i)$.   (15)

Compared with the pseudo-label, the scribble is certain and correct: the pixels lying around the scribble largely belong to the same semantic class as the scribble. Moreover, the scribble lying in the inner area of the foreground provides correct supervision, which can suppress the noisy supervision in the pseudo-label. However, this confidence should decrease

Figure 5: Visualization of distance maps with different coefficients, for the pseudo-label boundary (b-d) and the scribble (f-h): (a) pseudo-label, (b) λc = 1, (c) λc = e^3, (d) λc = e^7, (e) image, (f) λs = 1, (g) λs = e^3, (h) λs = e^7.
with the increment of the distance. Therefore, denoting the coordinates of the i-th point in the image as (m, n), and the coordinates of the j-th foreground scribble point as (m', n'), the distance map of the scribble is designed as:

$d_s(i) = 1 - \min_{\forall j}\left(\frac{\left\lfloor\sqrt{\lambda_s\left[(m-m')^2+(n-n')^2\right]}\right\rfloor_{255}}{255}\right)$,   (16)

where $d_s$ is a probability ranging in [0, 1] that describes the minimum Euclidean distance between a point and the set of scribble points. $\lambda_s$ is a coefficient controlling the scope of the scribble distance map, as shown in Figure 5(f-h). Denoting $N_s$ as the number of non-zero elements in $d_s$, the distance entropy of the scribble is formulated as:

$\mathcal{L}_{d_s} = \frac{1}{N_s}\sum_{i=1}^{N_s} d_s(i)\, p_i \log(p_i)$.   (17)

Finally, the overall distance entropy can be formulated as:

$\mathcal{L}_{de} = \mathcal{L}_{d_s} + \mathcal{L}_{d_c}$.   (18)

Figure 5 presents visualizations of the distance maps for the scribble and the pseudo-label boundary under different coefficients $\lambda_s$ and $\lambda_c$. As $\lambda_s$ increases, the reliable area determined by the scribble becomes more prominent. Conversely, a higher $\lambda_c$ gives the pseudo-label more weight in determining the reliable area. Through the distance entropy loss, we effectively excavate the reliable areas and reinforce the prediction certainty of the model by leveraging information from both the scribble and the pseudo-label boundary.

Experiments

Dataset

Our experiments were carried out on the widely used ScribbleSup dataset (Lin et al. 2016), which combines the PASCAL VOC2012 and SBD (Hariharan et al. 2011) datasets with scribble annotations. The dataset includes 10,582 training images and 1,449 validation images. To ensure fairness, we used the same scribble-generation code as previous works (Lin et al. 2016; Tang et al. 2018b; Pan et al. 2021), maintaining uniform scribble thickness. Additionally, we validated our method on the scribble-shrink and scribble-drop settings introduced by URSS (Pan et al. 2021) to assess its robustness in diverse scenarios.
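Stepping back to Eqs. 14-17, the distance-map construction and its entropy weighting can be sketched as follows. The brute-force nearest-point search and the sign convention (keeping the paper's $p_i \log p_i$ form) are illustrative simplifications:

```python
import numpy as np

def distance_map(boundary_points, shape, lam):
    """Sketch of Eq. 14: per-pixel scaled distance to the nearest
    pseudo-label boundary point, truncated to 255 and normalised.

    boundary_points : list of (row, col) coordinates (m', n').
    shape           : (H, W) of the image.
    lam             : the scope coefficient lambda_c.
    """
    H, W = shape
    rows, cols = np.mgrid[0:H, 0:W]
    sq = np.full((H, W), np.inf)
    for (m2, n2) in boundary_points:                 # brute-force nearest point
        sq = np.minimum(sq, lam * ((rows - m2) ** 2 + (cols - n2) ** 2))
    return np.minimum(np.floor(np.sqrt(sq)), 255) / 255.0

def distance_entropy(d, p, eps=1e-8):
    """Sketch of Eq. 15: prediction entropy weighted by the distance map d,
    averaged over its non-zero entries (N_c in the paper).
    p : (H, W, K) softmax predictions."""
    ent = (p * np.log(p + eps)).sum(axis=-1)
    mask = d > 0
    return float((d[mask] * ent[mask]).mean()) if mask.any() else 0.0
```

The scribble map of Eq. 16 is obtained from the same routine as `1 - distance_map(scribble_points, shape, lam_s)`, and Eq. 18 simply sums the two entropy terms.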
Figure 6: Visualization results comparison. (a) is the image with its scribble annotations; the baseline (b) is deeplabV3+ trained with only scribble annotations; (c) to (e) are recent methods (URSS, TEL, and AGMM); (f) is our result; and (g) is the ground-truth label.

Implementation Details

With the pseudo-labels generated by BMP (Zhu et al. 2023a), we employed the representative segmentation frameworks deeplabV2 (Chen et al. 2017) and deeplabV3+ (Chen et al. 2018) for method validation and for generating competitive results, respectively. We trained for a total of 50 epochs with a base learning rate of 1e-3 and a batch size of 16. To ensure stable training, we adopted a learning-rate warmup strategy, linearly increasing the learning rate to 1e-3 over the first 10 epochs, followed by a cosine decay to zero over the next 40 epochs. Validation results were reported using the last checkpoint. The stochastic gradient descent (SGD) optimizer was used with a momentum of 0.9 and a weight decay of 5e-4. Data augmentation followed the same strategy as URSS. All experiments report the mIoU metric (%) and were conducted on one NVIDIA RTX 4090 24G GPU with an Intel Xeon Gold 6330 CPU.

Comparison on ScribbleSup

We deploy resnet101 (He et al. 2016) as the backbone and deeplabV3+ as the segmentor with hyper-parameters (λs = e^2, λc = e^7) to generate our best result. The comparison details are recorded in Table 1. It is worth noting that the previous works ScribbleSup, RAWKS (Vernaza and Chandraker 2017), and NCL (Tang et al. 2018a) adopted CRF for post-processing, which is fairly time-consuming. The recent works TEL (Liang et al. 2022) and AGMM (Wu et al. 2023) were designed for general sparsely supervised segmentation, covering point-level, scribble-level, and box-level annotations.
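As an aside on the implementation details above, the warmup-plus-cosine learning-rate schedule can be sketched per epoch. The per-epoch step granularity is an assumption; the original may adjust the rate per iteration:

```python
import math

def lr_at_epoch(epoch, base_lr=1e-3, warmup_epochs=10, total_epochs=50):
    """Linear warmup to base_lr over the first `warmup_epochs` epochs,
    then cosine decay toward zero over the remaining epochs."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs          # linear warmup
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))      # cosine decay
```

With the paper's settings, the rate peaks at 1e-3 at epoch 10 and is nearly zero by the final epoch.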
To ensure fairness, we reimplemented them using the standard scribbles commonly used in previous works such as ScribbleSup, NCL, and URSS. As shown in Table 1, our method outperforms all previous methods, exceeding TEL by 0.6% and AGMM by 1.6%. The test results reported in the last column of Table 1 were acquired from the PASCAL VOC2012 evaluation website (Everingham and Winn 2012). A visual comparison of our method (using deeplabV3+) with previous SOTA methods is shown in Figure 6, where the recent methods fail to capture correct global semantics.

Shrink and Drop

Since scribble-based annotations are flexible, users commonly annotate scribbles of different lengths and sometimes drop some of the objects. Evaluating the model's robustness under different shrink or drop ratios is therefore essential. Some shrunk or dropped samples are presented in Figure 7. Notably, as depicted in the figure, an increase in the drop or shrink ratio leads to a decrease in the model's performance. Specifically, when the scribbles are shrunk to points (shrink ratio = 1), AGMM and TEL suffer an approximately 10% performance degradation. In contrast, our method exhibits only a marginal drop within 1%, showcasing its robustness.

Ablation on Components

We employ the resnet50 backbone with deeplabV2 as the segmentor and use the ScribbleSup (Lin et al. 2016) dataset for training and validation. The optimal hyper-parameter combination of the distance entropy loss with all components is found by grid search (λs = 1, λc = 6); we then validate the effectiveness of each module by eliminating them one by one. The results are recorded in Table 2. The first three rows show that employing either the scribble or the pseudo-label alone as the basic supervision yields an unsatisfactory result (only around 67%), while using both produces a much better result (72.13%).
This demonstrates that the scribble and the pseudo-label provide complementary supervision and compensate for each other. Additionally, adding only $\mathcal{L}_{d_c}$ on top of the basic supervision degrades the model to almost the same performance as merely using $\mathcal{L}_{seg_c}$. This issue is attributed to the model overfitting the noisy labels in the pseudo-labels, and it can be addressed by our LoRM, which improves the performance from 67.33% to 73.64%. Compared with the baseline, every component obtains better performance, and using them all achieves the best result.

Figure 7: Experiments on the scribble-drop (ratios 0.1-0.5) and scribble-shrink (ratios 0-1.0) datasets with different drop or shrink ratios.

Table 1: Comparison with the state-of-the-art methods.

Method                         Sup  Segmentor  val   test
AFA (Zhang et al. 2021a)       I    SegFormer  66.0  -
AMN (Lee et al. 2022)          I    r101+v2    70.7  -
BECO (Rong et al. 2023)        I    MiT+v3p    73.7  -
ToCo (Ru et al. 2023)          I    ViT+v2     72.3  -
BoxSup (Dai et al. 2015)       B    vgg16+v1   62.0  -
WSSL (Papandreou et al. 2015)  B    vgg16+v1   67.6  -
SDI (Khoreva et al. 2017)      B    vgg16+v1   65.7  -
BBAM (Lee et al. 2021)         B    r101+v2    63.7  -
ScribbleSup (Lin et al. 2016)  S    vgg16+v1   63.1  -
RAWKS (Vernaza et al. 2017)    S    r101+v1    61.4  -
NCL (Tang et al. 2018a)        S    r101+v1    72.8  -
KCL (Tang et al. 2018b)        S    r101+v2    72.9  -
BPG (Wang et al. 2019)         S    r101+v2    73.2  -
PSI (Xu et al. 2021)           S    r101+v3p   74.9  -
URSS (Pan et al. 2021)         S    r101+v2    74.6  73.3
CCL (Wang et al. 2022)         S    r101+v2    74.4  -
TEL (Liang et al. 2022)        S    r101+v3p   75.2  75.6
AGMM (Wu et al. 2023)          S    r101+v3p   74.2  75.7
Ours                           S    r50+v2     73.9  74.2
Ours                           S    r101+v2    75.3  75.3
Ours                           S    r101+v3p   75.9  76.0
baseline (scribble only)       S    r101+v3p   66.2  69.7

Ablation on Pseudo-labels

We also conducted experiments with different pseudo-labels to assess their influence, utilizing deeplabV3+ as the segmentor.
The results in Table 3 indicate that, as the base accuracy of the pseudo-label improves, our method exhibits increasing performance. This demonstrates that our approach directly benefits from image-level WSSS methods, making it a promising avenue for further development.

Table 2: The effectiveness of each component (mIoU, %).

Lsegs  Lsegc  Lds  Ldc  Llorm  mIoU
✓      -      -    -    -      66.17
-      ✓      -    -    -      67.23
✓      ✓      -    -    -      72.13
✓      ✓      -    ✓    -      67.33
✓      ✓      ✓    -    -      73.38
✓      ✓      ✓    ✓    -      73.58
✓      ✓      -    -    ✓      73.26
✓      ✓      ✓    -    ✓      73.51
✓      ✓      -    ✓    ✓      73.64
✓      ✓      ✓    ✓    ✓      73.91

Table 3: Performance adopting different pseudo-labels (mIoU, %).

Method                   Base acc  res50  res101
SEAM (Wang et al. 2020)  64.5      69.8   71.8
AFA (Ru et al. 2022)     66.0      71.5   73.3
BMP (Zhu et al. 2023a)   68.1      73.9   75.9

Conclusion

We propose a class-driven scribble promotion network for the scribble-based WSSS problem. To address the issue of the model overfitting noisy labels, we introduce a localization rectification module. Additionally, a distance entropy loss is incorporated to enhance the robustness of the network. Experimental results show that our method outperforms existing approaches, achieving state-of-the-art performance.

Acknowledgements

This work was supported in part by the Natural Science Foundation of China (82371112, 62394311, 62394310), in part by the Beijing Natural Science Foundation (Z210008), and in part by the Shenzhen Science and Technology Program, China (KQTD20180412181221912).

References

Bearman, A.; Russakovsky, O.; Ferrari, V.; and Fei-Fei, L. 2016. What's the point: Semantic segmentation with point supervision. In ECCV, Part VII, 549-565. Springer.
Chen, H.; Wang, J.; Chen, H. C.; Zhen, X.; Zheng, F.; Ji, R.; and Shao, L. 2021. Seminar learning for click-level weakly supervised semantic segmentation. In ICCV, 6920-6929.
Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2017. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE TPAMI, 40(4): 834-848.
Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; and Adam, H. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 801-818.
Chen, Z.; Wang, T.; Wu, X.; Hua, X.-S.; Zhang, H.; and Sun, Q. 2022. Class re-activation maps for weakly-supervised semantic segmentation. In CVPR, 969-978.
Dai, J.; He, K.; and Sun, J. 2015. BoxSup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In CVPR, 1635-1643.
Everingham, M.; and Winn, J. 2012. The PASCAL visual object classes challenge 2012 (VOC2012) development kit. Pattern Anal. Stat. Model. Comput. Learn., Tech. Rep., 2007(1-45): 5.
Grady, L. 2006. Random walks for image segmentation. IEEE TPAMI, 28(11): 1768-1783.
Hariharan, B.; Arbeláez, P.; Bourdev, L.; Maji, S.; and Malik, J. 2011. Semantic contours from inverse detectors. In ICCV, 991-998. IEEE.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR, 770-778.
Khoreva, A.; Benenson, R.; Hosang, J.; Hein, M.; and Schiele, B. 2017. Simple does it: Weakly supervised instance and semantic segmentation. In CVPR, 876-885.
Kolesnikov, A.; and Lampert, C. H. 2016. Seed, expand and constrain: Three principles for weakly-supervised image segmentation. In ECCV, Part IV, 695-711. Springer.
Lee, J.; Yi, J.; Shin, C.; and Yoon, S. 2021.
BBAM: Bounding box attribution map for weakly supervised semantic and instance segmentation. In CVPR, 2643-2652.
Lee, M.; Kim, D.; and Shim, H. 2022. Threshold matters in WSSS: Manipulating the activation for the robust and accurate segmentation model against thresholds. In CVPR, 4330-4339.
Liang, Z.; Wang, T.; Zhang, X.; Sun, J.; and Shen, J. 2022. Tree energy loss: Towards sparsely annotated semantic segmentation. In CVPR, 16907-16916.
Lin, D.; Dai, J.; Jia, J.; He, K.; and Sun, J. 2016. ScribbleSup: Scribble-supervised convolutional networks for semantic segmentation. In CVPR, 3159-3167.
Müller, R.; Kornblith, S.; and Hinton, G. E. 2019. When does label smoothing help? In NeurIPS, 32.
Pan, Z.; Jiang, P.; Wang, Y.; Tu, C.; and Cohn, A. G. 2021. Scribble-supervised semantic segmentation by uncertainty reduction on neural representation and self-supervision on neural eigenspace. In ICCV, 7416-7425.
Papandreou, G.; Chen, L.-C.; Murphy, K. P.; and Yuille, A. L. 2015. Weakly- and semi-supervised learning of a deep convolutional network for semantic image segmentation. In ICCV, 1742-1750.
Rong, S.; Tu, B.; Wang, Z.; and Li, J. 2023. Boundary-Enhanced Co-Training for Weakly Supervised Semantic Segmentation. In CVPR, 19574-19584.
Rother, C.; Kolmogorov, V.; and Blake, A. 2004. "GrabCut": Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (TOG), 23(3): 309-314.
Ru, L.; Zhan, Y.; Yu, B.; and Du, B. 2022. Learning affinity from attention: End-to-end weakly-supervised semantic segmentation with transformers. In CVPR, 16846-16855.
Ru, L.; Zheng, H.; Zhan, Y.; and Du, B. 2023. Token contrast for weakly-supervised semantic segmentation. In CVPR, 3093-3102.
Simonyan, K.; and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Tang, M.; Djelouah, A.; Perazzi, F.; Boykov, Y.; and Schroers, C. 2018a. Normalized cut loss for weakly-supervised CNN segmentation. In CVPR, 1818-1827.
Tang, M.; Perazzi, F.; Djelouah, A.; Ben Ayed, I.; Schroers, C.; and Boykov, Y. 2018b. On regularized losses for weakly-supervised CNN segmentation. In ECCV, 507-522.
Vernaza, P.; and Chandraker, M. 2017. Learning random-walk label propagation for weakly-supervised semantic segmentation. In CVPR, 7158-7166.
Wang, B.; Qi, G.; Tang, S.; Zhang, T.; Wei, Y.; Li, L.; and Zhang, Y. 2019. Boundary perception guidance: A scribble-supervised semantic segmentation approach. In IJCAI.
Wang, B.; Qiao, Y.; Lin, D.; Yang, S. D.; and Li, W. 2022. Cycle-consistent learning for weakly supervised semantic segmentation. In Proceedings of the 3rd International Workshop on Human-Centric Multimedia Analysis, 7-13.
Wang, X.; You, S.; Li, X.; and Ma, H. 2018. Weakly-supervised semantic segmentation by iteratively mining common object features. In CVPR, 1354-1362.
Wang, Y.; Zhang, J.; Kan, M.; Shan, S.; and Chen, X. 2020. Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation. In CVPR, 12275-12284.
Wu, L.; Fang, L.; Yue, J.; Zhang, B.; Ghamisi, P.; and He, M. 2022. Deep bilateral filtering network for point-supervised semantic segmentation in remote sensing images. IEEE Transactions on Image Processing, 31: 7419-7434.
Wu, L.; Zhong, Z.; Fang, L.; He, X.; Liu, Q.; Ma, J.; and Chen, H. 2023. Sparsely Annotated Semantic Segmentation with Adaptive Gaussian Mixtures. In CVPR, 15454-15464.
Xu, J.; Zhou, C.; Cui, Z.; Xu, C.; Huang, Y.; Shen, P.; Li, S.; and Yang, J.
2021. Scribble-supervised semantic segmentation inference. In ICCV, 15354-15363.
Zhang, B.; Xiao, J.; Jiao, J.; Wei, Y.; and Zhao, Y. 2021a. Affinity attention graph neural network for weakly supervised semantic segmentation. IEEE TPAMI, 44(11): 8082-8096.
Zhang, F.; Gu, C.; Zhang, C.; and Dai, Y. 2021b. Complementary patch for weakly supervised semantic segmentation. In ICCV, 7242-7251.
Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; and Torralba, A. 2016. Learning deep features for discriminative localization. In CVPR, 2921-2929.
Zhu, L.; He, H.; Zhang, X.; Chen, Q.; Zeng, S.; Ren, Q.; and Lu, Y. 2023a. Branches Mutual Promotion for End-to-End Weakly Supervised Semantic Segmentation. arXiv:2308.04949.
Zhu, L.; She, Q.; Chen, Q.; Meng, X.; Geng, M.; Jin, L.; Zhang, Y.; Ren, Q.; and Lu, Y. 2023b. Background-aware classification activation map for weakly supervised object localization. IEEE TPAMI.
Zhu, L.; She, Q.; Chen, Q.; You, Y.; Wang, B.; and Lu, Y. 2022. Weakly supervised object localization as domain adaption. In CVPR, 14637-14646.
Negative Pre-aware for Noisy Cross-Modal Matching

Xu Zhang1†, Hao Li1†, Mang Ye2*
1School of Computer Science and Engineering, University of Electronic Science and Technology of China
2School of Computer Science, Wuhan University
{xuzhang.xoe, 18th.leolee, mangye16}@gmail.com

Abstract

Cross-modal noise-robust learning is a challenging task, since noisy correspondence is hard to recognize and rectify. Due to the cumulative and unavoidable negative impact of unresolved noise, existing methods cannot maintain stable performance when the noise increases. In this paper, we present a novel Negative Pre-aware Cross-modal (NPC) matching solution for fine-tuning large visual-language models on noisy downstream tasks. It is featured in two aspects: (1) For noise recognition and resistance, whereas previous methods usually filter out a noise subset directly, we propose to estimate the negative impact of each sample. This does not require an additional correction mechanism, which may produce unreliable corrections and lead to self-reinforcing errors. We assign a confidence weight to each sample according to its negative impact during training, which adaptively adjusts the contribution of each sample to avoid noise accumulation. (2) To maintain stable performance with increasing noise, we exploit the memorization effect of DNNs by maintaining a memory bank. Specifically, we apply GMM to select high-confidence clean samples as memory entries, which are used to estimate the negative impact of each sample. Since clean samples are more easily distinguished by GMM as noise increases, the memory bank can maintain high quality even at a high noise ratio. Compared to correction mechanisms that focus on noise samples, memory-bank-based estimation is more robust, which keeps the model's performance stable on noisy datasets. Extensive experiments demonstrate that our method significantly improves both matching accuracy and performance stability at increasing noise ratios.
Our approach also surpasses the state-of-the-art methods by a large margin. The code is available at: https://github.com/ZhangXu0963/NPC.

Introduction

Cross-modal matching aims to align different modalities (e.g., text and image) within a common space and pair them based on a similarity score. With the explosion of multimedia data, cross-modal matching has gained traction in both industry and academia, e.g., in text-to-image generation (Zhou et al. 2022; Ding et al. 2021), image captioning (Li et al. 2019b; Stefanini et al. 2022; Wang et al. 2023), and visual question answering (Lin et al. 2022; Lei et al. 2023).

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
*Corresponding author. †These authors contributed equally to this work.

Figure 1: (a) Existing solution vs. NPC. (b) The variance of R@1 for noise-robust learning and CLIP-based methods. A lower variance indicates that the method is more robust to increasing noise.

These works have achieved promising performance by training on large-scale datasets. However, obtaining a well-annotated dataset is expensive in practical scenarios. Manually annotated datasets, e.g., MSCOCO (Lin et al. 2014), Flickr30K (Young et al. 2014), and Conceptual Captions (Sharma et al. 2018), incorporate a significant number of inaccurate descriptions, namely noisy correspondences. Unlike noisy labels in classification tasks, the noise here consists of mismatched cross-modal pairs, which are more difficult to deal with since they involve both visual and textual modeling. Therefore, a series of approaches (Huang et al. 2021; Yang et al. 2023; Han et al. 2023) following the noise-rectify paradigm have been developed to counter the negative impact of the noise.
These methods typically filter out a noise subset from the original training set and address the noise issue through label correction. Nevertheless, due to this inherent flaw, the noise-rectify paradigm cannot maintain stable performance in the presence of severe noise. As shown in Fig. 1(b), we compare the performance of different methods using the R@1 metric, including noise-rectify-based approaches (Huang et al. 2021; Yang et al. 2023), CLIP-based approaches (Radford et al. 2021; Chun 2023), and our approach. We employ the variance (var) of R@1 at different noise ratios to illustrate "performance stability". Obviously, the noise-rectify-based methods exhibit unstable performance, with a considerably larger variance than ours. Additionally, the CLIP-based methods also lack consistent performance with increasing noise, even though CLIP is a powerful pre-trained model. Most existing noise-rectify paradigms rely on collaborative rectification with multiple models. Owing to the limitations of the rectifying mechanism, matching performance under high noise is unstable. In these works (Huang et al. 2021; Yang et al. 2023), the new labels are entirely estimated by DNN models. At a high noise ratio, some indistinguishable noisy correspondences are prone to be directly learned and remembered by the DNNs, ultimately leading to a dramatic drop in performance. Existing methods emphasize the "discriminative learning" ability but ignore "stability". In our opinion, two essential abilities are required for noise-robust fine-tuning of large visual-language models on noisy downstream tasks: 1) the ability to distinguish between noisy and clean samples, and 2) the ability to maintain stable "discriminative learning" with increasing noise. To address the aforementioned challenges, we propose a novel approach named Negative Pre-aware Cross-modal (NPC) matching.
NPC adopts a unique Negative Pre-aware (NP) paradigm for robust learning. Unlike previous paradigms that mainly focus on noise filtering or correction, the NP paradigm adaptively assesses the potential negative impact of each sample before the model learns from it (see Fig. 1(a)). DNNs tend to prioritize learning easy samples over noisy and challenging ones (Arpit et al. 2017; Xia et al. 2021). As it gradually fits noise samples, the model begins to generate incorrect predictions (Liu et al. 2020). In other words, once the model has learned a noise pair, fitting certain specific clean samples becomes more challenging; these clean samples usually have images or texts similar to those of the noise pair. Inspired by this phenomenon, our NPC uses easily distinguishable clean samples to estimate negative impacts. We rigorously choose a reliable clean subset from the training data by using a Gaussian Mixture Model (Li, Socher, and Hoi 2020; Permuter, Francos, and Jermyn 2006) to fit the loss distribution of the pairs. High-confidence clean samples are maintained in a Memory Bank (MB), which is used to assist the model in estimating negative impact prior to full model training. A small confidence weight is assigned to samples with high negative impact. The main contributions are summarized as follows:
• We highlight the challenge of fine-tuning large visual-language models on noisy downstream tasks, i.e., how to achieve robust learning in cross-modal matching with an increasing amount of noise.
• We introduce the Negative Pre-aware Cross-modal (NPC) matching paradigm by establishing a memory bank for negative-impact estimation. We employ the memory entries to allocate confidence weights (w) to the samples. These components constitute the cornerstones of stable and highly noise-resistant performance.
• Extensive experiments are conducted on two manually annotated datasets and a real-world dataset, showcasing NPC's superiority over the state-of-the-art methods.
Moreover, with increasing noise, both quantitative and qualitative results affirm that NPC maintains notably higher performance stability than previous methods.

Related Works

Image-text Matching

Typical image-text matching methods align data from different modalities to measure similarity. Early works (Faghri et al. 2018; Song and Soleymani 2019; Wang et al. 2018; Qian et al. 2021) mainly focus on global alignment. Some prior works (Lee et al. 2018; Li et al. 2019a; Diao et al. 2021; Zhang et al. 2023) adopt attention mechanisms to achieve fine-grained local alignment. Subsequently, many works (Chun et al. 2021; Chun 2023; Li et al. 2022) are devoted to modeling the many-to-many relationships in image-caption pairs.

Recently, with the success of transformer-based vision and language models (Dosovitskiy et al. 2021; Devlin et al. 2019), vision-language pre-training (VLP) models such as CLIP (Radford et al. 2021) have shown strong performance on multiple cross-modal tasks (Jiang and Ye 2023; You et al. 2023). Although VLP models possess impressive zero-shot ability, they still reveal vulnerabilities when trained on noisy datasets for specific downstream tasks. In this paper, we employ CLIP as our backbone and introduce an anti-noise learning strategy.

Cross-modal Noise-robust Learning

Huang et al. (Huang et al. 2021) first tackled noisy correspondence, which refers to mismatched cross-modal pairs rather than incorrect annotations. Since then, several approaches (Han et al. 2023; Yang et al. 2023; Ye et al. 2022; Ye and Yuen 2020) have developed the noise-rectify process for various cross-modal tasks. They can be categorized into two groups: noise correction and noise re-weighting. Noise-correction methods achieve robust learning by correcting the similarity (Han et al. 2023) or the correspondence label (Huang et al. 2021; Yang et al. 2023) of noise pairs. Noise re-weighting methods (Qin et al.
2022) degrade the contribution of noise samples to achieve robust learning. All these methods require splitting a noise subset from the original training dataset. Subsequently, they proceed with rectification within this subset. Nonetheless, as noise increases, the imprecise subset division and inaccurate rectification can amplify adverse effects. Different from these works, we sidestep the problem by forecasting per-sample negative impact following the novel NP paradigm. Proposed Method Preliminary Problem Definition. Given a dataset D = {(Ii, Ti)}N i=1, where (Ii, Ti) is the ith image-text pair, and N denotes the data size. The goal of image-text matching is to align the visual and textual modalities within a shared space to calculate The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7342 Annotations 𝐺𝑀𝑀modeling 𝑀𝐵𝑏𝑎𝑡𝑐ℎ 𝑏𝑎𝑡𝑐ℎ Step 1: Pre-aware of negative impact Step 2: Training base model with 𝑤 Copy all parameters 𝐴′𝑠 𝐴𝑠 𝐴′𝑠+1 𝐿𝐶𝐸(𝑏𝑎𝑡𝑐ℎ) Update 𝐴𝑠′ to 𝐴𝑠+1 ′ 𝐴𝑠+1 𝐴𝑠 Step 1 Update 𝐴𝑠to 𝐴𝑠+1 Step 2 Base model 𝐿𝐶𝐸 𝑠(𝑀𝐵𝑏𝑎𝑡𝑐ℎ) 𝐿𝐶𝐸 𝑠+1(𝑀𝐵𝑏𝑎𝑡𝑐ℎ) Confidence weight Updated base model for next training (a) (b) Confidence weight 𝒘𝐿𝐶𝐸𝑏𝑎𝑡𝑐ℎ+ 𝐿𝑀𝐵 𝒘 𝒘 Figure 2: (a) Illustrating the NPC training pipeline. Given a batch of image-text pairs, we select their corresponding memory entries from a strict clean set divided by GMM as inputs. Then we optimize the base model in two steps: the first step aims to estimate the negative impact and obtain per-sample confidence weight w. The second step is training the base model with w. (b) Illustrating two training steps. We first share all parameters of the base model As to its siamese model A′ s. Then we train the model A′ s on the batch samples, obtaining the model A′ s+1. The negative impact of each sample can be calculated by comparing its loss of corresponding memory entry on A′ s and A′ s+1. 
If the loss on A'_{s+1} is higher than that on A'_s, the sample brings a negative impact to the model, and we give it a low confidence weight. After the negative-aware process, the model A_s is trained with the re-weighted samples and the memory bank, generating the robust target model A_{s+1}.

the similarity following Eq. 1,

$$S(I_i, T_j) = \frac{f(I_i) \cdot g(T_j)}{\lVert f(I_i) \rVert \, \lVert g(T_j) \rVert}, \qquad (1)$$

where f(·) and g(·) serve as feature extractors for the two modalities. Generally, positive pairs exhibit higher similarity scores, whereas negative pairs show lower ones.

Revisiting the CLIP-based Solution. With the emergence of the VLP model CLIP (Radford et al. 2021) as a compelling option for cross-modal downstream tasks, we employ CLIP as the pre-trained backbone for the proposed NPC approach. CLIP trains the visual and textual feature extractors by minimizing the symmetric cross-entropy loss L_CE(I_i, T_i), defined as follows:

$$L_{CE}(I_i, T_i) = CE(I_i, T_i) + CE(T_i, I_i),$$
$$CE(x_i, y_i) = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp(S(x_i, y_i))}{\sum_{j=1}^{N} \exp(S(x_i, y_j))}. \qquad (2)$$

However, Eq. 2 works effectively only under the assumption that (I_i, T_i) constitutes a positive pair. When (I_i, T_i) is a noisy correspondence, relying solely on Eq. 2 can substantially harm the model. Fig. 1 provides a clear visual representation: when the noise ratio rises from 20% to 60%, CLIP's R@1 performance declines steeply from 82.3% to 66.3%. Therefore, the NPC approach is introduced to enhance the stability and robustness of pre-trained models against noise. The training pipeline, depicted in Fig. 2, comprises two main components, elaborated in the subsequent sections.

Figure 3: The proportion of noise and clean samples in the clean set, obtained through the GMM at different thresholds τ.
Generally, samples with a posterior probability of p_i ≥ τ are included in the clean set; inevitably, some noisy samples remain in it. The threshold τ = 0.99 ensures that the clean set selected from either a low (e.g., 20%) or a high (e.g., 60%) noise-ratio training set is virtually noise-free.

Memory Bank Construction

We propose to estimate the negative impact that each sample brings to the model during the training process. A direct approach is to evaluate the performance change of the model before and after training. Limited by the high cost of evaluating on the test set, we instead construct corresponding evaluation entries for each sample, which together form a Memory Bank (MB). Concretely, we select these entries from a reliable clean set to guarantee the accuracy of the evaluation. Since DNNs tend to learn easy patterns before noisy and hard ones (Arpit et al. 2017; Xia et al. 2021), clean samples typically exhibit lower loss values than noisy or hard ones. Based on this, we leverage the difference in loss distribution among samples to discern clean pairs. Following NCR (Huang et al. 2021), we utilize a two-component Gaussian Mixture Model to fit the distribution of per-sample loss over the training dataset:

$$p(z \mid \theta) = \sum_{k=1}^{K} \alpha_k \, \phi(z \mid \theta_k), \qquad (3)$$

where α_k represents the mixture coefficient and φ(z | θ_k) denotes the probability density of the k-th component. The posterior probability computed by Eq. 4 serves as the clean probability p_i of the i-th sample:

$$p_i = p(\theta_k \mid z_i) = \frac{p(\theta_k) \, p(z_i \mid \theta_k)}{p(z_i)}. \qquad (4)$$

Here, θ_k refers to the Gaussian component with the lower mean. The samples with p_i ≥ τ are considered clean, as indicated in Eq. 5:

$$D_c = \{(I_j, T_j) \mid p_j \ge \tau\}. \qquad (5)$$

Fig. 3 illustrates the proportion of noisy and clean samples in the selected dataset D_c under varied threshold τ. We perform the strict selection using τ = 0.99 to obtain the clean set D_c, practically devoid of noise.
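As a concrete illustration, the clean-set selection of Eqs. 3-5 can be sketched as below. This is a minimal sketch, assuming per-sample losses have already been computed; scikit-learn's GaussianMixture stands in for the two-component GMM fitting, and all names are illustrative rather than taken from the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_clean_set(losses, tau=0.99):
    """Fit a 2-component GMM to per-sample losses (Eq. 3) and keep samples
    whose posterior under the low-mean (clean) component is >= tau (Eqs. 4-5)."""
    z = np.asarray(losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(z)
    post = gmm.predict_proba(z)                   # posterior p(theta_k | z_i)
    clean_component = int(np.argmin(gmm.means_))  # component with the lower mean
    p_clean = post[:, clean_component]
    return np.where(p_clean >= tau)[0]            # indices of the clean set D_c
```

With a strict threshold such as τ = 0.99, only samples that the low-loss component explains almost unambiguously survive, mirroring the "practically noise-free" selection above.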
Strict selection is a prerequisite to ensure the reliability of the memory bank. Next, we select evaluation entries from the strict clean set for each sample to construct the memory bank. For each pair (I_i, T_i) in the training set, we first select an image-text pair (I^I_i, T^I_i) from D_c for I_i, where the image in this pair (I^I_i) exhibits the highest cosine similarity (Eq. 1) with I_i. Similarly, we choose an image-text pair (I^T_i, T^T_i) for T_i. The constructed memory bank can be defined as MB = {(I^I_i, T^I_i), (I^T_i, T^T_i)}_{i=1}^{N}.

Pre-aware of the Negative Impact

An intuitive fact is that when the model learns a noisy sample, its prediction accuracy on related clean samples declines. Therefore, after a sample is trained, we can determine its degree of negative impact through the model's performance on related clean samples. To estimate the negative impact of each sample, we have built the related clean evaluation entries for each sample, which together form the Memory Bank (MB). During training with a batch of size m, as shown in Fig. 2, both the batch data and their corresponding memory entry set MB_batch = {b_1, b_2, ..., b_m} are input into the model simultaneously. In the initial phase of each batch's training, the base model A shares all parameters with A'. It is worth noting that the models A and A' update separately and independently. The purpose of A' is to perceive the negative impact of each sample in the batch by assessing the performance change of the model on MB_batch after training. We use the loss to denote the performance; that is, a low loss generally means the model performs well on MB_batch. For the image-text pair (I_k, T_k), the losses of its evaluation entry b_k on both i2t and t2i can be computed by:

$$p_k = CE(I^I_k, T^I_k) + CE(I^T_k, T^T_k), \qquad q_k = CE(T^I_k, I^I_k) + CE(T^T_k, I^T_k). \qquad (6)$$

Denote the model before and after training as A'_s and A'_{s+1}, respectively.
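The nearest-neighbor entry selection described above can be sketched as follows, assuming precomputed, L2-normalized image and text features (so the dot product equals the cosine similarity of Eq. 1); the function and variable names are illustrative, not from the paper's code.

```python
import numpy as np

def build_memory_bank(img_feats, txt_feats, clean_idx):
    """For each sample i, return the clean-set indices of its image-side
    entry (I^I_i, T^I_i) and text-side entry (I^T_i, T^T_i)."""
    clean_idx = np.asarray(clean_idx)
    clean_img = img_feats[clean_idx]   # features of clean-set images
    clean_txt = txt_feats[clean_idx]   # features of clean-set texts
    # cosine similarity reduces to a dot product for normalized features
    img_entry = clean_idx[np.argmax(img_feats @ clean_img.T, axis=1)]
    txt_entry = clean_idx[np.argmax(txt_feats @ clean_txt.T, axis=1)]
    return img_entry, txt_entry
```

Each training pair thus gets two clean evaluation entries, whose losses before and after a training step (Eq. 6) serve as the probe for that pair's negative impact.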
The performance change of the model after the sample (I_k, T_k) is trained can be calculated by:

$$r_k = \frac{1}{2} \left( \frac{p_k^{s}}{p_k^{s+1}} + \frac{q_k^{s}}{q_k^{s+1}} \right). \qquad (7)$$

When r_k < 1, the losses p_k and q_k have increased after training, which means that the model's ability to predict the correspondence of the clean pairs related to the sample (I_k, T_k) has declined after training on it. Thus, (I_k, T_k) has a negative impact on the model A'. We utilize the confidence weight w_k to quantify the negative impact of the pair (I_k, T_k) following Eq. 8; a sample with a high negative impact (i.e., a low r_k) should receive a small confidence weight w_k:

$$w_k = \begin{cases} \tanh(r_k), & r_k < 1 \\ 1, & \text{otherwise} \end{cases} \qquad (8)$$

A sample with r_k < 1 brings a negative impact to the model, so we assign it a confidence weight w_k < 1 computed by the hyperbolic tangent; for samples with r_k ≥ 1, we assign w_k = 1. In this way, we can estimate the negative impact of each sample in the batch on the base model A.

Re-training

After the negative-impact evaluation, we re-train the model A to obtain the robust target model A_{s+1}. To avoid the detriment of samples with a negative impact on the base model A, we re-weight the symmetric cross-entropy loss:

$$L_{RCE} = \frac{1}{m} \sum_{k=1}^{m} w_k \, L_{CE}(I_k, T_k). \qquad (9)$$

For these detrimental samples, the labels are not reliable. To further mitigate the detriment of these unreliable labels, we employ the related memory entries to help the model learn the correct correspondences (Eq. 10):

$$L_{MB} = \frac{1}{m} \sum_{k=1}^{m} \left( L_{CE}(I^I_k, T^I_k) + L_{CE}(I^T_k, T^T_k) \right). \qquad (10)$$

Thus, the total objective in the re-training process is:

$$L_{total} = L_{RCE} + L_{MB}. \qquad (11)$$

Experiments

Experimental Setting

Datasets and Evaluation Metrics. The proposed NPC is evaluated on three benchmark datasets, MSCOCO (Lin et al. 2014), Flickr30K (Young et al. 2014), and CC120K:
• MSCOCO contains 123,287 images with 5 annotated captions per image. Following previous works (Huang et al.
2021), we use 113,287 images for training, 5,000 images for validation, and 5,000 images for testing.
• Flickr30K contains 31,783 images with 5 annotated texts per image. Following previous works (Huang et al. 2021), we use 29,783 images for training, 1,000 images for validation, and 1,000 images for testing.
• CC120K. We randomly sample a subset from the real-world dataset Conceptual Captions (Sharma et al. 2018). This dataset is harvested from the Internet, with about 3%-20% incorrect image-text pairs. CC120K contains 120,851 images with a single caption per image. In our experiments, we use 118,851 images for training, 1,000 images for validation, and 1,000 images for testing.

The widely-used metric Recall@K (R@K) is used to evaluate the performance of image-text matching with K = 1, 5, and 10. The variance (var) of R@1 at different noise ratios is used to evaluate the performance stability of each approach, with a lower var indicating higher stability.

Implementation Details. NPC can enhance noise resistance and stability in various cross-modal matching models. In this paper, CLIP (Radford et al. 2021) with ViT-B/32 is implemented as the baseline. Both the baseline and NPC are trained on a single RTX 3090 GPU, optimized by AdamW (Loshchilov and Hutter 2019). We train CLIP and NPC with learning rates of 5e-7 and 2e-7, respectively, and a weight decay of 0.2. In all experiments, we train the model for 5 epochs with a mini-batch size of 256, and the hyperparameter τ is set to 0.99.

Table 1: Image-Text Matching on MSCOCO 1K and Flickr30K.
noise | method | MSCOCO 1K i2t R@1/R@5/R@10 | MSCOCO 1K t2i R@1/R@5/R@10 | Flickr30K i2t R@1/R@5/R@10 | Flickr30K t2i R@1/R@5/R@10
0%    | SCAN  | 69.2/93.6/97.6 | 56.0/86.5/93.5 | 67.4/90.3/95.8 | 48.6/77.7/85.2
0%    | SAF   | 76.1/95.4/98.3 | 61.8/89.4/95.3 | 73.7/93.3/96.3 | 56.1/81.5/88.0
0%    | NCR   | 78.7/95.8/98.5 | 63.3/90.4/95.8 | 77.3/94.0/97.5 | 59.6/84.4/89.9
0%    | DECL  | 79.1/96.3/98.7 | 63.3/90.1/95.6 | 79.8/94.9/97.4 | 59.5/83.9/89.5
0%    | BiCro | 79.1/96.4/98.6 | 63.8/90.4/96.0 | 81.7/95.3/98.4 | 61.6/85.6/90.8
0%    | CLIP  | 79.9/95.1/98.1 | 65.0/90.3/98.1 | 86.2/97.6/99.2 | 72.9/92.3/96.0
0%    | NPC   | 82.2/96.5/98.7 | 68.3/92.0/98.7 | 87.9/98.1/99.4 | 75.0/93.7/97.2
20%   | SCAN  | 62.2/90.0/96.1 | 46.2/80.8/89.2 | 58.5/81.0/90.8 | 35.5/65.0/75.2
20%   | SAF   | 71.5/94.0/97.5 | 57.8/86.4/91.9 | 62.8/88.7/93.9 | 49.7/73.6/78.0
20%   | NCR   | 77.7/95.5/98.2 | 62.5/89.3/95.3 | 73.5/93.2/96.6 | 56.9/82.4/88.5
20%   | DECL  | 77.5/95.9/98.4 | 61.7/89.3/95.4 | 77.5/93.8/97.0 | 56.1/81.8/88.5
20%   | BiCro | 78.8/96.1/98.6 | 63.7/90.3/95.7 | 78.1/94.4/97.5 | 60.4/84.4/89.9
20%   | CLIP  | 75.0/93.1/97.2 | 58.7/86.1/97.2 | 82.3/95.5/98.3 | 66.0/88.5/93.5
20%   | NPC   | 79.9/95.9/98.4 | 66.3/90.8/98.4 | 87.3/97.5/98.8 | 72.9/92.1/95.8
40%   | SCAN  | 42.9/74.6/85.1 | 24.2/52.6/63.8 | 26.0/57.4/71.8 | 17.8/40.5/51.4
40%   | SAF   | 13.5/43.8/48.2 | 16.0/39.0/50.8 | 7.4/19.6/26.7  | 4.4/12.2/17.0
40%   | NCR   | 74.7/94.6/98.0 | 59.6/88.1/94.7 | 68.1/89.6/94.8 | 51.4/78.4/84.8
40%   | DECL  | 75.6/95.5/98.3 | 59.5/88.3/94.8 | 72.7/92.3/95.4 | 53.4/79.4/86.4
40%   | BiCro | 77.0/95.9/98.3 | 61.8/89.2/94.9 | 74.6/92.7/96.2 | 55.5/81.1/87.4
40%   | CLIP  | 70.7/91.7/96.2 | 54.7/83.4/96.2 | 76.2/93.3/96.5 | 59.4/85.0/90.9
40%   | NPC   | 79.4/95.1/98.3 | 65.0/90.1/98.3 | 85.6/97.5/98.4 | 71.3/91.3/95.3
60%   | SCAN  | 29.9/60.9/74.8 | 0.9/2.4/4.1    | 13.6/36.5/50.3 | 4.8/13.6/19.8
60%   | SAF   | 0.1/0.5/0.7    | 0.8/3.5/6.3    | 0.1/1.5/2.8    | 0.4/1.2/2.3
60%   | NCR   | 0.1/0.3/0.4    | 0.1/0.5/1.0    | 13.9/37.7/50.5 | 11.0/30.1/41.4
60%   | DECL  | 73.0/94.2/97.9 | 57.0/86.6/93.8 | 65.2/88.4/94.0 | 46.8/74.0/82.2
60%   | BiCro | 73.9/94.4/97.8 | 58.3/87.2/93.9 | 67.6/90.8/94.4 | 51.2/77.6/84.7
60%   | CLIP  | 67.0/88.8/95.0 | 49.7/79.6/95.0 | 66.3/87.3/93.0 | 52.1/78.8/87.4
60%   | NPC   | 78.2/94.4/97.7 | 63.1/89.0/97.7 | 83.0/95.9/98.6 | 68.1/89.6/94.2

Comparison with State of the Art

Quantitative Comparison. To illustrate the effectiveness, we compare NPC with various approaches, including the general cross-modal matching methods SCAN (Lee et al. 2018) and SAF (Diao et al. 2021), the noise-robust learning methods NCR (Huang et al. 2021), DECL (Qin et al. 2022), and BiCro (Yang et al. 2023), and CLIP with fine-tuning (Radford et al. 2021).
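The Recall@K metric used throughout these experiments can be sketched as below; this simplified version assumes a single ground-truth gallery item per query (for datasets with 5 captions per image, the standard protocol counts a hit if any ground-truth caption appears in the top K).

```python
import numpy as np

def recall_at_k(sim, gt, k):
    """sim: (num_queries, num_gallery) similarity matrix.
    gt: ground-truth gallery index per query.
    Returns R@K as a percentage."""
    topk = np.argsort(-sim, axis=1)[:, :k]            # top-K gallery indices per query
    hits = (topk == np.asarray(gt)[:, None]).any(axis=1)
    return 100.0 * hits.mean()
```

The stability variance reported alongside R@1 is then simply the variance of these scores across the different noise ratios.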
It is worth noting that CLIP is the baseline of our method. The results are shown in Table 1. NPC significantly outperforms all methods across all noise ratios. Notably, on Flickr30K with a 60% noise ratio, NPC outperforms the current state-of-the-art approach BiCro by a large R@1 margin: the R@1 performance of NPC is 15.4% higher than BiCro on image-to-text matching (i2t) and 16.9% higher on text-to-image matching (t2i). Compared to the baseline CLIP, NPC achieves substantial improvement on all metrics and benchmarks. Furthermore, as the noise ratio increases, the performance gap between NPC and the baseline becomes larger. For instance, on the MSCOCO 1K set, as the noise ratio ranges from 0% to 60%, the R@1 gap between NPC and the baseline increases from 2.3% to 11.2% on i2t, and from 3.3% to 13.4% on t2i. This phenomenon strongly demonstrates the effectiveness of NPC for robust learning.

Figure 4: Variation and variance (var) of matching performance at different noise ratios (NPC var = 3.61, CLIP var = 56.40, BiCro var = 27.11, DECL var = 31.21, NCR var = 664.85).

Table 2: Comparison with the baseline on CC120K.
method | i2t R@1/R@5/R@10 | t2i R@1/R@5/R@10
CLIP   | 68.8/87.0/92.9 | 67.8/86.4/90.9
NPC    | 71.1/92.0/96.2 | 73.0/90.5/94.8

Stability Comparison. To further explore the superiority of NPC for stable learning, we illustrate the R@1 change curves of different methods under different noise ratios in Fig. 4. We observe that NPC outperforms all other methods at all noise ratios. Meanwhile, as the noise ratio increases, the performance decline of NPC is significantly smaller than that of the other methods.
Furthermore, we calculate the variance of each method across the different noise ratios to quantify stability. NPC shows remarkable stability with a variance of only 3.61, outperforming all other methods by a huge gap. Compared to the baseline CLIP, NPC reduces the variance by 52.79. This large decrease in variance indicates that performance stability is significantly improved by NPC.

Comparison with ViT-B/32 Backbone Methods

In Table 2, we compare NPC with the baseline on CC120K, which contains real noisy correspondences. From the results, our proposed method outperforms the baseline by a considerable margin on all metrics. Specifically, NPC is 2.3% and 5.2% higher than CLIP on i2t and t2i R@1, respectively. For a fair comparison, we also compare NPC to methods with the same CLIP ViT-B/32-based backbone, including VSE∞ (Chen et al. 2021), PCME (Chun et al. 2021), PCME++ (Chun 2023), and PAU (Li et al. 2023). The results on noise-free MSCOCO 5K are shown in Table 3, which demonstrates that NPC consistently outperforms the other methods on all metrics. Besides, we also report the average R@1 of image-to-text and text-to-image on MSCOCO 1K and 5K in Table 4 at different noise ratios.

Table 3: Comparison of methods with the ViT-B/32 backbone on noise-free MSCOCO 5K.
method | i2t R@1/R@5/R@10 | t2i R@1/R@5/R@10
VSE∞   | 60.2/85.4/92.2 | 46.9/75.5/84.8
PCME   | 59.9/85.8/92.3 | 46.1/75.0/84.6
PCME++ | 61.8/87.0/93.0 | 47.9/76.5/85.4
PAU    | 63.6/85.2/92.2 | 46.8/74.4/83.7
CLIP   | 62.2/84.6/90.9 | 45.1/72.3/81.8
NPC    | 65.4/87.3/93.1 | 48.5/75.4/84.4

Table 4: Comparison of methods with the ViT-B/32 backbone on noisy MSCOCO.
noise | method | 1K R@1 | 5K R@1 | 1K RSUM
20%   | VSE∞   | 72.0 | 51.4 | 520.2
20%   | PCME   | 69.9 | 48.1 | 519.3
20%   | PCME++ | 70.8 | 49.5 | 522.4
20%   | PAU    | 71.4 | 51.7 | 521.5
20%   | CLIP   | 66.8 | 47.2 | 507.2
20%   | NPC    | 73.1 | 53.8 | 529.8
50%   | VSE    | 38.5 | 18.4 | 390.5
50%   | PCME   | 65.8 | 43.0 | 505.7
50%   | PCME++ | 65.7 | 44.0 | 503.9
50%   | PAU    | 69.3 | 49.6 | 513.4
50%   | CLIP   | 60.9 | 41.4 | 486.0
50%   | NPC    | 71.3 | 51.9 | 523.4
Meanwhile, the sum of R@1, R@5, and R@10 over both i2t and t2i on MSCOCO 1K (RSUM) is also reported. As the noise ratio increases, NPC outperforms the others by larger margins, surpassing the second-best model PAU by 2.0% at the 20% noise ratio and by 2.3% at the 50% noise ratio on 5K R@1. All these experiments effectively demonstrate the effectiveness and superiority of NPC.

Ablation Study

Analysis on w and L_MB. According to Eq. 11, the re-training process has two important components: the confidence weight w and the memory bank loss L_MB. To explore the effect of each component, we exhaustively ablate them on Flickr30K with three noise ratios. The results are shown in Table 6. We observe that both w and L_MB obtain significant performance improvements at different noise ratios, and they bring almost the same improvements for NPC compared with the baseline. Specifically, when training with 60% noise, the ablated NPC exceeds the baseline by 11.8% and 6.95% on the average R@1 of image-to-text and text-to-image, indicating that w and L_MB have independent anti-noise effects. Moreover, the full NPC outperforms the baseline by a much larger margin, indicating that the two components complement each other and collaborate to achieve robust learning. The reason w and L_MB achieve robust learning is that the confidence weight w mitigates the degree of negative impact from noisy samples on the model, while the memory bank loss L_MB provides correct correspondences for these noisy samples.

Figure 5: Some noisy correspondences in the Flickr30K training set under 40% noise. (a) An image and its 5 annotated captions, with noisy captions in red and correct captions in green. (b) Captions and their noisily corresponding images. The average confidence weight (w) over all epochs is shown for each example; the w of correctly matched pairs (w = 1) is obviously larger than that of noisy pairs (e.g., w ≈ 0.04).

Table 5: Ablation study of threshold τ on Flickr30K.
noise | τ    | i2t R@1/R@5/R@10 | t2i R@1/R@5/R@10
0%    | 0.5  | 87.2/98.1/99.2 | 74.5/93.7/96.9
0%    | 0.7  | 87.6/97.9/99.4 | 74.9/93.5/97.1
0%    | 0.99 | 87.9/98.1/99.4 | 75.0/93.7/97.2
60%   | 0.5  | 78.3/94.2/96.7 | 59.2/82.6/88.8
60%   | 0.7  | 82.2/95.9/98.3 | 67.8/89.4/94.2
60%   | 0.99 | 83.1/95.9/98.6 | 68.1/89.6/94.2

Table 6: Ablation studies for w and L_MB on Flickr30K.
noise | w | L_MB | i2t R@1/R@5/R@10 | t2i R@1/R@5/R@10
20%   | ✓ | ✓    | 87.3/97.5/98.8 | 72.9/92.1/95.8
20%   | ✓ |      | 85.3/97.3/98.8 | 71.8/91.3/95.2
20%   |   | ✓    | 85.4/97.2/98.6 | 71.9/91.4/95.2
20%   |   |      | 82.3/95.5/98.3 | 66.0/88.5/93.5
40%   | ✓ | ✓    | 85.6/97.5/98.4 | 71.3/91.3/95.3
40%   | ✓ |      | 79.9/95.5/97.7 | 62.4/85.5/91.1
40%   |   | ✓    | 79.0/95.0/97.5 | 62.3/85.2/91.1
40%   |   |      | 76.2/93.3/96.5 | 59.4/85.0/90.9
60%   | ✓ | ✓    | 83.0/95.9/98.6 | 68.1/89.6/94.2
60%   | ✓ |      | 78.2/93.5/96.8 | 59.0/82.5/88.4
60%   |   | ✓    | 78.0/93.9/96.6 | 59.1/82.3/88.7
60%   |   |      | 66.3/87.3/93.0 | 52.1/78.8/87.4

Analysis on hyperparameter τ. τ is an important hyperparameter that controls the purity of the clean set D_c in Eq. 5 and of the memory bank MB.
A smaller value of τ leads to a larger D_c that potentially contains more noisy pairs. The purity of D_c directly impacts the quality of MB, which in turn influences the model's matching performance. To explore the impact of the selection threshold τ, we report the matching performance with different τ on Flickr30K under 0% and 60% noise ratios in Table 5. The results show that when training with 0% noise, varying τ has no noticeable impact on performance. However, when training with 60% noise, R@1 performance drops by 4.8% and 7.0% as τ changes from 0.99 to 0.5. This implies that a rigorous selection of D_c is necessary to establish a trustworthy MB.

Visualization

To illustrate the effectiveness of NPC, we showcase examples from Flickr30K in Fig. 5, which depicts the average confidence weight (w) for each pair across five epochs. Noisy pairs consistently exhibit notably low w values. Especially in Fig. 5 (a), there is a very obvious contrast between the w of the same image with correct annotations and with noisy annotations. That is to say, with the support of MB, NPC effectively differentiates between clean and noisy correspondences. It also prevents the model from learning errors by assigning a small w to noisy correspondences.

Conclusion

This paper studies a novel challenge: maintaining stable performance for a noise-robust learning model as noise increases. To tackle this, a novel approach, NPC, is proposed. We introduce a novel NP paradigm to estimate the per-sample negative impact before a sample is learned by the model. To obtain the negative impact, a memory bank of the training set is constructed by strict selection. To mitigate the negative impact on the model, each sample is assigned a confidence weight based on the memory bank. Extensive experiments indicate the effectiveness of each component in our method.
NPC achieves notable enhancements in matching accuracy and performance stability compared to the state-of-the-art approaches on both noisy and noise-free datasets.

Acknowledgement. This work is partially supported by the National Natural Science Foundation of China under Grant 62176188 and the Key Research and Development Program of Hubei Province (2021BAD175).

References

Arpit, D.; Jastrzebski, S.; Ballas, N.; Krueger, D.; Bengio, E.; Kanwal, M. S.; Maharaj, T.; Fischer, A.; Courville, A. C.; Bengio, Y.; and Lacoste-Julien, S. 2017. A Closer Look at Memorization in Deep Networks. In Proceedings of the 34th International Conference on Machine Learning (ICML-17), volume 70 of Proceedings of Machine Learning Research, 233–242. Sydney, NSW, Australia: PMLR.

Chen, J.; Hu, H.; Wu, H.; Jiang, Y.; and Wang, C. 2021. Learning the Best Pooling Strategy for Visual Semantic Embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR-21), 15789–15798. Virtual: IEEE.

Chun, S. 2023. Improved Probabilistic Image-Text Representations. arXiv:2305.18171.

Chun, S.; Oh, S. J.; de Rezende, R. S.; Kalantidis, Y.; and Larlus, D. 2021. Probabilistic Embeddings for Cross-Modal Retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR-21), 8415–8424. Virtual: IEEE.

Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT-19), 4171–4186. Minneapolis, MN, USA: Association for Computational Linguistics.

Diao, H.; Zhang, Y.; Ma, L.; and Lu, H. 2021. Similarity reasoning and filtration for image-text matching. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-21), 1218–1226. Palo Alto, California: AAAI Press.
Ding, M.; Yang, Z.; Hong, W.; Zheng, W.; Zhou, C.; Yin, D.; Lin, J.; Zou, X.; Shao, Z.; Yang, H.; and Tang, J. 2021. CogView: Mastering Text-to-Image Generation via Transformers. In Advances in Neural Information Processing Systems (NeurIPS-21), 19822–19835. Virtual.

Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv:2010.11929.

Faghri, F.; Fleet, D. J.; Kiros, J. R.; and Fidler, S. 2018. VSE++: Improving Visual-Semantic Embeddings with Hard Negatives. In British Machine Vision Conference 2018 (BMVC-18), 12. Newcastle, UK: BMVA Press.

Han, H.; Miao, K.; Zheng, Q.; and Luo, M. 2023. Noisy Correspondence Learning with Meta Similarity Correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR-23), 7517–7526. IEEE.

Huang, Z.; Niu, G.; Liu, X.; Ding, W.; Xiao, X.; Wu, H.; and Peng, X. 2021. Learning with noisy correspondence for cross-modal matching. In Advances in Neural Information Processing Systems (NeurIPS-21), volume 34, 29406–29419. Virtual.

Jiang, D.; and Ye, M. 2023. Cross-Modal Implicit Relation Reasoning and Aligning for Text-to-Image Person Retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR-23), 2787–2797.

Lee, K.-H.; Chen, X.; Hua, G.; Hu, H.; and He, X. 2018. Stacked cross attention for image-text matching. In Proceedings of the European Conference on Computer Vision (ECCV-18), 201–216. Munich, Germany: Springer.

Lei, S. W.; Gao, D.; Wu, J. Z.; Wang, Y.; Liu, W.; Zhang, M.; and Shou, M. Z. 2023. Symbolic replay: Scene graph as prompt for continual learning on VQA task. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-23), 1250–1259. Washington, DC, USA: AAAI Press.

Li, H.; Song, J.; Gao, L.; Zeng, P.; Zhang, H.; and Li, G. 2022.
A Differentiable Semantic Metric Approximation in Probabilistic Embedding for Cross-Modal Retrieval. In Advances in Neural Information Processing Systems (NeurIPS-22), volume 35, 11934–11946.

Li, H.; Song, J.; Gao, L.; Zhu, X.; and Shen, H. T. 2023. Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval. In NeurIPS.

Li, J.; Socher, R.; and Hoi, S. C. H. 2020. DivideMix: Learning with Noisy Labels as Semi-supervised Learning. arXiv:2002.07394.

Li, K.; Zhang, Y.; Li, K.; Li, Y.; and Fu, Y. 2019a. Visual semantic reasoning for image-text matching. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV-19), 4654–4662. Seoul, Korea (South): IEEE.

Li, S.; Tao, Z.; Li, K.; and Fu, Y. 2019b. Visual to text: Survey of image and video captioning. IEEE Transactions on Emerging Topics in Computational Intelligence, 3: 297–312.

Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision (ECCV-14), 740–755. Zurich, Switzerland: Springer.

Lin, Y.; Xie, Y.; Chen, D.; Xu, Y.; Zhu, C.; and Yuan, L. 2022. Revive: Regional visual representation matters in knowledge-based visual question answering. In Advances in Neural Information Processing Systems (NeurIPS-22), volume 35, 10560–10571.

Liu, S.; Niles-Weed, J.; Razavian, N.; and Fernandez-Granda, C. 2020. Early-Learning Regularization Prevents Memorization of Noisy Labels. In Advances in Neural Information Processing Systems (NeurIPS-20), volume 33, 20331–20342.

Loshchilov, I.; and Hutter, F. 2019. Decoupled Weight Decay Regularization. In 7th International Conference on Learning Representations (ICLR-19). New Orleans, LA, USA: OpenReview.net.

Permuter, H.; Francos, J.; and Jermyn, I. 2006. A study of Gaussian mixture models of color and texture features for image classification and segmentation. Pattern Recognition, 39(4): 695–706.
Qian, S.; Xue, D.; Zhang, H.; Fang, Q.; and Xu, C. 2021. Dual adversarial graph neural networks for multi-label cross-modal retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-21), 2440–2448. Palo Alto, California: AAAI Press.

Qin, Y.; Peng, D.; Peng, X.; Wang, X.; and Hu, P. 2022. Deep evidential learning with noisy correspondence for cross-modal retrieval. In Proceedings of the 30th ACM International Conference on Multimedia (ACM MM-22), 4948–4956. New York, NY, United States: Association for Computing Machinery.

Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning (ICML-21), 8748–8763. Virtual Event: PMLR.

Sharma, P.; Ding, N.; Goodman, S.; and Soricut, R. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL-18), 2556–2565. Melbourne, Australia: Association for Computational Linguistics.

Song, Y.; and Soleymani, M. 2019. Polysemous visual-semantic embedding for cross-modal retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR-19), 1979–1988. Long Beach, CA, USA: IEEE.

Stefanini, M.; Cornia, M.; Baraldi, L.; Cascianelli, S.; Fiameni, G.; and Cucchiara, R. 2022. From show to tell: A survey on deep learning-based image captioning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45: 539–559.

Wang, L.; Li, Y.; Huang, J.; and Lazebnik, S. 2018. Learning two-branch neural networks for image-text matching tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2): 394–407.
Wang, N.; Xie, J.; Luo, H.; Cheng, Q.; Wu, J.; Jia, M.; and Li, L. 2023. Efficient Image Captioning for Edge Devices. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-23), 2608–2616. Washington, DC, USA: AAAI Press.

Xia, X.; Liu, T.; Han, B.; Gong, C.; Wang, N.; Ge, Z.; and Chang, Y. 2021. Robust early-learning: Hindering the memorization of noisy labels. In 9th International Conference on Learning Representations (ICLR-21). Virtual Event, Austria: OpenReview.net.

Yang, S.; Xu, Z.; Wang, K.; You, Y.; Yao, H.; Liu, T.; and Xu, M. 2023. BiCro: Noisy Correspondence Rectification for Multi-modality Data via Bi-directional Cross-modal Similarity Consistency. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR-23), 19883–19892. IEEE.

Ye, M.; Li, H.; Du, B.; Shen, J.; Shao, L.; and Hoi, S. C. H. 2022. Collaborative Refining for Person Re-Identification With Label Noise. IEEE Transactions on Image Processing, 31: 379–391.

Ye, M.; and Yuen, P. C. 2020. PurifyNet: A Robust Person Re-Identification Model With Noisy Labels. IEEE Transactions on Information Forensics and Security, 15: 2655–2666.

You, H.; Guo, M.; Wang, Z.; Chang, K.-W.; Baldridge, J.; and Yu, J. 2023. CoBIT: A Contrastive Bi-directional Image-Text Generation Model. arXiv:2303.13455.

Young, P.; Lai, A.; Hodosh, M.; and Hockenmaier, J. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2: 67–78.

Zhang, X.; Niu, X.; Fournier-Viger, P.; and Dai, X. 2023. Image-text Retrieval via Preserving Main Semantics of Vision. In 2023 IEEE International Conference on Multimedia and Expo (ICME), 1967–1972.

Zhou, Y.; Zhang, R.; Gu, J.; Tensmeyer, C.; Yu, T.; Chen, C.; Xu, J.; and Sun, T. 2022. TiGAN: Text-based interactive image generation and manipulation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-22), 3580–3588.
Virtual Event: AAAI Press.
Compositional Inversion for Stable Diffusion Models

Xulu Zhang1,2, Xiao-Yong Wei3,1*, Jinlin Wu2,5, Tianyi Zhang1, Zhaoxiang Zhang2,4,5, Zhen Lei2,4,5, Qing Li1
1Department of Computing, The Hong Kong Polytechnic University, Hong Kong
2Center for Artificial Intelligence and Robotics, HKISI, CAS, Hong Kong
3College of Computer Science, Sichuan University, Chengdu, China
4School of Artificial Intelligence, UCAS, Beijing, China
5State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA, Beijing, China

Abstract

Inversion methods, such as Textual Inversion, generate personalized images by incorporating concepts of interest provided by user images. However, existing methods often suffer from overfitting issues, where the dominant presence of inverted concepts leads to the absence of other desired concepts. This stems from the fact that, during inversion, the irrelevant semantics in the user images are also encoded, forcing the inverted concepts to occupy locations far from the core distribution in the embedding space. To address this issue, we propose a method that guides the inversion process towards the core distribution to obtain compositional embeddings. Additionally, we introduce a spatial regularization approach to balance the attention on the concepts being composed. Our method is designed as a post-training approach and can be seamlessly integrated with other inversion methods. Experimental results demonstrate the effectiveness of our proposed approach in mitigating the overfitting problem and generating more diverse and balanced compositions of concepts in the synthesized images. The source code is available at https://github.com/zhangxulu1996/Compositional-Inversion.

Introduction

Recently, image synthesis has witnessed remarkable performance from text-to-image diffusion models such as DALL•E (Ramesh et al. 2021), Stable Diffusion (Rombach et al. 2022), and Imagen (Saharia et al. 2022).
These models typically consist of two modules: semantic embedding and diffusion. Given a simple text prompt like “a cat chasing butterflies”, the semantic embedding module represents the semantics as embeddings, while the diffusion module transforms the embeddings into images that incorporate the desired concepts (e.g., cat, butterflies). However, these models produce concepts in a general sense, resulting in randomly assigned appearances for the cat. This limitation becomes apparent when users seek specific concepts, such as their own cat. It raises challenges for these models in the era of pursuing personalized customization.
*Corresponding Author (x1wei@polyu.edu.hk). Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Textual Inversion (TI) (Gal et al. 2022) remains a core technology for addressing this limitation. The underlying hypothesis is that an optimal point exists within the embedding space that can represent the semantics of a specific concept, even if it is difficult to describe in words. TI evaluates the distance of the current embedding to the optimal point through back-propagated gradients based on the reconstruction loss from a few sample images provided by the user. Instead of updating the model weights as in regular training, TI updates the values of the current embedding towards the optimum while keeping the weights fixed. This post-training feature enables personalization for a significantly wider range of users and researchers, as it demands fewer computational resources than the extensive requirements of pretraining or fine-tuning diffusion models. We hereafter use a star to denote an inverted specific concept (e.g., cat*), commonly referred to as a pseudo word in the literature. Despite the presence of promising outcomes, the composition of inverted concepts with other concepts proves to be challenging. As shown in Fig.
1, the results maintain fidelity to the user samples for cat*, but the concept butterflies is absent. This occurs because the method primarily emphasizes the reconstruction loss while disregarding the compositional aspect of the target concept in relation to others. Similar findings are reported in (Tewel et al. 2023), which suggests that the dominance of the inverted concepts in the generation process encroaches upon the spotlight of other concepts. However, this is simply attributed to an over-fitting problem, with the underlying rationale remaining unexplored. This paper represents an initial endeavor to delve into the underlying reasons and to offer straightforward solutions from an internal perspective. Specifically, we have discovered that Textual Inversion leads to the inverted concepts being out of distribution (OOD). Modern models are typically trained on large-scale datasets like LAION (Schuhmann et al. 2022) containing text-image pairs on the billion scale. Most existing concepts have thus been trained to be compositional with others due to their frequent occurrence in the dataset. This forms a core distribution where the pretrained concepts are easily combinable. We have evaluated the compositionality of each concept by combining it with others in prompts and testing the probability of their presence in the resulting images using object detection. In Fig. 2, the core distribution becomes evident through the visualization of the probabilities based on their coordinates in the embedding space. This visual representation clearly showcases the OOD issue of the inverted concepts.
Figure 1: Image synthesis using traditional inversion methods and the proposed compositional inversion: concepts of butterflies, street, and spaceship are absent when composed with concepts inverted with traditional methods.
Figure 2: Visualization of compositionality in the embedding space with the evident core distribution and the OOD.
The OOD results from the calculation of the reconstruction loss, which spans the entire image rather than the target concept region. This causes other concepts in the background to be “inverted” as well, degrading the purity of the semantics within the inverted concept. In Fig. 2, due to the distraction of background semantics, the inverted concept dog* converges to an OOD area instead of the theoretically more appropriate neighborhood around the concept dog. This observation is further supported by the fact that the average entropy of the inverted embeddings is 3% higher than that of the pretrained concepts. The larger entropy consequently causes the dominance of the inverted concept over others in the diffusion module. The diffusion module utilizes Transformer blocks to transfer text semantics into visual content, where embeddings are employed to construct the K, and the Q is typically initialized with random noise. Therefore, the presence of a concept heavily relies on the cross-attention of its embedding to the random noise. As the larger entropy of the inverted concept implies a broader span of dimensions to store semantics, it may have a higher probability of obtaining greater attention similarity compared to other concepts.
Figure 3: Development of the relative attention similarity and attention maps of various types of concepts.
In Fig.
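The 3% entropy gap reported above can be probed with a short sketch. The specific entropy definition used here (Shannon entropy of the normalized absolute embedding values) is an assumption for illustration, since the paper does not spell out its exact measure; `embedding_entropy`, `peaky`, and `spread` are hypothetical names.

```python
import numpy as np

def embedding_entropy(e, eps=1e-12):
    """Shannon entropy of an embedding vector, treating its
    normalized absolute values as a probability distribution
    (one plausible reading of the paper's entropy measure)."""
    p = np.abs(e) / (np.abs(e).sum() + eps)
    return float(-(p * np.log(p + eps)).sum())

# Toy comparison: a "peaky" pretrained-style embedding that stores
# semantics in few dimensions vs. a "spread-out" inverted-style
# embedding of the same dimensionality.
rng = np.random.default_rng(0)
peaky = np.zeros(768)
peaky[:32] = rng.normal(size=32)
spread = rng.normal(size=768)
```

A more spread-out embedding yields a larger entropy, which mirrors the paper's argument that inverted concepts span a broader range of dimensions.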
3, we present statistics of the development of the attention similarity between K and Q over iterations. The divergence between the inverted concept and others is much more pronounced than that between a pretrained concept and others. The cross-attention mechanism iteratively integrates the attention map of the inverted concept into other concepts, ultimately resulting in the absence of other concepts or their replacement with the inverted concept (e.g., Fig. 3). Based on the aforementioned analysis, we propose a compositional inversion approach comprising two components: a semantic inversion component which guides the embedding search towards the core distribution, and a spatial inversion component which regularizes the attention maps to avoid the dominance of the inverted concepts. The framework of the proposed method is shown in Fig. 4.
Related Work
Text-to-image synthesis has garnered significant attention for its potential applications in content creation, virtual reality, and computer graphics. The objective is to bridge the semantic gap and enable machines to understand and generate images that align with text prompts. For several years, generative adversarial networks (GANs) (Goodfellow et al. 2014; Karras, Laine, and Aila 2019) have been the dominant approach (Zhu et al. 2019; Tao et al. 2022). With recent improvements in DDPM (Ho, Jain, and Abbeel 2020) and DDIM (Song, Meng, and Ermon 2020), text-conditioned diffusion models have made remarkable progress. Building upon the latent images, the Latent Diffusion Model (LDM) (Rombach et al. 2022) was introduced and further extended to Stable Diffusion (Rombach et al. 2022), which is regarded as one of the most promising models for text-to-image synthesis. Another notable framework, Imagen (Saharia et al. 2022), takes a different approach by diffusing pixels directly using a pyramid structure, without relying on latent images.
DALL•E2 (Ramesh et al. 2022) uses a prior network that takes a text embedding as input to produce an image embedding as the input of the diffusion model.
Inversion for Customization and Personalization
As aforementioned, it is a demanding feature for the models to generate images containing specific concepts of interest (CoI) implied by user samples. This requires the models’ capacity to “invert” the samples into concept embeddings, which can be used in future prompts for customized generations. Textual Inversion (Gal et al. 2022) is one of the initial methods that directly searches for the optimal solution in the embedding space to address this issue. However, the remaining methods, although employing similar approaches of searching for inverted embeddings, rely on either retraining or fine-tuning for this purpose. For instance, DreamBooth (Ruiz et al. 2023) retrains the entire Imagen for constructing embeddings for CoI, while Custom Diffusion (Kumari et al. 2023), Perfusion (Tewel et al. 2023), SVDiff (Han et al. 2023), and Cones (Liu et al. 2023) only fine-tune partial parameters of the Stable Diffusion model. To mitigate language drift and overfitting problems, a large number of images from the same CoI class are typically utilized as regularization during the training/fine-tuning process.
Compositionality of Inverted Concepts
Current methods in the field of compositionality primarily focus on combining inverted concepts with each other rather than with a broader range of pretrained concepts. This approach, known as multi-concept composition, is related to but distinct from the scope of this paper. Existing methods include Custom Diffusion, SVDiff, Cones, and Perfusion. Custom Diffusion achieves this by merging the outputs of multiple models that have been fine-tuned to invert various CoI. It can be considered a model-level composition approach.
SVDiff manually combines objects selected from different CoI concepts as training images, enabling the model to learn to compose them. Cones evaluates the neurons’ contributions to the fidelity of inverted concepts and deactivates those with minor contributions during composition. Perfusion fuses the V components of inverted concepts to balance their contribution to generation. These methods all rely on training/fine-tuning, which requires effort and expertise to gather the regularization images. In contrast, the compositional inversion proposed in this paper is a post-training approach that can be applied to any trained or fine-tuned models and thus is compatible with all the aforementioned methods. Furthermore, the proposed method can be employed for the composition of both pretrained and inverted concepts, making this paper a study of compositionality in a more general sense.
Spatial Guidance in Text-to-Image Synthesis
In terms of imposing spatial constraints, there is another category of methods specifically designed for controlling the contours, shapes, or layouts of objects. ControlNet (Zhang, Rao, and Agrawala 2023) trains a new branch that incorporates spatial constraints as input and injects them into each layer of the diffusion module for customized synthesis. Prompt-to-prompt (Hertz et al. 2022) enables object-specific editing by replacing the attention map in the cross-attention module. GLIGEN (Li et al. 2023) designs a gated self-attention layer to incorporate spatial conditions, such as bounding boxes. Layout-control (Chen, Laina, and Vedaldi 2023) employs a training-free approach that ensures higher activation values of the attention maps within the bounding box regions. ReCo (Yang et al. 2023) achieves layout control by encoding regional tokens as part of the text prompt. The spatial inversion module in our proposed method draws inspiration from these methods in terms of controlling the layout.
However, these methods are not developed for inversion purposes but rather assume the constraints as a prior, while our focus is on automatically discovering the underlying spatial distribution without any user specifications.
Method
Preliminaries
By taking an encoder-decoder view that is similar to that of variational autoencoders (VAEs) (Kingma and Welling 2014), it is straightforward to inspect diffusion models. The encoding is more commonly called a forward process that iteratively “diffuses” a sequence of Gaussian noises $(\epsilon_t)_{t=1}^{T} \sim \mathcal{N}(0, I)$ into an image $x_0$ using a Markov chain of $T$ steps, producing a sequence of noisy samples $(x_t)_{t=1}^{T}$ with
$$x_t = \sqrt{\alpha_t}\,x_{t-1} + \sqrt{1-\alpha_t}\,\epsilon_t, \quad 1 \le t \le T, \tag{1}$$
where $\alpha_t$ controls the variance of the Gaussian noises $\epsilon_t$. It also defines a Gaussian distribution $q(x_t|x_0)$ that we can use to sample latent representations for $x_t$ in the generation. The decoding is more commonly referred to as a reverse diffusion process, in which the goal is to learn another Gaussian distribution $q(x_0|x_t)$ so that we can reconstruct $x_0$ from $x_t$. Since the Markov encoding is non-reversible, the reverse diffusion is implemented by approximating $q(x_0|x_t)$ using a model $f$ (e.g., a neural network) which is parameterized on $\theta$ and learns an estimated distribution $p_\theta$. This can be formulated as a $T$-step “denoising” process where, at the $t$-th step, it tries to reconstruct $x_0$ by removing noise from $x_t$, resulting in an estimation
$$\hat{x}_0 = f(x_t; \theta) \sim p_\theta(x_0|x_t). \tag{2}$$
The learning of the model can be done based on the loss of the estimation $\hat{x}_0$ from $x_0$.
Figure 4: The framework of the proposed method consisting of semantic and spatial inversion components.
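The Markov forward process of Eq. (1), together with the closed-form sampling distribution q(x_t | x_0), can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's code; `forward_diffuse`, `q_sample`, and the toy schedule are assumed names.

```python
import numpy as np

def forward_diffuse(x0, alphas, rng):
    """Markov forward process of Eq. (1):
    x_t = sqrt(alpha_t) * x_{t-1} + sqrt(1 - alpha_t) * eps_t."""
    xs, x = [], x0
    for a in alphas:
        eps = rng.normal(size=x0.shape)   # eps_t ~ N(0, I)
        x = np.sqrt(a) * x + np.sqrt(1.0 - a) * eps
        xs.append(x)
    return xs

def q_sample(x0, alpha_bar_t, eps):
    """Closed-form q(x_t | x_0) (as used inside Eq. (3)):
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 4))              # a tiny stand-in "image"
alphas = np.linspace(0.999, 0.95, 10)     # toy variance schedule
noisy = forward_diffuse(x0, alphas, rng)
```

The closed form lets training sample any step t directly instead of simulating the whole chain, which is why the reconstruction loss below can be written in terms of the cumulative product of the alphas.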
To implement text-to-image synthesis, a text embedding $e$ will also be fused with $x_t$ to generate a conditioned image using Eq. (2) as $\hat{x}_0 = f(x_t \circ e; \theta)$, where $\circ$ is a reserved fusion operator which is implemented differently in various models. The loss is then written as
$$\mathcal{L}_{rec} = \mathbb{E}\big[w_t \|\hat{x}_0 - x_0\|_2^2\big] = \mathbb{E}\big[w_t \|f(x_t \circ e; \theta) - x_0\|_2^2\big] = \mathbb{E}\big[w_t \|f((\sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon_0) \circ e; \theta) - x_0\|_2^2\big], \tag{3}$$
where $w_t$ is a time-step-dependent weight. As the generated content is controlled solely by the input $e$, we can generate desired content as long as we know its text embeddings. This is easy for pretrained concepts (e.g., cat) because the learners have seen enough samples during the training, but hard for specific concepts (e.g., my own cat). To address this issue, Textual Inversion is a method to backtrack the text embeddings of specific concepts (Gal et al. 2022). It feeds a few samples of the target concept (e.g., 3–5 images of the user’s cat) and updates the pseudo-concept embedding ($e_{cat*}$). It is formulated as
$$e^* = \arg\min_{e} \mathcal{L}_{rec}. \tag{4}$$
Semantic Inversion
As visualized in Fig. 2, Textual Inversion will make the new (pseudo-)embeddings OOD and incompatible with other concepts in the embedding space, because it does not have enough interactions with others during the post-training learning. Our idea is then straightforward: to improve its interactions for better compositionality. To this end, we select a set of general concepts as anchors (e.g., dog, car, chair, building) and collect their text embeddings $\{e_{anc}\}$. These concepts can be found in existing benchmark datasets like COCO (Lin et al. 2014) and can even be combined for a wider coverage of semantic references (Deng et al. 2009; Wei and Yang 2012, 2011). We will use the anchor concepts as attractors to guide the search of the pseudo-embedding towards the core distribution.
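The defining property of the optimization in Eq. (4) is that only the embedding is updated while the model weights stay frozen. The toy sketch below illustrates this with a fixed linear map standing in for the frozen diffusion model; the linear stand-in, the dimensions, and all variable names are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
D, P = 16, 64                              # embedding dim / output dim
W = rng.normal(size=(P, D)) / np.sqrt(D)   # frozen "model" weights theta
x0 = rng.normal(size=P)                    # a user sample to reconstruct

def f(e):
    """Linear stand-in for the frozen model f(x_t . e; theta)."""
    return W @ e

e = rng.normal(size=D)                     # pseudo-embedding being inverted
init_loss = float(np.sum((f(e) - x0) ** 2))
for _ in range(500):
    grad = 2.0 * W.T @ (f(e) - x0)         # d/de ||f(e) - x0||^2
    e -= 0.02 * grad                       # update e only; W never changes
final_loss = float(np.sum((f(e) - x0) ** 2))
```

In this linear toy the loop converges to the least-squares optimum of Eq. (4); in the real method the same idea is driven by the denoising loss of Eq. (3) back-propagated through the frozen diffusion network.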
A loss regularization is introduced as
$$\mathcal{L}_{anc} = \mathbb{E}\Big[\frac{1}{\|\{e_{anc}\}\|} \sum_{e_i \in \{e_{anc}\}} \delta_i \|e - e_i\|_2^2\Big], \quad \text{s.t.}\ \|\delta\|_0 < c, \tag{5}$$
where $\delta = \{\delta_i\}$ is a weighting vector to control the strength of the $i$-th attractor, and the constraint $\|\delta\|_0 < c$ limits the number of active attractors to $c$. This is to avoid the distraction from irrelevant attractors. For example, when searching for a pseudo-embedding for a cat-related concept, active attractors like cat and pet are preferred over irrelevant ones like car and airplane. We implement the weighting vector $\delta$ using sparse coding (Olshausen and Field 1996). Once a pseudo-word S* has been assigned the embedding $e^*$, it can be used in the same way as a real word in a prompt for image generation (e.g., “a S* cat sitting next to a dog”). Each word in a prompt is more commonly referred to as a token. The model first assigns an embedding for each token and feeds these embeddings into a text Transformer where they are refined into the actual token embeddings that will be used as conditions in the generation (or reverse) process. To simplify the description, we will still use the symbol $e$ to represent these refined token embeddings.
Table 1: Evaluation of performance by composing with pretrained and inverted concepts, with ablation of semantic and spatial inversion components. The best results are in bold font.
                                        Comp. w/ Pretrained Concepts               Comp. w/ Inverted Concepts
Methods                                 Text-Align.  CoI Likelihood  Image-Align.  Text-Align.  Image-Align.
Textual Inversion (Gal et al. 2022)     0.603        0.032           0.784         0.606        0.656
+ Semantic Inversion                    0.645        0.121           0.762         0.633        0.664
+ Spatial Inversion                     0.631        0.116           0.749         0.620        0.645
+ Semantic + Spatial                    0.702        0.284           0.732         0.662        0.658
Custom Diffusion (Kumari et al. 2023)   0.695        0.226           0.802         0.702        0.700
+ Semantic Inversion                    0.701        0.352           0.760         0.706        0.681
+ Spatial Inversion                     0.738        0.425           0.727         0.689        0.652
+ Semantic + Spatial                    0.734        0.459           0.683         0.703        0.628
DreamBooth (Ruiz et al. 2023)           0.716        0.431           0.734         0.691        0.695
+ Semantic Inversion                    0.720        0.436           0.718         0.704        0.683
+ Spatial Inversion                     0.750        0.534           0.657         0.705        0.632
+ Semantic + Spatial                    0.753        0.529           0.646         0.710        0.616
Spatial Inversion
To make the image generation conditioned on the token embeddings in the reverse process, a popular way is to use transformer blocks. More specifically, an attention map will be calculated for each token embedding to indicate its appearance or how it is attended in the resulting image (e.g., location, shape, details). This is implemented by the cross-attention mechanism as
$$A_i = \mathrm{softmax}\Big(\frac{\phi(x_t)\,\kappa(e_i)^\top}{\sqrt{d_k}}\Big), \tag{6}$$
where $\phi$ and $\kappa$ are the image feature extractor and text feature extractor respectively, and $d_k$ is the dimensionality of $\kappa(e_i)$. Several pioneering works have found that the appearance of the token can be controlled by manipulating this attention map (Hertz et al. 2022; Parmar et al. 2023; Chen, Laina, and Vedaldi 2023). Therefore, we can regulate the attention maps to be attended on the right tokens to avoid the situation where the pseudo tokens dominate the generation process. In spatial inversion, we propose a method to recover the coherent locations of a pseudo token (e.g., S*) and the concepts being combined with it (e.g., dog) in a prompt. The locations are then used to regulate the attention maps of tokens. We implement the location recovery by training an MLP model which takes two token embeddings as the input and outputs the locations as
$$l_i, l_j = \mathrm{MLP}(e_i, e_j), \tag{7}$$
where $l_i, l_j \in \mathbb{R}^4$ are coordinates of the bounding boxes of the attended areas of the two tokens. We simplify the method by only considering noun tokens and construct a vocabulary of frequently used nouns (Lin et al. 2014). The nouns are then combined as prompts and used to generate images. The object detection (Carion et al.
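The attractor mechanism of Eq. (5) can be sketched as a regularized gradient step. The paper implements the sparse weights via sparse coding; the sketch below substitutes a hard top-c nearest-anchor selection as a simplified stand-in, and `anchor_step` and the toy dimensions are hypothetical.

```python
import numpy as np

def anchor_step(e, anchors, c=2, lr=0.5):
    """One gradient step on L_anc of Eq. (5).  The sparse weights
    delta (||delta||_0 < c) are approximated here by hard top-c
    nearest-anchor selection -- a simplification of the paper's
    sparse-coding implementation."""
    d2 = np.sum((anchors - e) ** 2, axis=1)     # squared distances
    active = np.argsort(d2)[:c]                 # active attractors
    delta = np.zeros(len(anchors))
    delta[active] = 1.0
    loss = float((delta * d2).sum() / len(anchors))
    grad = 2.0 * (delta[:, None] * (e - anchors)).sum(axis=0) / len(anchors)
    return e - lr * grad, loss

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 4))   # e.g. embeddings of general nouns
e = 5.0 * rng.normal(size=4)        # a far-away (OOD) pseudo-embedding
losses = []
for _ in range(100):
    e, loss = anchor_step(e, anchors)
    losses.append(loss)
```

Iterating the step pulls the OOD embedding towards its nearest anchors, i.e. towards the core distribution; in the full method this term is added to the reconstruction loss rather than optimized alone.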
2020) is conducted on the resulting images to find the bounding boxes of these nouns, which serve as the ground truth for the training. For tokens outside the vocabulary, we assign each to the nearest noun in the embedding space. The MLP is then able to recover the locations of any given tokens. With the location bounding boxes, we convert each of them into an attention mask $M_i$ of the same size as $A_i$. We can then manipulate the attention maps by introducing a location regularization loss as
$$\mathcal{L}_{loc} = \frac{1}{N} \sum_{i=1}^{N} \Big(1 - \frac{\sum (M_i \circ A_i)}{\sum A_i}\Big). \tag{8}$$
It encourages the tokens to be attended on the locations indicated by the masks and penalizes deviations. It is calculated at the first 10 reverse steps to update the latent variable $x_t$.
Experiments
To evaluate the performance of our proposed methods, we conduct experiments by combining the inverted concepts with both pretrained and inverted concepts.
Datasets. We construct a comprehensive dataset by accumulating almost all open-sourced concepts used in previous studies (Kumari et al. 2023; Gal et al. 2022; Ruiz et al. 2023). It consists of 10 concepts: 2 animal, 2 furniture, 2 object/container, 1 house, 1 plant, and 2 toy categories. To test the generalizability, we generate prompts by combining the inverted concepts with 80 categories from the COCO dataset (Lin et al. 2014) using the conjunction word “and”. This results in 1600 prompts and generates 16000 images. We also combine the inverted concepts with each other, resulting in 90 prompts and 900 generated images. This is also aligned with the multi-concept composition task in previous studies.
Evaluation Metrics. We utilize three evaluation metrics: 1) Text-alignment, which quantifies the extent to which a generated image accurately represents the semantics of the text prompt, as determined by the CLIP similarity (Radford et al. 2021).
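The interaction between the attention map of Eq. (6) and the mask-based penalty of Eq. (8) can be shown on toy arrays. The sketch normalizes each token's map over spatial positions for simplicity (an assumption; in Stable Diffusion the softmax runs over tokens per query), and `attention_map`, `loc_loss`, and the toy shapes are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    ez = np.exp(z)
    return ez / ez.sum()

def attention_map(phi_x, kappa_e):
    """Eq. (6): A_i = softmax(phi(x_t) kappa(e_i)^T / sqrt(d_k)),
    here normalized over spatial positions for a single token."""
    d_k = kappa_e.shape[0]
    return softmax(phi_x @ kappa_e / np.sqrt(d_k))

def loc_loss(maps, masks):
    """Eq. (8): penalize attention mass falling outside each
    token's bounding-box mask M_i."""
    terms = [1.0 - (M * A).sum() / A.sum() for A, M in zip(maps, masks)]
    return float(np.mean(terms))

# Toy check: 9 spatial positions with 5-dim features.
rng = np.random.default_rng(0)
phi_x = rng.normal(size=(9, 5))
A = attention_map(phi_x, rng.normal(size=5))
mask_all = np.ones(9)    # a box covering every position
mask_none = np.zeros(9)  # a box covering nothing
```

A map fully inside its mask incurs zero loss and a map fully outside incurs the maximum loss of 1, so minimizing Eq. (8) pushes each token's attention into its recovered box.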
2) CoI Likelihood, which measures the probability that CoIs are present in the results, using an object detector (DETR (Carion et al. 2020) based on ResNet101 (He et al. 2016) and pretrained on the COCO dataset). 3) Image-alignment, which evaluates the extent to which the generated images are visually similar to the user samples, as determined by the cosine similarity of their CLIP image features.
Figure 5: Examples of composing inverted concepts cat* and dog* with pretrained concepts backpack and book.
Baselines. We employ 3 popular state-of-the-art (SOTA) methods as the baselines, including 1) Textual Inversion (TI) (Gal et al. 2022), which focuses on fine-tuning the text embedding exclusively. We employ the Stable Diffusion version, using the parameters reported by the authors in their paper. 2) DreamBooth (Ruiz et al. 2023), which fine-tunes all parameters of the U-Net architecture. As DreamBooth does not fine-tune the text embedding, we integrate TI into DreamBooth to apply the method proposed in this paper. 3) Custom Diffusion (Kumari et al. 2023), which aims to fine-tune partial parameters in the cross-attention modules. For the composition of inverted concepts, we adopt the joint training strategy, as it has been highlighted in the paper as the best-performing approach. The third-party implementations from HuggingFace are used for all the aforementioned methods. In the fine-tuning and inference stages, we follow the usual practice of using the S* and superclass tokens to represent the inverted concept (e.g., “cat* cat”).
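The three metrics reduce to simple operations once the features and detections are in hand. The sketch below assumes CLIP features have already been extracted and detector outputs are given as a set of class names; it does not invoke CLIP or DETR themselves, and all function names are illustrative.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def text_alignment(image_feat, prompt_feat):
    """1) Text-alignment: CLIP similarity between the generated
    image's feature and the prompt's feature."""
    return cosine(image_feat, prompt_feat)

def coi_likelihood(detected_classes, coi_names):
    """2) CoI Likelihood: fraction of CoIs the detector found
    in the generated image."""
    found = sum(name in detected_classes for name in coi_names)
    return found / len(coi_names)

def image_alignment(image_feat, user_feats):
    """3) Image-alignment: mean cosine similarity between the
    generated image and the user samples."""
    return float(np.mean([cosine(image_feat, u) for u in user_feats]))
```

Scores are then averaged over the 16000 generated images (or the 900 multi-concept ones) to produce the table entries.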
Performance
The results are presented in Table 1. The proposed method exhibits improvements over SOTA methods of 16.4% (787.5%), 5.6% (103.1%), and 5.2% (22.7%) on Text-Align (CoI Likelihood) compared to TI, Custom Diffusion, and DreamBooth, respectively. There is only a slight trade-off of 6.6%, 14.8%, and 12.0% on Image-Align when compared to the three methods. The performance gain on CoI Likelihood reaches 52.9% when composing with pretrained concepts, indicating a significant improvement. Another observation is that the augmented TI achieves performance comparable to the original Custom Diffusion and DreamBooth. This is surprising because SOTA performance is achieved without any fine-tuning of network parameters. Fig. 5 shows two examples of composing inverted concepts with pretrained concepts. The proposed method clearly improves the performance in terms of the presence of the pretrained concepts. Note that the semantic inversion module primarily emphasizes semantic completeness, occasionally resulting in the generation of low-probability scenes (such as half a cat in a backpack or a dog reading a book). On the other hand, the spatial inversion module tends to generate scenes that align with more common statistical occurrences. Fig. 6 presents two examples of composing the inverted concepts with each other. The presence of the CoIs is also significantly increased. A noticeable difference compared to the results in Fig. 5 is the larger variation in the appearance of the concepts of interest. Specifically, the generated cats, dogs, and barns exhibit a wider range of viewpoints.
User Study
To assess the computational efficiency and quality of the synthesis, we conducted a user study. We randomly selected 1,600 images generated by the proposed method and enlisted the participation of two users to rate the synthesis quality.
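The quoted relative improvements can be recomputed from Table 1; they appear to be taken from the "+ Semantic + Spatial" rows under composition with pretrained concepts. A quick arithmetic check:

```python
# Cross-checking the relative gains quoted above against Table 1
# (the "+ Semantic + Spatial" rows, composition with pretrained
# concepts).  rel_gain is a helper defined here for illustration.
def rel_gain(ours, base):
    """Relative improvement in percent, rounded to one decimal."""
    return round(100.0 * (ours - base) / base, 1)

ti_text, ti_coi = rel_gain(0.702, 0.603), rel_gain(0.284, 0.032)
cd_text, cd_coi = rel_gain(0.734, 0.695), rel_gain(0.459, 0.226)
db_text, db_coi = rel_gain(0.753, 0.716), rel_gain(0.529, 0.431)
```

The six values match the 16.4% (787.5%), 5.6% (103.1%), and 5.2% (22.7%) figures in the text, confirming which table rows the paragraph refers to.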
The ratings were divided into three categories: Excellent represents the successful generation of two CoIs without any unnatural details; Satisfactory indicates that the CoIs were generated in an acceptable manner, though some minor flaws may be present; and Mediocre signifies the presence of obvious unreasonable details or missing CoIs.
Figure 6: Examples of composing inverted concepts of cat*, chair*, dog*, and barn* with each other.
Figure 7: Assessment of compositional synthesis quality through user evaluations.
The results of the user study are presented in Fig. 7. The high capability of the proposed method in generating quality images is clearly evident, as indicated by a probability of 81.9% of receiving ratings of Satisfactory or above. Additionally, we assume that the quality rating serves as an indicator of compositionality. When the probability of generating high-quality images through the composition of two concepts is higher, it suggests that those concepts are easier to compose. In Fig. 7, it becomes apparent that rigid objects (e.g., book, TV) are more straightforward to compose. This observation is supported by the fact that 9 out of the 10 rightmost concepts in the figure are rigid objects. This finding aligns with our understanding, as rigid objects possess more consistent appearances and visual characteristics. In contrast, non-rigid objects like animals (e.g., cow, sheep) are challenging to compose, as indicated by the fact that 7 out of the 10 leftmost concepts are non-rigid objects.
Conclusion
We have identified the mechanism that causes the overfitting and dominance of the inverted concepts in generation. To address the issue, we propose a compositional inversion method which consists of two modules of semantic and spatial inversion. The semantic inversion guides the inversion towards the core distribution to ensure better coherence with other concepts, while the spatial inversion discovers the underlying layout distribution for CoIs and uses it to regularize the attention maps. The experimental results have validated the effectiveness of the method.
Acknowledgments
Joint support for this research was provided by the Hong Kong Research Grants Council through the General Research Fund (Project No.: 15200023), the National Natural Science Foundation of China (Grant No.: 62372314), and the InnoHK program.
References
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In European Conference on Computer Vision, 213–229. Springer.
Chen, M.; Laina, I.; and Vedaldi, A. 2023. Training-free layout control with cross-attention guidance. arXiv preprint arXiv:2304.03373.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 248–255. IEEE.
Gal, R.; Alaluf, Y.; Atzmon, Y.; Patashnik, O.; Bermano, A. H.; Chechik, G.; and Cohen-Or, D. 2022. An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion. In International Conference on Learning Representations.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Advances in Neural Information Processing Systems, 27.
Han, L.; Li, Y.; Zhang, H.; Milanfar, P.; Metaxas, D.; and Yang, F. 2023. SVDiff: Compact parameter space for diffusion fine-tuning. arXiv preprint arXiv:2303.11305.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 770–778.
Hertz, A.; Mokady, R.; Tenenbaum, J.; Aberman, K.; Pritch, Y.; and Cohen-Or, D. 2022. Prompt-to-Prompt Image Editing with Cross-Attention Control. In International Conference on Learning Representations.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33: 6840–6851.
Karras, T.; Laine, S.; and Aila, T. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4401–4410.
Kingma, D. P.; and Welling, M. 2014. Auto-Encoding Variational Bayes. In International Conference on Learning Representations.
Kumari, N.; Zhang, B.; Zhang, R.; Shechtman, E.; and Zhu, J.-Y. 2023. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1931–1941.
Li, Y.; Liu, H.; Wu, Q.; Mu, F.; Yang, J.; Gao, J.; Li, C.; and Lee, Y. J. 2023. GLIGEN: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22511–22521.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, 740–755. Springer.
Liu, Z.; Feng, R.; Zhu, K.; Zhang, Y.; Zheng, K.; Liu, Y.; Zhao, D.; Zhou, J.; and Cao, Y. 2023. Cones: Concept neurons in diffusion models for customized generation. arXiv preprint arXiv:2303.05125.
Olshausen, B. A.; and Field, D. J. 1996. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583): 607–609.
Parmar, G.; Kumar Singh, K.; Zhang, R.; Li, Y.; Lu, J.; and Zhu, J.-Y. 2023. Zero-shot image-to-image translation. In ACM SIGGRAPH 2023 Conference Proceedings, 1–11.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR.
Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125.
Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and Sutskever, I. 2021. Zero-shot text-to-image generation. In International Conference on Machine Learning, 8821–8831. PMLR.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684–10695.
Ruiz, N.; Li, Y.; Jampani, V.; Pritch, Y.; Rubinstein, M.; and Aberman, K. 2023. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22500–22510.
Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, K.; Gontijo Lopes, R.; Karagol Ayan, B.; Salimans, T.; et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35: 36479–36494.
Schuhmann, C.; Beaumont, R.; Vencu, R.; Gordon, C.; Wightman, R.; Cherti, M.; Coombes, T.; Katta, A.; Mullis, C.; Wortsman, M.; et al. 2022. LAION-5B: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35: 25278–25294.
Song, J.; Meng, C.; and Ermon, S. 2020. Denoising Diffusion Implicit Models. In International Conference on Learning Representations.
Tao, M.; Tang, H.; Wu, F.; Jing, X.-Y.; Bao, B.-K.; and Xu, C. 2022. DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16515–16525.
Tewel, Y.; Gal, R.; Chechik, G.; and Atzmon, Y. 2023. Key-locked rank one editing for text-to-image personalization. In ACM SIGGRAPH 2023 Conference Proceedings, 1–11.
Wei, X.-Y.; and Yang, Z.-Q. 2011. Coached active learning for interactive video search. In Proceedings of the 19th ACM International Conference on Multimedia, 443–452.
Wei, X.-Y.; and Yang, Z.-Q. 2012. Coaching the exploration and exploitation in active learning for interactive video retrieval. IEEE Transactions on Image Processing, 22(3): 955–968.
Yang, Z.; Wang, J.; Gan, Z.; Li, L.; Lin, K.; Wu, C.; Duan, N.; Liu, Z.; Liu, C.; Zeng, M.; et al. 2023. ReCo: Region-controlled text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14246–14255.
Zhang, L.; Rao, A.; and Agrawala, M. 2023. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3836–3847.
Zhu, M.; Pan, P.; Chen, W.; and Yang, Y. 2019. DM-GAN: Dynamic memory generative adversarial networks for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5802–5810.
Cross-Modal Match for Language Conditioned 3D Object Grounding

Yachao Zhang1, Runze Hu2, Ronghui Li1, Yanyun Qu3, Yuan Xie4, Xiu Li1*
1Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
2School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China
3School of Informatics, Xiamen University, Xiamen, 361000, China
4School of Computer Science and Technology, East China Normal University, Shanghai, 200062, China
{yachaozhang, li.xiu}@sz.tsinghua.edu.cn

Abstract

Language conditioned 3D object grounding aims to find the object within a 3D scene mentioned by natural language descriptions, which mainly depends on the matching between visual and natural language. Considerable improvement in grounding performance has been achieved by improving the multi-modal fusion mechanism or bridging the gap between detection and matching. However, several mismatches are ignored, i.e., the mismatch between local visual representation and global sentence representation, and the mismatch between the visual space and the corresponding label word space. In this paper, we propose cross-modal match for 3D grounding from the perspective of mitigating these mismatches. Specifically, to match local visual features with the global description sentence, we propose a BEV (bird's-eye-view) based global information embedding module. It projects multiple object proposal features into the BEV, and the relations of different objects are captured by a vision transformer, which can model both positions and features with long-range dependencies. To circumvent the mismatch between the feature spaces of different modalities, we propose cross-modal consistency learning. It imposes cross-modal consistency constraints to convert the visual feature space into the label word feature space, resulting in easier matching. Besides, we introduce a label distillation loss and a global distillation loss to drive the learning of these matches in a distillation manner.
We evaluate our method in mainstream evaluation settings on three datasets, and the results demonstrate the effectiveness of the proposed method.

Introduction

3D scene understanding based on point clouds has attracted a lot of attention and achieved great success along with the development of deep learning (Guo et al. 2020; Yan et al. 2020). Most of the existing 3D scene understanding methods focus on the visual modality, i.e., the point cloud modality (Graham, Engelcke, and Van Der Maaten 2018; Wang et al. 2019; Zhang et al. 2021b), images (Long, Shelhamer, and Darrell 2015; Olaf Ronneberger 2015), or the fusion of them (Jaritz et al. 2020; Peng et al. 2021; Zhang et al. 2022). Recently, language conditioned 3D grounding aims to discover and locate an object in the 3D scene referred to by a natural language sentence. It can enhance the interaction between humans and machines by enabling more natural and intuitive communication using natural language.

*Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: An overview of the traditional two-stage method (the top part) and our xM Match (the bottom part). The marked symbol denotes the cross-modal consistency constraint (xMCL).

Existing 3D grounding methods can be divided into two types: one-stage methods and two-stage methods. The former focuses on bridging the gap between detection and matching in the 3D visual grounding task, and thus achieves target localization in a single stage. However, these methods suffer from high training complexity compared to their two-stage counterparts, because the multi-modal feature extraction (language sentences and the entire 3D scene) and target object regression are performed simultaneously. In contrast, the two-stage method primarily attempts to explore relations between proposals and referred language sentences to distinguish the target object.
For this type of method, there are mismatches between the visual and linguistic modalities, mainly in two aspects. 1) Mismatch between local view and global view. Since the visual features are extracted from each proposal individually (local view representation) (Yang et al. 2021; Chen et al. 2022b; Bakr, Alsaedy, and Elhoseiny 2022), the visual features lack interaction among objects, while the representation of the linguistic description is based on the global view of the entire scene. Even though some sophisticated techniques, e.g., graph neural networks (Huang et al. 2021) and attention mechanisms (Zhao et al. 2021), focus on modeling the relations of the two modalities to promote the matching, the mismatch between global and local representations is still inevitable. 2) Mismatch between different feature spaces. In previous methods (Achlioptas et al. 2020; Luo et al. 2022; Chen et al. 2022b), the different modal features are extracted independently without explicitly modeling modal consistency, resulting in a mismatch between the visual feature space and the label word feature space. In this paper, we focus on how to mitigate these mismatches and propose a cross-modal match method dubbed xM Match. In contrast to the existing two-stage approach (top part of Fig. 1), we introduce two new components, i.e., BEV-based global information embedding (BEV-GIE) and cross-modal consistency learning (xMCL), shown in the bottom part of Fig. 1. Specifically, BEV-GIE provides global information for object proposal features in a lightweight manner. It projects multiple object proposals of a scene into a unified view, and the relations of different objects are captured by a vision transformer that specializes in modeling both positions and features with long-range dependencies. Therefore, the global information of the 3D scene is introduced, improving the matching with the global natural language description.
To further alleviate the feature space mismatch between visual features and label words, we introduce xMCL, which performs both point cloud recognition and cross-modal consistency constraints to help align the visual feature space to the feature space of object label words. Additionally, we introduce two distillation losses to facilitate cross-modal match learning. To summarize, the main contributions are as follows:
• We present xM Match for language conditioned 3D object grounding from a novel perspective of promoting cross-modal matching between visual data and natural language.
• To make the local visual representation match the global description of natural language, we present a BEV-based global information embedding module to supplement the global information for visual features extracted from separated object proposals.
• To enhance multi-modal interaction, we introduce cross-modal consistency learning to align the visual feature space to the label word feature space.
• Extensive experimental results demonstrate that xM Match achieves state-of-the-art performance and outperforms most competitors on three datasets.

Related Work

Scene Understanding Based on Point Cloud

Recently, deep learning on point clouds has been thriving and has been successfully used to solve various 3D vision problems, including 3D shape classification (Qi et al. 2017; Zhang, Hua, and Yeung 2019; Li et al. 2019), 3D object detection and tracking (Qi et al. 2019; Yang, Luo, and Urtasun 2018; Jiang et al. 2020), and 3D point cloud segmentation (Zhang et al. 2021a,b; Landrieu and Simonovsky 2018; Hu et al. 2020). According to the input data type for neural networks, existing 3D scene understanding methods can be divided into multi-view based methods, volumetric-based methods, and point-based methods. Point-based methods directly work on raw point clouds, which avoids explicit information loss (Guo et al. 2020). Qi et al. (Qi et al.
2017) proposed a hierarchical network, named PointNet++, which captures fine geometric structures from the neighborhood of each point and is widely used in various tasks as the backbone for 3D point cloud feature extraction. For the convenience of point-based processing and fair comparison, we also choose PointNet++ for visual feature extraction in the 3D grounding task.

3D Visual Grounding

Multi-modality (including 2D images, 3D point clouds, and language) brings important cues to improve 3D scene perception for the agent (Wu et al. 2023). The 3D grounding task requires a model to find the object mentioned by natural language in a raw point cloud scene. However, 3D grounding is still in its infancy due to the unique challenges in processing point clouds, language, and their matching. The methods proposed to deal with these challenges can be roughly divided into two types: one-stage methods and two-stage methods. In one-stage methods (Luo et al. 2022; Wu et al. 2023), linguistic features are densely fused with every point or sampled point of the entire scene to generate multi-modal feature maps for regressing the bounding box. However, they are usually computationally expensive, due to the time-consuming feature extraction for the entire point cloud and the modeling of the relationship between all candidate points and linguistic features. The two-stage method is the mainstream approach, which transforms the regression problem into a matching problem (Feng et al. 2021; He et al. 2021; Roh et al. 2022; Yang et al. 2021; Yuan et al. 2021), where a detection-then-matching strategy is introduced. These methods mainly focus on better modeling the relationship among objects and language to locate the target object. Some sophisticated techniques, e.g., graph neural networks (Huang et al. 2021) and attention mechanisms (Zhao et al. 2021), have been proposed to improve the matching performance.
To avoid the huge computational cost of large-scale point cloud scene feature extraction, these methods extract features independently on each proposal sub-point cloud and then compute matching scores with linguistic features. However, the global information of the scene is not well preserved, which obviously mismatches the reference sentences covering the whole scene. In this paper, we supplement the global information of visual features with the help of BEV.

Figure 2: ViL3DRel framework.

Proposed Method

Problem Definition and Preliminary

Problem Definition. Given one 3D scene represented by a point cloud P_s ∈ R^{K×6} with K points, the goal of the 3D grounding task is to discover and locate an object in this scene referred to by the natural language sentence S. Each object can be represented as a subset of P_s, denoted as O_i ⊂ P_s, O_i ∈ R^{K_i×6}, which contains K_i points, each represented by 3-d coordinates (XYZ) and 3-d color values (RGB). We follow the two-stage method, where a list of object proposals {O_1, ..., O_N} is obtained via a 3D object detector (Jiang et al. 2020) or ground-truth annotations (depending on the evaluation setup), and then the 3D grounding model F_Θ(·) outputs the matching one referred to by S among the N object proposals. Compactly, it can be formulated as:

O_T = F_Θ({O_1, ..., O_N}, S),  (1)

where O_T is the bounding box B_T ∈ R^6 of the target object.

Preliminary. ViL3DRel (Chen et al. 2022b) utilizes a cross-modal transformer to explore the relations of natural language and proposal embeddings, and a knowledge distillation strategy is given. The framework is shown in Fig. 2. The distillation loss can be formulated as:

L_local = D[F^T_{Θ_T}(f_rgb(C_i), f_gt(G_i) | S) || F^S_{Θ_S}(f_PN2(O_i) | S)],  (2)

where F^{T/S}_{Θ_{T/S}}(·) denotes the teacher/student network, and C_i and G_i are the color of the point cloud and the object label word, respectively. D(·||·) denotes the distillation loss in ViL3DRel.
f_rgb(C_i), f_gt(G_i), and f_PN2(O_i) are multi-modal data encoding networks.

Overview of xM Match

Language conditioned 3D grounding requires bridging the gap between the linguistic and visual modalities. We find that the cross-modal mismatches are mainly manifested in two aspects: i) Mismatch between local visual representation and global sentence representation. Natural language describes the target object according to the global information of the scene, while two-stage methods independently encode every object proposal, discarding the global information hidden between objects. ii) Mismatch between visual space and the corresponding label word space. 3D point clouds and object label words are encoded independently using pre-trained networks, resulting in different feature spaces, which is not favorable for feature interaction. Therefore, we propose xM Match; the overall framework is shown in Fig. 3. Specifically, we use ViL3DRel as the baseline, and first introduce the BEV-GIE module, which introduces global information about the 3D scene to alleviate the mismatch between local visual features and global language features from a lightweight perspective. Then, we introduce cross-modal consistency learning (xMCL) to supervise the visual features to learn label-word-related vectors, alleviating the lack of label word encoding in the student network. It promotes multi-modal feature space alignment and facilitates subsequent matching. Additionally, we impose a global distillation loss and a label distillation loss to promote the learning of the above modules.

Cross-modal Consistency Learning

With the help of the natural language representation of object label words, the 3D grounding performance of the teacher network can be greatly improved. However, label information is not available during model inference, so it is impossible to obtain this representation.
The visual feature space is mismatched with the object label word encoding, and knowledge transfer from the teacher network to the student network is limited. Our goal is that the visual feature can express the 3D scene and also encode the object label word. We hold that a memory bank of object label word features for every category can reach this goal. During student network training, we can query the object label word features from this bank by similarity retrieval. As different modalities belong to different feature spaces, the key problem to solve is how to make the visual encoding effectively retrieve the memorized features. Therefore, the consistency constraint is introduced.

Consistency constraint. We introduce two heads for the point cloud encoder. One is used for object recognition and the other for mapping the 3D visual feature space of the proposal to the label word feature space. To ensure the alignment of the two feature spaces, we directly align the encoded point cloud feature and the label word feature by minimizing:

L_xmmse = (1/N) Σ_{i=1}^{N} ||H_1(f_PN2(O_i)) − f_gt(G_i)||^2,  (3)

where H_1(·) is the mapping head, containing two linear layers. Consistency at the correlation level can further enhance the consistency of the two modal feature spaces. Therefore, we construct two correlation graphs according to the discrepancy between different objects for the two modalities and constrain their consistency, where the label word graph G^gt_ij is formulated as:

G^gt_ij = (f_gt(G_i) · f_gt(G_j)) / (||f_gt(G_i)|| × ||f_gt(G_j)||).  (4)

We obtain the visual feature graph G^visual_ij in the same way by replacing the f_gt(G_·) terms with H_2(f_PN2(O_·)). The correlation level consistency L_xmgraph is formulated as:

L_xmgraph = (1/(N(N−1))) Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} ||G^gt_ij − G^visual_ij||^2.  (5)

The overall objective of the cross-modal consistency constraint, L_xm, is formulated as:

L_xm = L_xmgraph + L_xmmse.  (6)

Label word-like encoding.
Figure 3: xM Match framework. BT and BERT are the global feature backtrack and the pre-trained text embedding model.

The memory bank M ∈ R^{C×dim} of the label words is initialized with the same label-to-vector encoding as the baseline in the teacher network and is not updated. C is the number of categories. To ensure that the input of the student network is as consistent as possible with that of the teacher network, the mapping head features (ê^gt_Oi = H_1(f_PN2(O_i)) ∈ R^{1×dim}) of the student network are used to query the features of the corresponding category labels from the memory bank, where an attention-based query strategy is proposed. Firstly, we obtain the attention map A by:

A = (ê^gt_Oi · M^T) / (||ê^gt_Oi|| × ||M^T||),  (7)

where A ∈ R^{1×C} denotes the similarity between an object and all category codes, and superscript T represents matrix transpose. The label word-like encoding of the student network can then be updated by:

ê^gt_Oi := A · M.  (8)

To learn the label word-like encoding better for the student network, a label distillation loss L_gt is introduced to constrain the consistency with the teacher network, which can be formulated as:

L_gt = ||MLP(ê^gt_Oi) − MLP(e^gt_Oi)||^2,  (9)

where MLP(·) denotes the linear layers following the label word feature and the label word-like encoding.

BEV-based Global Information Embedding

To describe an object in a scene, natural language usually relies on global information, i.e., the object size, distance, relationships, and spatial location of every object. The two-stage methods perform a detection-then-matching strategy (Yang et al. 2021; Chen, Chang, and Nießner 2020; Chen et al. 2022b) on every sub-point cloud O_i independently, or only use the center coordinate of every object to model the global relations. Global information based solely on the center position is insufficient.
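The attention-based query of Eqs. 7–8 amounts to computing a cosine similarity between the mapped proposal feature and each row of the frozen memory bank, then reading out their weighted combination. A minimal pure-Python sketch (an illustration, not the paper's PyTorch code; the per-row normalization of M is our reading of Eq. 7):

```python
import math

def attention_query(e_hat, memory):
    """Query a label word-like encoding from a frozen memory bank.

    e_hat:  mapped proposal feature H1(f_PN2(O_i)), a list of dim floats
    memory: C label word features, one list of dim floats per category
    Returns the refined encoding A . M of Eq. 8.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def norm(u):
        return math.sqrt(dot(u, u))

    # Eq. 7: cosine similarity between the proposal feature and every category code
    attn = [dot(e_hat, m) / (norm(e_hat) * norm(m) + 1e-8) for m in memory]
    # Eq. 8: weighted read-out of the memorized label word features
    dim = len(e_hat)
    return [sum(attn[c] * memory[c][d] for c in range(len(memory))) for d in range(dim)]
```

With near-orthonormal category codes, the read-out collapses onto the matching category's feature, which is what lets the student approximate the teacher's label word input without seeing labels at inference time.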
A direct solution is to encode the entire scene, which typically contains millions of points. Directly modeling global features undoubtedly increases the complexity of the point cloud representation. Considering a lightweight way to embed global information, the BEV map emerges as a natural choice, wherein a mini transformer is used to simultaneously characterize the positional relationship between BEV planes (positional encoding) and the correlation learning between multiple objects (multi-head attention mechanism). The module is identical in both branches, so we only detail the BEV-GIE module in the teacher network branch for a concise presentation as follows.

Table 1: Statistics of Sr3D. R and P denote relationships and proportion.

R:  Hor.   Between   Allocentric   Support   Vertical
P:  81%    9%        4%            2%        4%

BEV projection. In real scenarios, horizontal relations are more often used to describe the target object. For example, over 80% of objects are described by horizontal relationships (Hor.) in Sr3D (Achlioptas et al. 2020) (refer to Tab. 1). Therefore, we choose projection-based BEV to model global horizontal relations. For a scene point cloud P_s = {O_1, ..., O_N}, we extract the features by f_PN2(O_i) for every object proposal, the same as the baseline. For the multi-scale PointNet++, we use an inverse distance weighted average based on the 3 nearest neighbors to up-sample the third scale to the same resolution as the second scale and concatenate the two scale features. For an object O_i, the concatenated feature of one point p can be denoted as f^Oi_p, p ∈ {1, ..., K_i}. We aggregate the point features of all objects in the entire scene: f_Ps = {f^Oi_p | i ∈ {1, ..., N}, p ∈ {1, ..., K_i}}. We quantize it along the x-axis and y-axis to generate pillar voxels evenly. The points are assigned to these voxels according to their coordinates. The feature of a voxel is obtained by max-pooling (MAX(·)) over the points inside it.
For example, the feature in the (i, j)-th grid cell is:

f^bev_{i,j} = MAX({f^Oi_p ∈ f_Ps | (i−1)w < x_p < iw, (j−1)w < y_p < jw}),  (10)

where f^bev_{i,j} ∈ R^{1×dim}. The size of a grid cell is w × w, and x_p/y_p is the x/y coordinate of the 3D point p, i.e., its location in the BEV space. Finally, the BEV feature map of P_s is:

f^bev = {f^bev_{i,j} | i ∈ {1, 2, ..., W}, j ∈ {1, 2, ..., L}},  (11)

where f^bev ∈ R^{W×L×dim}. W and L denote the number of grid cells along the x-axis and y-axis, respectively. Meanwhile, we record the proposal identity of the points in each cell, denoted as idx_proposal.

Table 2: Grounding accuracy (%) on Nr3D and Sr3D datasets with ground-truth object proposals. "V-Dep." and "V-Indep." represent the view-dependent and view-independent settings, respectively.

Methods        Nr3D: Overall  Easy  Hard  V-Dep.  V-Indep.    Sr3D: Overall  Easy  Hard  V-Dep.  V-Indep.
ReferIt3D            35.6     43.6  27.9  32.5    37.1              40.8     44.7  31.5  39.2    40.8
ScanRefer            34.2     41.0  23.5  29.9    35.4              -        -     -     -       -
TGNN                 37.3     44.2  30.6  35.8    38.0              -        -     -     -       -
InstanceRefer        38.8     46.0  31.8  34.5    41.9              48.0     51.1  40.5  45.4    48.1
FFL-3DOG             41.7     48.2  35.0  37.1    44.7              -        -     -     -       -
3DVG                 40.8     48.5  34.8  34.8    43.7              51.4     54.2  44.9  44.6    51.7
TransRefer3D         42.1     48.5  36.0  36.5    44.9              57.4     60.5  50.2  49.9    57.7
LanguageRefer        43.9     51.0  36.6  41.7    45.0              56.0     58.9  49.3  49.2    56.3
SAT                  49.2     56.3  42.4  46.9    50.4              57.9     61.2  50.0  49.2    58.3
3D-SPS               51.5     58.1  45.1  48.0    53.2              62.6     56.2  65.4  49.2    63.2
Multi-view           55.1     61.3  49.1  54.3    55.4              64.5     66.9  58.8  58.4    64.7
ViL3DRel             64.4     70.2  57.4  62.0    64.5              72.8     74.9  67.9  63.8    73.2
LAR                  48.9     58.4  42.3  47.4    52.1              59.4     63.0  51.2  50.0    59.1
xM Match             66.2     72.8  59.9  63.8    67.5              74.6     75.9  71.3  65.0    74.7

Global information embedding. To embed the global information, an encoder that can capture both scene content and spatial position is required.
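The pillar quantization and max-pooling of Eqs. 10–11 can be sketched as follows (a simplified stand-in for the paper's implementation; the input format and names are illustrative):

```python
from collections import defaultdict

def bev_pillar_pooling(points, w):
    """Scatter per-point features into BEV grid cells with element-wise max (Eq. 10).

    points: iterable of (x, y, feature, proposal_id), feature being a list of floats
    w:      grid cell size (the paper uses w = 0.5 m)
    Returns (f_bev: cell -> pooled feature, idx_proposal: cell -> proposal ids).
    """
    cells = defaultdict(list)
    idx_proposal = defaultdict(set)
    for x, y, feat, pid in points:
        key = (int(x // w), int(y // w))  # pillar index (i, j)
        cells[key].append(feat)
        idx_proposal[key].add(pid)  # record which proposals fall in this cell
    # Element-wise max over the features of all points in the same pillar
    f_bev = {k: [max(col) for col in zip(*feats)] for k, feats in cells.items()}
    return f_bev, idx_proposal
```

The recorded idx_proposal mapping is what later allows the ViT-refined cell features to be backtracked to individual proposals.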
The Vision Transformer (ViT) contains a self-attention mechanism and positional encoding, which allow it to capture long-range dependencies between image patches (Hu et al. 2022). These components enable the ViT to learn global BEV map representations. Since we have already obtained the features of each cell, we introduce a lightweight version with 3 ViT layers, denoted as ViT(·), to maintain efficiency. We use the cells of the BEV map as patches and flatten them into a sequence of tokens. These tokens, coupled with pixel position embeddings of the BEV map, are processed by multiple transformer layers that allow global interactions between all BEV cells through self-attention mechanisms, formulated as:

f^ViT = ViT(f^bev).  (12)

Global feature backtrack. To attach global information to every object proposal, we need to fuse the patch features containing global information with the local object proposals. We utilize the index of the BEV projection to backtrack the patch feature vectors to the local point features and perform an averaging operation over the points within each proposal, denoted as: f^global = scatter_mean(f^ViT, idx_proposal). Finally, we combine this patch feature vector with each object proposal feature through a summation operation:

f^Oi = H_2(f_PN2(O_i)) + f^global.  (13)

For the student branch, we obtain the global features f̂^global and f̂^Oi in the same way as the teacher branch. To train the student network, we introduce a global distillation loss L_global, formulated as:

L_global = (1/dim) Σ_{d=1}^{dim} MSE(f^global_d, f̂^global_d).  (14)

Overall Objective Function

We follow the 3D object grounding loss L_og and sentence classification loss L_sent used in previous works (Achlioptas et al. 2020; Chen, Chang, and Nießner 2020). Based on the distillation loss L_local = L_atten + L_hidden, we add the global distillation loss L_global and the label distillation loss L_gt.
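The global feature backtrack and the fusion of Eq. 13 described above can be sketched as below. We average over the ViT-refined cells each proposal occupies, a simplification of the paper's point-level scatter mean; the names and the omission of the H_2 head are illustrative:

```python
def backtrack_global(f_vit, idx_proposal, num_proposals):
    """Scatter-mean ViT-refined BEV cell features back to proposals.

    f_vit:        dict cell -> refined feature (list of floats), i.e. f^ViT
    idx_proposal: dict cell -> set of proposal ids whose points fell in the cell
    Returns one global feature per proposal (the f^global term of Eq. 13).
    """
    dim = len(next(iter(f_vit.values())))
    sums = [[0.0] * dim for _ in range(num_proposals)]
    counts = [0] * num_proposals
    for cell, feat in f_vit.items():
        for pid in idx_proposal.get(cell, ()):
            counts[pid] += 1
            for d in range(dim):
                sums[pid][d] += feat[d]
    # Mean over the cells each proposal appears in (empty proposals stay zero)
    return [[s / max(c, 1) for s in row] for row, c in zip(sums, counts)]

def fuse_local_global(local_feats, global_feats):
    """Eq. 13: f_Oi = local proposal feature + backtracked global feature."""
    return [[l + g for l, g in zip(lf, gf)] for lf, gf in zip(local_feats, global_feats)]
```

The summation keeps the fused feature in the same dimensionality as the local proposal feature, so the downstream cross-modal transformer is unchanged.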
For object classification, based on the two object classification losses L^u_obj and L^m_obj in previous works (Achlioptas et al. 2020; Chen, Chang, and Nießner 2020; Chen et al. 2022b), we introduce the cross-modal loss L_xm. Therefore, the overall training objective is as follows:

L = L_og + L_sent + L^u_obj + L^m_obj + L_xm + L_local + λ_a L_gt + λ_b L_global,  (15)

where λ_a and λ_b are used to trade off the proposed two distillation losses.

Experiments and Results

Datasets

We leverage three recently released datasets, i.e., Nr3D (Achlioptas et al. 2020), Sr3D (Achlioptas et al. 2020), and ScanRefer (Chen, Chang, and Nießner 2020), built on the 3D scenes of ScanNet (Dai et al. 2017), to evaluate performance. We follow the official split for training and validation.

Additional validation subsets. For the Nr3D and Sr3D datasets, two splits are introduced during evaluation. 1) According to the number of distractors (more distractors indicate more difficulty), the sentences are split into an “easy” subset (less than or equal to 2 distractors) and a “hard” subset (more than 2 distractors). 2) According to whether the sentence requires a specific viewpoint to ground the referred object, the dataset can also be partitioned into “view-dependent” and “view-independent” subsets.

Evaluation Metrics. For the Nr3D and Sr3D datasets, we choose the default ground-truth object proposal evaluation setting. The metric is the accuracy (%) of selecting the target bounding box among the proposals. For ScanRefer, we utilize both ground-truth object proposals and proposals obtained by a 3D detector. acc@0.25 and acc@0.5 are used to evaluate the performance on detected proposals; acc@0.25/0.5 represents the percentage of correctly predicted bounding boxes whose IoU with the ground truth is larger than 0.25/0.5.

Implementation Details

For fair comparisons, we use the same point cloud backbone (PointNet++ (Qi et al.
2017)) and text encoding module (BERT (Kenton and Toutanova 2019)) as ViL3DRel (Chen et al. 2022b). For PointNet++ (Qi et al. 2017), we set K_i = 1024, i.e., sampling 1,024 points for each object. We set the batch size to 128 and the learning rate to 0.0005, with a warm-up of 5,000 iterations and cosine decay scheduling. Our model is trained for 100 epochs using the Adam optimizer. We directly set λ_a = 1 and λ_b = 1. We set the BEV grid size w to 0.5 m. We implement our model in PyTorch with Python 3.8. It is trained and evaluated on one NVIDIA RTX 3090 GPU with 24 GB RAM.

Comparison to State-of-the-art Methods

Compared methods. We choose methods directly related to ours for comparison, including ReferIt3D (Achlioptas et al. 2020), ScanRefer (Chen, Chang, and Nießner 2020), InstanceRefer (Yuan et al. 2021), FFL-3DOG (Feng et al. 2021), 3DVG (Zhao et al. 2021), TransRefer3D (Roh et al. 2022), LanguageRefer (He et al. 2021), SAT (Yang et al. 2021), 3D-SPS (Luo et al. 2022), Multi-view (Huang et al. 2022), ViL3DRel (Chen et al. 2022b), LAR (Bakr, Alsaedy, and Elhoseiny 2022), TGNN (Huang et al. 2021), BUTD-DETR (Jain et al. 2022), D3Net (Chen et al. 2022a), and EDA (Wu et al. 2023).

Evaluation on ground-truth object proposals. Overall, compared with recent 3D grounding methods, xM Match only uses 3D point clouds yet achieves the best performance in all settings against the state-of-the-art methods, even though some of them use 2D image assistance. From Tab. 2, xM Match gains improvements of 1.8% and 1.8% in overall accuracy over the baseline (ViL3DRel) on the real-world Nr3D and synthetic Sr3D datasets (Achlioptas et al. 2020). The performance improvement is more evident in the view-dependent and hard settings, which are the two more challenging subsets: the improvements are 2.5% (Hard) and 1.8% (V-Dep.) on Nr3D, and 3.4% (Hard) and 1.2% (V-Dep.) on Sr3D.
This is because ours introduces global information embedding and multi-modal consistency constraints, which improve the matching between language descriptions and visual features, resulting in better matching performance. Furthermore, we report the results on the ScanRefer dataset with ground-truth object proposals in Tab. 3, where we achieve consistent improvements. We also give qualitative results of xM Match and the baseline on the Nr3D dataset. As shown in Fig. 4, our method accurately identifies the target objects in two challenging cases. For example, the sample in the second row of Fig. 4 is difficult because the two objects of the same category are very close in position; it requires global information which contains the correlation between the target object and other referred objects. For the failure case in the third row of Fig. 4, due to the lack of clear view guidance in the corresponding sentence and the symmetrical distractors in the scene, the failure is mainly caused by the inaccurate language description. Therefore, we can conclude that 3D grounding benefits from our xM Match.

Table 3: Grounding accuracy (%) on the ScanRefer (Chen, Chang, and Nießner 2020) dataset with detector proposals (Det-pro.) and ground-truth object proposals (GT-pro.).

Methods        Modality   Det-pro. @0.25   Det-pro. @0.5   GT-pro. Overall
ScanRefer      3D+2D      41.2             27.4            -
ReferIt3D      3D         26.4             16.9            46.9
TGNN           3D         37.4             29.7            -
InstanceRefer  3D         40.2             32.9            -
SAT            3D+2D      44.5             30.1            53.8
FFL-3DOG       3D         41.3             34.0            -
3DVG           3D+2D      47.6             34.7            -
3D-SPS         3D+2D      48.8             37.0            -
3DJCG          3D+2D      49.6             37.3            -
BUTD-DETR      3D         50.4             38.6            -
D3Net          3D+2D      -                37.9            -
ViL3DRel       3D         47.9             37.7            59.8
xM Match       3D         51.8             39.3            60.6
EDA            3D         54.6             42.3            -
EDA+xM Match   3D         54.9             42.7            -

Evaluation on detected object proposals. The results using detected proposals on the ScanRefer dataset are reported in Tab. 3. Compared to the baseline (ViL3DRel), ours gains clear improvements. EDA (Wu et al. 2023) utilizes RoBERTa (Liu et al.
2019), a better text encoding model than BERT, and achieves better performance than ours with the ViL3DRel backbone. We also adapt our BEV-GIE and cross-modal learning and add them to EDA, abbreviated as EDA+xM Match. It can be seen that EDA can also benefit from our method. These results demonstrate the effectiveness of the proposed method.

Ablation Study

Effectiveness of BEV-based global information embedding. In our approach, we construct BEV-based global information embedding (BEV-GIE) to model the global information. To demonstrate its benefits, we conduct an ablation study by removing BEV-GIE from our method, denoted as w/o BEV-GIE. From the comparison of #1 and #2 in Tab.
Effectiveness of cross-modal consistency learning. We study the performance of the model without cross-modal consistency learning (w/o xMCL), whose results are reported in #3. From the comparison of #1 and #3 in Tab. 4, overall accuracy drops by 0.9% and 1.4% on Nr3D and Sr3D, respectively. Consistent decreasing trends are also revealed for both the Easy and Hard subsets. Therefore, xMCL also promotes multi-modal representation learning.

Effectiveness of introduced distillation losses. To evaluate whether the label distillation loss and the global distillation loss can promote cross-modal learning, we remove each of them and report the performance in #4 and #5, respectively. From the comparison of #4, #5, and #1, these distillation losses further improve performance.

Settings                  Overall  Easy  Hard
#1 BEV-based fusion       74.6     75.9  71.3
#2 Plain concatenation    71.6     73.2  67.7
#3 Attention-based query  74.6     75.9  71.3
#4 Direct feature using   73.6     74.9  70.6

Table 5: Comparison with alternative components on Sr3D.

Overall, according to the comparison of #1 with #2-#5, the network benefits from the key components, which demonstrates the effectiveness of xM Match.

Model Analysis

BEV-based fusion vs. Plain concatenation. To better demonstrate the effectiveness of the global and local fusion based on the BEV map and transformer (BEV-based fusion), we study the performance of a common way to fuse global and local features, i.e., direct feature concatenation, dubbed plain concatenation. The results are shown in Tab. 5. Plain concatenation achieves 71.6% on Sr3D, while the BEV-based design outperforms it by a significant margin (3.0%). Therefore, the gain of our method comes from the BEV-GIE design rather than from a simple feature concatenation strategy.

Attention-based query vs. Direct feature using. We directly use the features of the mapping head (direct feature using) instead of our attention-based query. The results are shown in #4 of Tab. 5.
From the comparison of #3 in Tab. 4 and #4 in Tab. 5, it can be observed that the direct feature using strategy has a slight positive effect, while the attention-based query strategy improves performance significantly (a 1.7% improvement over w/o Lxm). These results demonstrate that exploiting the matching of the multi-modal feature space is rewarding for 3D grounding.

Conclusion

We propose xM Match, a novel language-conditioned 3D object grounding method with explicit global information embedding and multi-modal consistency constraints. In contrast to existing two-stage 3D grounding methods, we co-encode multiple independently encoded object proposals into a horizontal view, which addresses the mismatch between local visual representations and the global sentence representation. Besides, we resolve the feature-space mismatch between the visual space and the corresponding label-word space via a cross-modal consistency constraint. In addition, we introduce two distillation losses to drive teacher-student network learning. According to extensive experiments on three datasets, xM Match outperforms state-of-the-art methods under both real-world and synthetic settings.

Acknowledgements

This work was supported in part by the China Postdoctoral Science Foundation (No.2023M731957), in part by the National Natural Science Foundation of China under Grant 62306165, and in part by the Shenzhen Key Laboratory of next generation interactive media innovative technology (No.ZDSYS20210623092001004).

References

Achlioptas, P.; Abdelreheem, A.; Xia, F.; Elhoseiny, M.; and Guibas, L. 2020. ReferIt3D: Neural listeners for fine-grained 3d object identification in real-world scenes. In ECCV, 422–440.
Bakr, E.; Alsaedy, Y.; and Elhoseiny, M. 2022. Look around and refer: 2d synthetic semantics knowledge distillation for 3d visual grounding. In NeurIPS, 37146–37158.
Chen, D. Z.; Chang, A. X.; and Nießner, M. 2020.
Scanrefer: 3d object localization in RGB-D scans using natural language. In ECCV, 202–221.
Chen, D. Z.; Wu, Q.; Nießner, M.; and Chang, A. X. 2022a. D3Net: A unified speaker-listener architecture for 3D dense captioning and visual grounding. In ECCV, 487–505.
Chen, S.; Guhur, P.-L.; Tapaswi, M.; Schmid, C.; and Laptev, I. 2022b. Language conditioned spatial relation reasoning for 3D object grounding. In NeurIPS, 1–14.
Dai, A.; Chang, A. X.; Savva, M.; Halber, M.; Funkhouser, T.; and Nießner, M. 2017. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In CVPR, 5828–5839.
Feng, M.; Li, Z.; Li, Q.; Zhang, L.; Zhang, X.; Zhu, G.; Zhang, H.; Wang, Y.; and Mian, A. 2021. Free-form description guided 3D visual graph network for object grounding in point cloud. In ICCV, 3722–3731.
Graham, B.; Engelcke, M.; and Van Der Maaten, L. 2018. 3d semantic segmentation with submanifold sparse convolutional networks. In CVPR, 9224–9232.
Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; and Bennamoun, M. 2020. Deep learning for 3d point clouds: A survey. TPAMI, 43(12): 4338–4364.
He, D.; Zhao, Y.; Luo, J.; Hui, T.; Huang, S.; Zhang, A.; and Liu, S. 2021. Transrefer3d: Entity-and-relation aware transformer for fine-grained 3d visual grounding. In ACMMM, 2344–2352.
Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; and Markham, A. 2020. RandLA-Net: Efficient semantic segmentation of large-scale point clouds. In CVPR, 11108–11117.
Hu, R.; Monebhurrun, V.; Himeno, R.; Yokota, H.; and Costen, F. 2022. An uncertainty analysis on finite difference time-domain computations with artificial neural networks: improving accuracy while maintaining low computational costs. IEEE Antennas and Propagation Magazine, 65(1): 60–70.
Huang, P.-H.; Lee, H.-H.; Chen, H.-T.; and Liu, T.-L. 2021. Text-guided graph neural networks for referring 3D instance segmentation. In AAAI, 1610–1618.
Huang, S.; Chen, Y.; Jia, J.; and Wang, L. 2022. Multi-view transformer for 3d visual grounding.
In CVPR, 15524–15533.
Jain, A.; Gkanatsios, N.; Mediratta, I.; and Fragkiadaki, K. 2022. Bottom up top down detection transformers for language grounding in images and point clouds. In ECCV, 417–433.
Jaritz, M.; Vu, T.-H.; Charette, R. d.; Wirbel, E.; and Pérez, P. 2020. xMUDA: Cross-modal unsupervised domain adaptation for 3d semantic segmentation. In CVPR, 12605–12614.
Jiang, L.; Zhao, H.; Shi, S.; Liu, S.; Fu, C.-W.; and Jia, J. 2020. PointGroup: Dual-set point grouping for 3D instance segmentation. In CVPR, 4867–4876.
Kenton, J. D. M.-W. C.; and Toutanova, L. K. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 4171–4186.
Landrieu, L.; and Simonovsky, M. 2018. Large-scale point cloud semantic segmentation with superpoint graphs. In ICCV, 4558–4567.
Li, G.; Muller, M.; Thabet, A.; and Ghanem, B. 2019. DeepGCNs: Can GCNs go as deep as CNNs? In ICCV, 9267–9276.
Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In CVPR, 3431–3440.
Luo, J.; Fu, J.; Kong, X.; Gao, C.; Ren, H.; Shen, H.; Xia, H.; and Liu, S. 2022. 3D-SPS: Single-stage 3d visual grounding via referred point progressive selection. In CVPR, 16454–16463.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 234–241.
Peng, D.; Lei, Y.; Li, W.; Zhang, P.; and Guo, Y. 2021. Sparse-to-dense feature matching: Intra and inter domain cross-modal learning in domain adaptation for 3D semantic segmentation. In ICCV, 7108–7117.
Qi, C. R.; Litany, O.; He, K.; and Guibas, L. J. 2019. Deep hough voting for 3d object detection in point clouds. In ICCV, 9277–9286.
Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017.
PointNet++: Deep hierarchical feature learning on point sets in a metric space. In NeurIPS, 5099–5108.
Roh, J.; Desingh, K.; Farhadi, A.; and Fox, D. 2022. LanguageRefer: Spatial-language model for 3d visual grounding. In Conference on Robot Learning, 1046–1056. PMLR.
Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S. E.; Bronstein, M. M.; and Solomon, J. M. 2019. Dynamic graph CNN for learning on point clouds. ACM Transactions On Graphics, 38(5): 1–12.
Wu, Y.; Cheng, X.; Zhang, R.; Cheng, Z.; and Zhang, J. 2023. EDA: Explicit text-decoupling and dense alignment for 3D visual and language learning. In CVPR, 19231–19242.
Yan, X.; Zheng, C.; Li, Z.; Wang, S.; and Cui, S. 2020. PointASNL: Robust point clouds processing using nonlocal neural networks with adaptive sampling. In CVPR, 5589–5598.
Yang, B.; Luo, W.; and Urtasun, R. 2018. PIXOR: Real-time 3d object detection from point clouds. In CVPR, 7652–7660.
Yang, Z.; Zhang, S.; Wang, L.; and Luo, J. 2021. SAT: 2d semantics assisted training for 3d visual grounding. In ICCV, 1856–1866.
Yuan, Z.; Yan, X.; Liao, Y.; Zhang, R.; Wang, S.; Li, Z.; and Cui, S. 2021. InstanceRefer: Cooperative holistic understanding for visual grounding on point clouds through instance multi-level contextual referring. In ICCV, 1791–1800.
Zhang, Y.; Li, M.; Xie, Y.; Li, C.; Wang, C.; Zhang, Z.; and Qu, Y. 2022. Self-supervised Exclusive Learning for 3D Segmentation with Cross-Modal Unsupervised Domain Adaptation. In ACMMM, 3338–3346.
Zhang, Y.; Li, Z.; Xie, Y.; Qu, Y.; Li, C.; and Mei, T. 2021a. Weakly supervised semantic segmentation for large-scale point cloud. In AAAI, 3421–3429.
Zhang, Y.; Qu, Y.; Xie, Y.; Li, Z.; Zheng, S.; and Li, C. 2021b. Perturbed Self-Distillation: Weakly supervised large-scale point cloud semantic segmentation. In ICCV, 15520–15528.
Zhang, Z.; Hua, B.-S.; and Yeung, S.-K. 2019.
ShellNet: Efficient point cloud convolutional neural networks using concentric shells statistics. In ICCV, 1607–1616.
Zhao, L.; Cai, D.; Sheng, L.; and Xu, D. 2021. 3DVG-Transformer: Relation modeling for visual grounding on point clouds. In ICCV, 2928–2937.
MotionGPT: Finetuned LLMs Are General-Purpose Motion Generators

Yaqi Zhang1,2, Di Huang3, Bin Liu1,2*, Shixiang Tang3, Yan Lu3, Lu Chen4, Lei Bai4, Qi Chu1,2, Nenghai Yu1,2, Wanli Ouyang4
1School of Cyber Science and Technology, University of Science and Technology of China
2CAS Key Laboratory of Electromagnetic Space Information
3The University of Sydney
4Shanghai AI Laboratory
zhangyq99@mail.ustc.edu.cn, flowice@ustc.edu.cn

Abstract

Generating realistic human motion from given action descriptions has seen significant advancements, driven by the emerging demand for digital humans. While recent works have achieved impressive results in generating motion directly from textual action descriptions, they often support only a single modality of control signal, which limits their application in the real digital human industry. This paper presents a Motion General-Purpose generaTor (MotionGPT) that can use multimodal control signals, e.g., text and single-frame poses, to generate consecutive human motions by treating multimodal signals as special input tokens in large language models (LLMs). Specifically, we first quantize multimodal control signals into discrete codes and then formulate them in a unified prompt instruction to ask the LLM to generate the motion answer. Our MotionGPT demonstrates a unified human motion generation model with multimodal control signals by tuning a mere 0.4% of LLM parameters. To the best of our knowledge, MotionGPT is the first method to generate human motion from multimodal control signals, which we hope can shed light on this new direction. Visit our webpage at https://qiqiapink.github.io/MotionGPT/.

Introduction

Human motion is pivotal in various applications such as video gaming, filmmaking, and virtual reality. Recent advancements in AI (Saharia et al. 2022; Yu et al. 2022; Ramesh et al. 2022; Rombach et al. 2022; Ramesh et al. 2021; Ouyang et al. 2022; Lu et al.
2023) have paved the way for novel approaches to motion creation, enabling various control conditions such as textual descriptions, music pieces, and human poses. However, one significant shortcoming of existing works (Petrovich, Black, and Varol 2022; Zhang et al. 2022; Tevet et al. 2023; Petrovich, Black, and Varol 2021; Zhuang et al. 2022) is that they each target a single type of control condition, greatly limiting their applications in the real world; e.g., they are unable to generate motion sequences conditioned on both text descriptions and several keyframe human poses. To facilitate such applications, it is important to develop a unified human motion generation framework that can efficiently utilize multiple control signals simultaneously.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

This paper proposes a novel and more unified framework for text-motion generation. The framework facilitates the generation of human motions using multiple control conditions, formulated as output_motion = f(text, task, input_motion). The newly added inputs task and input_motion represent the task and the given motion prompts, respectively. Here, task indicates the specific task the model should adapt to, while input_motion provides the keyframe poses corresponding to the given task. This framework is a departure from traditional text-motion generation models, as the introduction of input_motion enables more precise control. For example, given an input_motion and the task "generate motion given initial poses", the model should complete the frames subsequent to the given ones. Such a framework offers a more practical and comprehensive solution for human motion generation, where task instructions and multimodal conditions can flexibly control motion generation.
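To make the formulation output_motion = f(text, task, input_motion) concrete, here is a minimal, purely illustrative sketch of such an interface. All names (MotionRequest, generate) and the toy pose format are our own assumptions, not the paper's actual API; the placeholder body only echoes the given keyframes and pads with rest poses.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MotionRequest:
    text: str                  # textual action description
    task: str                  # e.g. "generate motion given initial poses"
    input_motion: Optional[List[List[float]]] = None  # keyframe poses, one list per frame

def generate(request: MotionRequest) -> List[List[float]]:
    """Toy stand-in for f(text, task, input_motion): keep the given
    keyframes and pad the sequence with rest poses."""
    keyframes = request.input_motion or []
    rest_pose = [0.0] * 3      # toy 3-DoF pose, for illustration only
    target_len = max(8, len(keyframes))
    return keyframes + [rest_pose] * (target_len - len(keyframes))

motion = generate(MotionRequest(
    text="a person walks forward",
    task="generate motion given initial poses",
    input_motion=[[0.1, 0.2, 0.3]],
))
print(len(motion))  # 8 frames in this toy sketch
```

The point of the sketch is only the call signature: the same entry point serves text-only generation (input_motion=None) and pose-conditioned generation by switching the task string.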
The challenge of building a model to complete such a (text, motion)-to-motion generation task lies in understanding multimodal control conditions and generating human motions with varying motion lengths and richer patterns. We argue that these challenges can be naturally resolved by adapting LLMs, for the following reasons. First, recent studies have demonstrated that LLMs can understand multimodal inputs, e.g., images (Zhu et al. 2023; Du et al. 2023; Li et al. 2023a; Liu et al. 2023; Ye et al. 2023) and videos (Li et al. 2023b), through a lightweight adapter (Hu et al. 2021a). Therefore, we expect that LLMs can also understand motion sequences given an appropriate adapter. Second, LLMs can provide diverse human motion contexts for motion generation because they have encoded diverse motion patterns from extensive large-scale text data. This enables our motion generator, finetuned from LLMs, to produce motions with rich patterns. Third, since LLMs output tokens autoregressively, producing human motion with flexible sequence lengths is no longer an obstacle. To this end, we propose a Motion General-Purpose generaTor (MotionGPT) by fine-tuning an LLM following designed instructions. Specifically, MotionGPT first maps human poses into discrete motion codes via a pre-trained motion VQ-VAE and then generates instructions by combining codes from language prompts and motion prompts. The LLM is fine-tuned to answer the instructions with the correct human pose sequences in an efficient way via the well-known LoRA adaptation. The designed motion instruction tuning framework can incorporate pose sequence information into the fine-tuned large language model while taking advantage of the strong motion priors in the original large language model. We conduct extensive experiments on the HumanML3D (Guo et al.
2022a) and KIT-ML (Plappert, Mandery, and Asfour 2016) datasets, demonstrating that MotionGPT has a strong ability for motion generation with multiple control conditions. Remarkably, MotionGPT achieves this with a significantly small set of training parameters (33 M) and in less training time (about 4 hours, or just 10% of the time taken by other methods). We observe that joint training under multiple control instructions outperforms training with a single type of control signal, showing the effectiveness of our unified motion generation training paradigm. Our contributions can be summarized as follows:

• We introduce a novel model, MotionGPT, for generating human motions, which allows for multiple types of control during the generation process. To the best of our knowledge, MotionGPT is the first method to use both text and poses as conditions. It supports generating subsequent, preceding, or ‘in-betweening’ motions using a single and unified model.
• We demonstrate that a pre-trained LLM can be readily tuned to function as a human motion generator, suggesting the potential for directly utilizing LLMs for human motion generation.
• We present a comprehensive set of experiments showcasing the effectiveness of our proposed MotionGPT with multiple types of control signals. Experimental results also indicate that using a more powerful LLM results in superior motion generation quality, suggesting that further advancements in LLM technology could substantially enhance the performance of MotionGPT in the future.

Related Work

Large language models. Recently, large language models (Devlin et al. 2018; Radford et al. 2018, 2019; Brown et al. 2020; OpenAI 2023; Touvron et al. 2023) have developed dramatically, e.g., BERT (Devlin et al. 2018), GPT (Radford et al. 2018), and Google T5 (Raffel et al. 2020).
These models, such as GPT-4 (OpenAI 2023), demonstrate exceptional performance on various linguistic tasks, thanks to the extensive training data (45 gigabytes in the case of GPT-4) and the large number of parameters they leverage. Previously, language models were task-specific, focusing on areas such as translation and sentiment analysis. However, recent developments, like ChatGPT, have expanded the capability of these models. Based on GPT-4, ChatGPT can interact with humans, showcasing its strong natural language understanding abilities. This effectiveness has opened up possibilities for a myriad of downstream tasks achieved through fine-tuning these LLMs. However, fine-tuning such models, considering their extensive parameters, is a challenging task. To address this issue, efficient fine-tuning strategies have been proposed, including prompt tuning (Lester, Al-Rfou, and Constant 2021; Liu et al. 2021; Hu et al. 2021b), adapters (Houlsby et al. 2019; He et al. 2021; Le et al. 2021), and LoRA (Hu et al. 2021a). Our work draws inspiration from the recent progress in LLMs, but it also addresses a distinct problem by introducing a new modality into the LLMs.

Human motion generation. Motion generation (Tevet et al. 2022; Habibie et al. 2017; Petrovich, Black, and Varol 2021; Li et al. 2017; Zhang et al. 2022; Guo et al. 2020; Tevet et al. 2023; Petrovich, Black, and Varol 2022; Li et al. 2021) is a long-standing task that can be conditioned on various signals, such as motion descriptions, actions, and music. For instance, HP-GAN (Barsoum, Kender, and Liu 2018) and (Martinez, Black, and Romero 2017) utilize a sequence-to-sequence model to anticipate future poses based on prior poses. ACTOR (Petrovich, Black, and Varol 2021) employs a transformer VAE for both unconditional and action-based generation. TRAJEVAE (Kania, Kowalski, and Trzciński 2021), when supplied with an initial pose and a trajectory, can generate a motion sequence that follows the given path.
In recent years, text-conditional motion generation has garnered significant attention. This approach focuses on generating human motion sequences conditioned on textual descriptions. TEMOS (Petrovich, Black, and Varol 2022) proposes a VAE model that learns a shared latent space for both motion and text. MotionDiffuse (Zhang et al. 2022) integrates a diffusion model into the text-to-motion generation framework and accomplishes impressive results. MDM (Tevet et al. 2023), aiming to enhance motion-text consistency, uses CLIP (Radford et al. 2021) as the text encoder to incorporate more robust text priors into the model. In comparison to previous methods, our work, MotionGPT, stands out as the first unified motion generation model that supports multimodal controls.

MotionGPT: A Motion General-Purpose Generator

MotionGPT is a Motion General-Purpose generaTor controlled by multimodal conditions, i.e., texts and human poses at keyframes. Our motivation is to formulate human motion generation as a problem of asking a large language model to generate desirable human motions according to task prompts and control conditions. Specifically, we quantize motion controls into discrete codes using the widely-used VQ-VAE (Van Den Oord, Vinyals et al. 2017). Discrete motion codes, text control conditions, and designed task instructions are then organized into a unified question template for the LoRA-finetuned LLM to generate a human motion sequence answer. Following the typical framework of instruction tuning, we leverage cross-entropy loss to supervise the LoRA adapter. More importantly, our MotionGPT can address not only existing human motion generation tasks, e.g., text-to-motion generation, but also new motion generation tasks by simply adjusting the task instructions, showing the potential of MotionGPT as a generic baseline framework for motion generation.

Motion Code Generation

VQ-VAE proposed in (Van Den Oord, Vinyals et al.
2017) enables the model to learn discrete representations for generative models.

Figure 1: This work proposes a novel human motion generation method via fine-tuned LLMs, named MotionGPT. Compared with previous methods, MotionGPT has the unique ability to accept multiple control conditions and solve various motion generation tasks using a unified model.

Given a human pose m, the motion VQ-VAE can be trained by the reconstruction loss, the embedding loss, and the commitment loss, i.e.,

\mathcal{L}_{\mathrm{VQVAE}} = \|D(E(m)) - m\|_2^2 + \|\mathrm{sg}[E(m)] - e\|_2^2 + \beta \|E(m) - \mathrm{sg}[e]\|_2^2,  (1)

where E and D are the motion encoder and the motion decoder, respectively, and sg indicates the stop-gradient operation. Here, the estimated embedding e after quantization can be found by searching for the nearest embedding in a learnable codebook B = {b_1, b_2, ..., b_N}, where N is the size of the codebook, which can be mathematically formulated as

e = \arg\min_{b_k \in B} \|E(m) - b_k\|_2.  (2)

Based on the estimated latent representation e of the motion m, the reconstructed human pose \hat{m} can be produced by the decoder of the VQ-VAE, and the motion code p of the human pose m can be calculated as the index of its nearest embedding in the codebook, i.e.,

\hat{m} = D(e),  p = \arg\min_{k} \|E(m) - b_k\|_2.  (3)

Instruction Generation

In MotionGPT, we design instructions that combine task prompts and control conditions to enable (text, motion)-to-motion generation tasks.
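Returning to the motion code generation step: the nearest-codebook quantization of Eqs. (2)-(3) can be sketched in a few lines. Here a random vector stands in for the encoder output z = E(m), and the codebook B is randomly initialized rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 512, 64                       # codebook size and embedding dimension
codebook = rng.normal(size=(N, d))   # B = {b_1, ..., b_N}

def quantize(z):
    """Return the motion code p and quantized embedding e for encoder output z."""
    dists = np.linalg.norm(codebook - z, axis=1)  # ||E(m) - b_k||_2 for all k
    p = int(np.argmin(dists))                     # motion code p (Eq. 3)
    e = codebook[p]                               # quantized embedding e (Eq. 2)
    return p, e

z = rng.normal(size=d)
p, e = quantize(z)
assert np.allclose(e, codebook[p])   # e is exactly the selected codebook entry
```

At training time the codebook entries are updated by the embedding and commitment terms of Eq. (1); at inference the integer p is what gets fed to (or read from) the LLM.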
Specifically, given the task prompts T = {t_1, t_2, ..., t_{n_t}}, the text control conditions X = {x_1, x_2, ..., x_{n_x}}, and the pose control conditions P = {p_1, p_2, ..., p_{n_p}}, where n_t, n_x, and n_p are the numbers of codes in T, X, and P, the instruction I is formulated as

% General control conditions format
Control Conditions: {Text control conditions X <x_1, x_2, ..., x_{n_x}>} {Pose control conditions P <p_1, p_2, ..., p_{n_p}>}
% General instruction format
Instruction I: {Task Prompts T <t_1, t_2, ..., t_{n_t}>} {Control Conditions}

Here, the pose control conditions P = {p_1, p_2, ..., p_{n_p}} are pose codes generated by the same motion VQ-VAE mentioned earlier. Consequently, the entire instruction I can be regarded as a sequence of specialized text inputs. By generating different motion instructions, our MotionGPT can address both existing and new human motion generation tasks.

Fine-tuning LLM by Motion Instructions

Instruction tuning (Wei et al. 2021) enables LLMs to handle various generation tasks by posing questions to the LLM in different instruction formats. Therefore, we design various instructions that combine both task descriptions and control conditions to fine-tune the large language model with the widely-used and efficient Low-Rank Adaptation (LoRA) (Hu et al. 2021a). Specifically, given a large language model F, the general template of our instructions I and the answer of the LLM \hat{P} = F(I) are formulated as

Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
% Task Prompts: Code sequences of Task Prompts
% Control Conditions: Code sequences of Control Conditions
Instruction I: {Task Prompts T} {Control Conditions}
Answer \hat{P}: {Sequences of Human Motions}

The answer of the LLM, \hat{P} = {\hat{p}_1, \hat{p}_2, ..., \hat{p}_{n_{\hat{p}}}}, is a series of generated motion codes, which can be decoded to human
Specifically, for existing text-to-motion generation setting, MotionGPT address it by constructing following instruction I: Instruction (I) : {Task Prompts: "Generate a sequence of motion tokens matching the following human motion description."} {Control Conditions: Text control condition X} By adjusting instructions, MotionGPT can be easily adapted to multiple control conditions, e.g. text and an arbitrary number of human poses: Instruction (I) : {Task Prompts: "Generate a sequence of motion tokens matching the following human motion description given the init/last/key pose tokens."} {Control Conditions: Text control condition X <Motion Token> Pose control conditions P </Motion Token>} Experiment Datasets and Evaluation Metrics Datasets We apply two widely-used datasets, HumanML3D (Guo et al. 2022a) and KIT-ML (Plappert, Mandery, and Asfour 2016) for evaluation. Evaluation metrics Our evaluation comprises two categories of metrics. Firstly, to assess the quality of the generated motion, we adopt evaluation metrics consistent with previous methods. These include the Frechet Inception Distance (FID), Multi-modal Distance (MM Dist), R-Precision (calculating the Top-1/2/3 motion-to-text retrieval accuracy), and the Diversity metric. These metrics collectively provide a robust indication of both the realism and diversity of the generated motion. Secondly, we introduce new metrics tailored to our proposed motion generation setting, including Reconstruction Loss (Recon) and Velocity Loss (Vel). Specifically, these metrics aim to measure the consistency between the provided pose conditions and the generated motion. More information about datasets, proposed new metrics, and implementation details are included in the supplementary material (Zhang et al. 2023b). 
Comparisons for Motion Generation with Multiple Control Conditions In this section, we conduct four different generation experiments with 1) text as the condition, 2) text and initial pose as the condition, 3) text and last pose as the condition, and 4) text and random keyframe pose as the condition. For both 2) and 3), we use 4 frame poses as the input pose condition; While for 4), we random sample 12 to 20 frame poses as the pose condition. The quantitative results of motion quality are depicted in Tab. 1 and Tab. 2. As illustrated in Tab. 1, our proposed model, MotionGPT, exhibits a performance that is competitive with state-of-the-art methods for text-to-motion generation. Specifically, MotionGPT consistently achieves comparable results across all metrics on both HumanML3D (Guo et al. 2022a) and KIT-ML (Plappert, Mandery, and Asfour 2016) datasets. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7371 Methods HumanML3D KIT-ML FID ↓ MM Dist ↓ Diversity ↑ FID ↓ MM Dist ↓ Diversity ↑ Real motion 0.002 2.974 9.503 0.031 2.788 11.08 TEMOS (Petrovich, Black, and Varol 2022) 3.734 3.703 8.973 3.717 3.417 10.84 TM2T (Guo et al. 2022b) 1.501 3.467 8.589 1.501 3.467 8.589 T2M (Guo et al. 2022a) 1.087 3.347 9.175 3.022 3.488 10.72 MotionDiffuse (Zhang et al. 2022) 0.630 3.113 9.410 1.954 2.958 11.10 MDM (Tevet et al. 2023) 0.544 5.566 9.559 0.497 9.191 10.85 MLD (Xin et al. 2023) 0.473 3.196 9.724 0.404 3.204 10.80 T2M-GPT (Zhang et al. 2023a) 0.116 3.118 9.761 0.514 3.007 10.92 MotionGPT-13B (Ours) 0.567 3.775 9.006 0.597 3.394 10.54 Table 1: Comparisons of text-to-motion generation with the state-of-the-art methods on HumanML3D and KIT-ML test set. MotionGPT-13B achieves comparable performance on all metrics. Bold and underline indicate the best and the second best result. 
Figure 3: Generated motion by MotionGPT with multiple control conditions on HumanML3D. Panels show generations conditioned on text plus initial, last, or key tokens.
The adoption of additional control conditions, such as initial, last, or key tokens, does not compromise the quality of the generated motions. In some instances, such as when provided with initial or key tokens, MotionGPT even outperforms its text-only counterpart, improving the FID on HumanML3D from 0.567 to 0.520 and 0.367, respectively, demonstrating its robustness and flexibility in handling diverse control modalities. Nevertheless, a slight decrease in performance is observed when the model is given the final pose as input, which is in line with our expectations, as generating motions with a predetermined end pose presents an inherently greater challenge. Despite this, MotionGPT's performance remains commendable, further affirming its capability to generate high-quality, diverse motions under various control conditions. We present visualization results in Fig. 3 and Fig. 4. As shown in Fig. 3, the motions generated by our model exhibit a notable alignment with the provided poses, while also displaying a consistent adherence to the textual descriptions. For the text-to-motion generation task, we compare our model, MotionGPT, with MDM (Tevet et al. 2023), as depicted in Fig. 4. Our model demonstrates superior text-consistency and text-completeness compared to MDM. The motions generated by the MDM model often tend to align with only the initial segment of the description, ignoring the latter half. In contrast, our approach exhibits a more comprehensive understanding of the motion descriptions by leveraging the powerful capabilities of LLMs, thus generating more complete and nuanced motion sequences.

Ablation Study

Additionally, extensive ablation studies are conducted on the HumanML3D (Guo et al. 2022a) dataset to demonstrate the effectiveness of our MotionGPT. More ablation studies are included in the supplementary material (Zhang et al. 2023b).

Capability of pre-trained LLM Pre-trained LLMs can provide robust priors about human motion from texts.
In this context, we experiment with base models pre-trained to varying degrees, including LLaMA-7B, LLaMA-13B, and LLaMA without pre-training. For the un-pretrained LLaMA, we adopt the same network structure as LLaMA-7B without loading the pre-trained weights. The randomly initialized LLaMA is likewise tuned with LoRA, with the base weights kept fixed during training. As demonstrated in Tab. 3, our results show a strong correlation between the level of pre-training in LLMs and the performance of our model in the text-to-motion generation task. This highlights the significant influence of the motion prior extracted from the LLM. Note that the training parameters of LoRA are the same.

Pre-trained Model | FID↓ | MM Dist↓ | R-Precision↑ (Top-1 / Top-2 / Top-3) | Diversity↑
LLaMA w/o pre-training | 26.01 | 8.445 | 0.032 / 0.067 / 0.106 | 9.745
LLaMA-7B | 0.590 | 3.796 | 0.376 / 0.553 / 0.657 | 9.048
LLaMA-13B | 0.542 | 3.584 | 0.411 / 0.594 / 0.696 | 9.311

Table 3: Evaluation of text-to-motion generation using different pre-trained LLaMA on the HumanML3D validation set. Bold indicates the best result.

Task | Training Strategy | FID↓ | MM Dist↓ | R-Precision↑ (Top-1 / Top-2 / Top-3) | Diversity↑
Text | Separate | 0.670 | 4.267 | 0.299 / 0.469 / 0.577 | 9.745
+ Initial token | Separate | 0.756 | 3.802 | 0.374 / 0.556 / 0.658 | 9.148
+ Last token | Separate | 1.409 | 4.516 | 0.290 / 0.446 / 0.564 | 8.771
+ Key tokens | Separate | 0.702 | 3.690 | 0.370 / 0.546 / 0.668 | 8.974
Text | Joint | 0.590 (−.180) | 3.796 (−.471) | 0.376 (+.077) / 0.553 (+.084) / 0.657 (+.080) | 9.048 (−.697)
+ Initial token | Joint | 0.493 (−.263) | 3.750 (−.052) | 0.384 (+.010) / 0.564 (+.008) / 0.666 (+.008) | 9.378 (+.230)
+ Last token | Joint | 0.646 (−.763) | 3.675 (−.841) | 0.393 (+.103) / 0.577 (+.131) / 0.681 (+.117) | 9.030 (+.259)
+ Key tokens | Joint | 0.390 (−.663) | 3.492 (−.198) | 0.416 (+.046) / 0.597 (+.051) / 0.713 (+.045) | 9.621 (+.647)

Table 4: Comparisons between separate training for each task and joint training for multiple tasks on the HumanML3D validation set using MotionGPT-7B. Values in parentheses indicate the improvement or decrement in the metric relative to separate training. Joint training achieves better performance for all tasks.

Setting | Recon↓ | Vel↓
Initial token, Text-only | 24.70 | 1.095
Initial token, Text + Initial poses | 13.78 | 0.549
Last token, Text-only | 19.70 | 1.172
Last token, Text + Last poses | 6.831 | 0.397
Key tokens, Text-only | 8.035 | 3.813
Key tokens, Text + Random poses | 5.383 | 2.423

Table 5: Evaluation of the effectiveness of pose control conditions on the HumanML3D test set using the MotionGPT-13B model.

Consistency with pose control conditions We demonstrate the effectiveness of pose control conditions by assessing the consistency between pose controls and generated motion on the HumanML3D test set. For each task (initial/last/key), we generate motion with and without pose controls using (text+pose)-to-motion and text-to-motion methods, respectively. The results are shown in Tab. 5. In comparison to text-only generation, generating under pose conditions yields better keyframe pose consistency, showcasing the effectiveness of (text+pose)-to-motion generation with pose control.

Comparison with separate training To further evaluate the effectiveness of our unified motion generation approach, we conduct separate training for each task on the HumanML3D dataset (Guo et al. 2022a). The aim is to investigate if multi-task learning could improve the performance of individual control conditions. The comparison results are depicted in Table 4. We find that joint training across all tasks yields significant improvements in all metrics. This effect is especially pronounced when text and last poses are used as conditions. These findings underscore the utility of our unified motion generation approach. It appears that the model's ability to generate motions under a specific control type is boosted by the knowledge derived from other related control conditions.
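The LoRA ablation above keeps the pre-trained (or randomly initialized) LLaMA weights frozen and trains only low-rank adapter matrices. The following is a minimal numpy sketch of a LoRA-adapted linear layer; the class name, dimensions, and initialization are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class LoRALinear:
    """Frozen base weight W plus a trainable low-rank update B @ A (LoRA).

    Hypothetical minimal sketch: only A and B would receive gradients;
    the pre-trained weight W is never modified during tuning.
    """

    def __init__(self, d_in, d_out, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01   # trainable, small init
        self.B = np.zeros((d_out, rank))                    # trainable, zero init
        self.scale = alpha / rank                           # LoRA scaling factor

    def __call__(self, x):
        # y = x W^T + (alpha / r) * x A^T B^T
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(d_in=16, d_out=8)
x = np.ones((2, 16))
y0 = layer(x)
# B is zero-initialized, so at the start of training the adapted layer
# behaves exactly like the frozen pre-trained layer.
assert np.allclose(y0, x @ layer.W.T)
```

Because B starts at zero, training begins from the frozen model's behavior, which is consistent with the paper's setup of fixing base weights and tuning only LoRA parameters.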
Conclusion and Limitations

Conclusion This study introduces MotionGPT, a novel method capable of generating human motion using multimodal control signals, such as text and single-frame poses. The approach effectively discretizes pose conditions and creates a unified set of instructions by combining codes from both textual and pose prompts. With MotionGPT, we envision a path toward more practical and versatile motion generation systems, offering a fresh perspective in the field.

Limitations Although MotionGPT could in principle support control modalities beyond human poses and text, this paper only validates its effectiveness on text and human poses. Validating MotionGPT on a broader spectrum of possible modalities, such as music pieces, would be highly beneficial to more applications in the real world.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant No. 62121002 and Grant No. 62272430).

References

Barsoum, E.; Kender, J.; and Liu, Z. 2018. HP-GAN: Probabilistic 3D human motion prediction via GAN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 1418–1427.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33: 1877–1901.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805.
Du, Y.; Konyushkova, K.; Denil, M.; Raju, A.; Landon, J.; Hill, F.; de Freitas, N.; and Cabi, S. 2023. Vision-language models as success detectors. arXiv:2303.07280.
Guo, C.; Zou, S.; Zuo, X.; Wang, S.; Ji, W.; Li, X.; and Cheng, L. 2022a. Generating diverse and natural 3D human motions from text.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5152–5161.
Guo, C.; Zuo, X.; Wang, S.; and Cheng, L. 2022b. TM2T: Stochastic and tokenized modeling for the reciprocal generation of 3D human motions and texts. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXV, 580–597. Springer.
Guo, C.; Zuo, X.; Wang, S.; Zou, S.; Sun, Q.; Deng, A.; Gong, M.; and Cheng, L. 2020. Action2Motion: Conditioned generation of 3D human motions. In Proceedings of the 28th ACM International Conference on Multimedia, 2021–2029.
Habibie, I.; Holden, D.; Schwarz, J.; Yearsley, J.; and Komura, T. 2017. A recurrent variational autoencoder for human motion synthesis. In Proceedings of the British Machine Vision Conference (BMVC).
He, R.; Liu, L.; Ye, H.; Tan, Q.; Ding, B.; Cheng, L.; Low, J.-W.; Bing, L.; and Si, L. 2021. On the effectiveness of adapter-based tuning for pretrained language model adaptation. arXiv:2106.03164.
Houlsby, N.; Giurgiu, A.; Jastrzebski, S.; Morrone, B.; De Laroussilhe, Q.; Gesmundo, A.; Attariyan, M.; and Gelly, S. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, 2790–2799. PMLR.
Hu, E. J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; and Chen, W. 2021a. LoRA: Low-rank adaptation of large language models. arXiv:2106.09685.
Hu, S.; Ding, N.; Wang, H.; Liu, Z.; Wang, J.; Li, J.; Wu, W.; and Sun, M. 2021b. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. arXiv:2108.02035.
Kania, K.; Kowalski, M.; and Trzciński, T. 2021. TrajeVAE: Controllable Human Motion Generation from Trajectories. arXiv:2104.00351.
Le, H.; Pino, J.; Wang, C.; Gu, J.; Schwab, D.; and Besacier, L. 2021. Lightweight adapter tuning for multilingual speech translation. arXiv:2106.01463.
Lester, B.; Al-Rfou, R.; and Constant, N. 2021.
The power of scale for parameter-efficient prompt tuning. arXiv:2104.08691.
Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023a. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv:2301.12597.
Li, K.; He, Y.; Wang, Y.; Li, Y.; Wang, W.; Luo, P.; Wang, Y.; Wang, L.; and Qiao, Y. 2023b. VideoChat: Chat-Centric Video Understanding. arXiv:2305.06355.
Li, R.; Yang, S.; Ross, D. A.; and Kanazawa, A. 2021. AI Choreographer: Music conditioned 3D dance generation with AIST++. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 13401–13412.
Li, Z.; Zhou, Y.; Xiao, S.; He, C.; Huang, Z.; and Li, H. 2017. Auto-conditioned recurrent networks for extended complex human motion synthesis. arXiv:1707.05363.
Liu, H.; Li, C.; Wu, Q.; and Lee, Y. J. 2023. Visual Instruction Tuning. arXiv:2304.08485.
Liu, X.; Ji, K.; Fu, Y.; Tam, W. L.; Du, Z.; Yang, Z.; and Tang, J. 2021. P-Tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv:2110.07602.
Lu, Z.; Huang, D.; Bai, L.; Liu, X.; Qu, J.; and Ouyang, W. 2023. Seeing is not always believing: A Quantitative Study on Human Perception of AI-Generated Images. arXiv:2304.13023.
Martinez, J.; Black, M. J.; and Romero, J. 2017. On human motion prediction using recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2891–2900.
OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774.
Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744.
Petrovich, M.; Black, M. J.; and Varol, G. 2021. Action-conditioned 3D human motion synthesis with transformer VAE. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10985–10995.
Petrovich, M.; Black, M.
J.; and Varol, G. 2022. TEMOS: Generating diverse human motions from textual descriptions. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXII, 480–497. Springer.
Plappert, M.; Mandery, C.; and Asfour, T. 2016. The KIT motion-language dataset. Big Data, 4(4): 236–252.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR.
Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.; et al. 2018. Improving language understanding by generative pre-training.
Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; and Sutskever, I. 2019. Language Models are Unsupervised Multitask Learners.
Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1): 5485–5551.
Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with CLIP latents. arXiv:2204.06125.
Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and Sutskever, I. 2021. Zero-shot text-to-image generation. In International Conference on Machine Learning, 8821–8831. PMLR.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684–10695.
Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, K.; Gontijo Lopes, R.; Karagol Ayan, B.; Salimans, T.; et al. 2022.
Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35: 36479–36494.
Tevet, G.; Gordon, B.; Hertz, A.; Bermano, A. H.; and Cohen-Or, D. 2022. MotionCLIP: Exposing human motion generation to CLIP space. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXII, 358–374. Springer.
Tevet, G.; Raab, S.; Gordon, B.; Shafir, Y.; Cohen-Or, D.; and Bermano, A. H. 2023. Human Motion Diffusion Model. In The Eleventh International Conference on Learning Representations.
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023. LLaMA: Open and efficient foundation language models. arXiv:2302.13971.
Van Den Oord, A.; Vinyals, O.; et al. 2017. Neural discrete representation learning. Advances in Neural Information Processing Systems, 30.
Wei, J.; Bosma, M.; Zhao, V. Y.; Guu, K.; Yu, A. W.; Lester, B.; Du, N.; Dai, A. M.; and Le, Q. V. 2021. Finetuned language models are zero-shot learners. arXiv:2109.01652.
Xin, C.; Jiang, B.; Liu, W.; Huang, Z.; Fu, B.; Chen, T.; Yu, J.; and Yu, G. 2023. Executing your Commands via Motion Diffusion in Latent Space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Ye, Q.; Xu, H.; Xu, G.; Ye, J.; Yan, M.; Zhou, Y.; Wang, J.; Hu, A.; Shi, P.; Shi, Y.; Jiang, C.; Li, C.; Xu, Y.; Chen, H.; Tian, J.; Qi, Q.; Zhang, J.; and Huang, F. 2023. mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality. arXiv:2304.14178.
Yu, J.; Xu, Y.; Koh, J. Y.; Luong, T.; Baid, G.; Wang, Z.; Vasudevan, V.; Ku, A.; Yang, Y.; Ayan, B. K.; et al. 2022. Scaling autoregressive models for content-rich text-to-image generation. arXiv:2206.10789.
Zhang, J.; Zhang, Y.; Cun, X.; Huang, S.; Zhang, Y.; Zhao, H.; Lu, H.; and Shen, X. 2023a.
T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Zhang, M.; Cai, Z.; Pan, L.; Hong, F.; Guo, X.; Yang, L.; and Liu, Z. 2022. MotionDiffuse: Text-driven human motion generation with diffusion model. arXiv:2208.15001.
Zhang, Y.; Huang, D.; Liu, B.; Tang, S.; Lu, Y.; Chen, L.; Bai, L.; Chu, Q.; Yu, N.; and Ouyang, W. 2023b. MotionGPT: Finetuned LLMs are General-Purpose Motion Generators. arXiv:2306.10900.
Zhu, D.; Chen, J.; Shen, X.; Li, X.; and Elhoseiny, M. 2023. MiniGPT-4: Enhancing vision-language understanding with advanced large language models. arXiv:2304.10592.
Zhuang, W.; Wang, C.; Chai, J.; Wang, Y.; Shao, M.; and Xia, S. 2022. Music2Dance: DanceNet for music-driven dance generation. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 18(2): 1–21.
Prompt-Based Distribution Alignment for Unsupervised Domain Adaptation

Shuanghao Bai1, Min Zhang2, Wanqi Zhou1,4, Siteng Huang2, Zhirong Luan3, Donglin Wang2*, Badong Chen1*
1Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
2Westlake University Institute of Advanced Technology, Westlake Institute for Advanced Study
3School of Electrical Engineering, Xi'an University of Technology, Xi'an, China
4RIKEN AIP

Abstract

Recently, despite the unprecedented success of large pre-trained visual-language models (VLMs) on a wide range of downstream tasks, the real-world unsupervised domain adaptation (UDA) problem is still not well explored. Therefore, in this paper, we first experimentally demonstrate that unsupervised-trained VLMs can significantly reduce the distribution discrepancy between source and target domains, thereby improving the performance of UDA. However, a major challenge for directly deploying such models on downstream UDA tasks is prompt engineering, which requires aligning the domain knowledge of the source and target domains, since the performance of UDA is severely influenced by a good domain-invariant representation. We further propose a Prompt-based Distribution Alignment (PDA) method to incorporate the domain knowledge into prompt learning. Specifically, PDA employs a two-branch prompt-tuning paradigm, namely the base branch and the alignment branch. The base branch focuses on integrating class-related representations into prompts, ensuring discrimination among different classes. To further minimize the domain discrepancy, for the alignment branch, we construct feature banks for both the source and target domains and propose image-guided feature tuning (IFT) to make the input attend to the feature banks, which effectively integrates self-enhanced and cross-domain features into the model. In this way, these two branches can be mutually promoted to enhance the adaptation of VLMs for UDA.
We conduct extensive experiments on three benchmarks to demonstrate that our proposed PDA achieves state-of-the-art performance. The code is available at https://github.com/BaiShuanghao/Prompt-based-Distribution-Alignment.

Introduction

Unsupervised domain adaptation (UDA) aims to improve the generalization performance of a pre-trained model in the target domain by using the labeled source domain and the unlabeled target domain (Wilson and Cook 2020; Zhu et al. 2023a). Many methods have been proposed to address the UDA problem, mainly including adversarial training (Ganin and Lempitsky 2015; Rangwani et al. 2022) and metric learning (Saito et al. 2018; Tang, Chen, and Jia 2020; Zhang, Wang, and Gai 2020). However, mitigating distribution shift by domain alignment may inadvertently result in a loss of semantic information, which comes from the entangled nature of semantic and domain information (Tang, Chen, and Jia 2020; Ge et al. 2022; Zhang, Huang, and Wang 2022).

*Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Metric comparisons on Office-Home. Higher values are better. r measures the compactness of features (i.e., the ratio of the inner-class L2 distance to the inter-class L2 distance L2^inter). MMD and KL divergence measure the domain discrepancy. T, I_s and I_t denote the text features, and the image features of the source and target domain, respectively. Our method demonstrates the most discriminable text features, the most compact image features, the lowest domain discrepancy, and the best accuracy. (The figure plots L2^inter(T), MMD^-1(I_s, I_t), KL^-1(I_s, I_t), Acc, r(I_s), and r(I_t) for ViT, CLIP, MaPLe, and PDA (Ours).)

Recently, large vision language models (VLMs) like CLIP (Radford et al. 2021) have shown impressive generalization performance in various downstream tasks.
With their disentangled visual and semantic representations, VLMs may avoid the loss of semantic information and improve UDA performance. In light of this, we conduct an empirical experiment to demonstrate the applicability of VLMs to the UDA problem. Specifically, we evaluate the performance of both the unimodal Vision Transformer (ViT) (Dosovitskiy et al. 2021) and zero-shot CLIP with hand-crafted prompts. In Figure 1, although the compactness of the source features r(I_s) and target features r(I_t) of CLIP is similar to that of the supervised-trained ViT, the maximum mean discrepancy (MMD) and KL divergence (KL) are much lower, resulting in higher target-domain accuracy (Acc). This indicates that CLIP has the potential to minimize the domain discrepancy for UDA, which benefits from the multi-modal interaction. To further adapt VLMs to downstream UDA tasks, one of the most efficient paradigms is prompt tuning. Current state-of-the-art prompt tuning methods, such as CoOp (Zhou et al. 2022b) and MaPLe (Khattak et al. 2023), have demonstrated superior performance on some specific downstream tasks. The CoOp method adopts soft prompts to learn an appropriate text prompt, and MaPLe further introduces vision-language prompts to ensure mutual synergy. As shown in Figure 1, we observe that 1) MaPLe takes a step towards aligning domains compared to CLIP, as evidenced by its lower KL divergence and MMD, which indicates that prompt tuning can help minimize the domain shift; 2) the image features of MaPLe are more compact, indicating prompt tuning can further improve the discriminative ability of the CLIP model. Nonetheless, prompt tuning methods such as CoOp or MaPLe may not be sufficient to fully address the domain shift problem, because they primarily focus on the placement of the prompt and may not directly tackle the underlying causes of the domain shift.
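The domain-discrepancy measurements discussed above (e.g., MMD between source and target image features in Figure 1) can be illustrated with a small numpy sketch. This uses a linear-kernel MMD, a deliberate simplification of the kernel MMD used in the UDA literature, and the variable names and toy data are our own assumptions:

```python
import numpy as np

def mmd_linear(xs, xt):
    """Squared MMD with a linear kernel: ||mean(xs) - mean(xt)||^2.
    xs, xt: (n, d) and (m, d) feature matrices from two domains."""
    delta = xs.mean(axis=0) - xt.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
src = rng.standard_normal((200, 32))            # source-domain features
tgt_near = rng.standard_normal((200, 32))       # same distribution as source
tgt_far = rng.standard_normal((200, 32)) + 2.0  # mean-shifted distribution
# A larger distribution shift yields a larger discrepancy value.
assert mmd_linear(src, tgt_far) > mmd_linear(src, tgt_near)
```

A kernelized MMD (e.g., with RBF kernels) detects higher-order distribution differences as well; the linear version shown here only compares feature means, which is enough to convey the idea.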
Therefore, we argue that prompts should not only focus on their design but also adapt to different domains by incorporating domain knowledge into the prompt. To this end, we propose a Prompt-based Distribution Alignment (PDA) method for UDA. PDA consists of two branches, namely the base branch and the alignment branch. The base branch generates the image and text representations with prompt tuning, which focuses on integrating class-related representations into prompts, ensuring discrimination among different classes for each domain. The principal objective for UDA is to minimize the distribution shift of image representations. The alignment branch utilizes image representations to introduce domain knowledge that minimizes the domain discrepancy. To achieve this, we first construct a source-domain and a target-domain feature bank and propose image-guided feature tuning (IFT) to make the image representations of inputs attend to the feature banks, which can effectively integrate self-enhanced and cross-domain features into the model. As shown in Figure 1, PDA not only excels in obtaining more discriminable image and text representations but also effectively mitigates the domain discrepancy. Therefore, our method can guarantee the discriminability of the model and effectively capture important features from both the source and target domains, which enables domain alignment and allows the model to better adapt to the target domain. Our main contributions are as follows:
• We first experimentally verify the effectiveness of VLMs on UDA downstream tasks. Then, based on this finding, we further propose a prompt-based distribution alignment (PDA) method to tune prompts for the target domain.
• The proposed PDA includes two training branches. First, the base branch ensures discrimination among different classes. Second, the alignment branch obtains domain-invariant information by image-guided feature tuning.
• Extensive experiments demonstrate the effectiveness of the proposed PDA, which achieves state-of-the-art performance on Office-Home, Office-31 and VisDA-2017.

Related Work

Unsupervised Domain Adaptation

Unsupervised domain adaptation (UDA) aims to align the source and target domains by learning a domain-invariant feature representation (Zhang et al. 2023b; Chen, Xiao, and Kuang 2022; Xiao et al. 2022). One line of work aligns domains by minimizing a divergence between them; many divergence measures have been proposed, such as maximum mean discrepancy (MMD) (Long et al. 2015), correlation alignment (CORAL) (Sun, Feng, and Saenko 2016) and maximum density divergence (MDD) (Zhang et al. 2019). Another line of work is motivated by the success of adversarial learning. By modeling the optimization process as a minimax problem (Ganin and Lempitsky 2015; Long et al. 2018; Rangwani et al. 2022; Xiao et al. 2021), a domain discriminator is introduced to distinguish the samples from different domains, with the aim of training the model to generate domain-invariant features that can deceive the domain discriminator. With the advent of transformer models, TVT (Yang et al. 2023) proposes an adaptation module to obtain both transferable and discriminative features, and CDTrans (Xu et al. 2022) leverages the robustness of cross-attention modules and proposes a cross-domain transformer for direct feature alignment. Different from these mainstream unimodal UDA methods, we focus on harnessing the transferability inherent in vision language models, which exhibit a promising capacity for domain alignment due to multimodal interaction.

Vision Language Models

Pre-trained Vision Language Models (VLMs) learn image-text correlation by various pre-training tasks, such as masked language modeling (Kim, Son, and Kim 2021), masked language modeling (Tan and Bansal 2019), image-text matching (Huang et al. 2021) and contrastive learning (Jia et al. 2021; Zhang et al.
2022a; Chen et al. 2021). Although these models have achieved unprecedented success across a wide range of tasks, including zero-shot and few-shot visual recognition, effectively adapting them to downstream tasks remains a formidable challenge. Many works have been proposed to enhance the generalization ability on downstream tasks by introducing additional feature adapters (Gao et al. 2021; Zhang et al. 2023a; Bai et al. 2024), attention (Guo et al. 2023), cache models (Zhang et al. 2022b) and so on. The prompt learning paradigm, initially employed in the field of Natural Language Processing (NLP), has also been integrated into VLMs, emerging as one of the most efficient approaches for fine-tuning VLMs on various downstream tasks. In this work, we follow the line of prompt learning methods and propose a prompt-based distribution alignment method to improve the transferability of CLIP for addressing the UDA problem.

Prompt Tuning in Vision Language Models

Prompt tuning is one of the important parts of parameter-efficient tuning, which aims at learning only a small number of parameters by means of input composition (Pfeiffer et al. 2023; Zhu et al. 2023b) while keeping the large model fixed. CoOp (Zhou et al. 2022b) first introduces soft prompts in VLMs, demonstrating that suitable text prompts can enhance image recognition performance. CoCoOp (Zhou et al. 2022a) extends CoOp by integrating lightweight neural networks to dynamically generate prompts for individual images to deal with the overfitting problem of prompts. VPT (Jia et al. 2022) achieves impressive results using a few visual prompts in transformer models. Furthermore, MaPLe (Khattak et al. 2023) combines both text and visual prompts in CLIP to improve the alignment between text and image representations.
To exploit the effectiveness of prompt tuning for UDA, we introduce a two-branch training paradigm consisting of base and alignment branches. The base branch leverages prompt tuning to enhance the discriminability of the CLIP model. For the alignment branch, we design an image-guided feature tuning to mitigate domain discrepancy.

Preliminaries

Unsupervised Domain Adaptation

UDA focuses on improving the model's generalization performance with labeled data from the source domain and unlabeled data from the target domain. Formally, we are given a labeled dataset D_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s} of the source domain and an unlabeled dataset D_t = \{x_j^t\}_{j=1}^{n_t}, where n_s and n_t denote the number of samples in the source and target domains, respectively. Note that the data of the two domains are sampled from two different distributions, and we assume that the two domains share the same label space. We denote the input space as X and the label set as Y. There is a mapping M: \{X\} \to Y from images to labels. In this work, we incorporate prompts V into the input, thus the mapping can be rephrased as M: \{X, V\} \to Y from images and prompts to labels. Our goal is to mitigate the issue of domain discrepancy between D_s and D_t, and to learn a generalized prompt P that can facilitate the transfer of knowledge from the source domain to the target domain.

Revisiting Prompt Learning

The Contrastive Language-Image Pre-training (CLIP) model consists of an image encoder and a text encoder, which encode images and the corresponding natural language descriptions, respectively.

Zero-shot inference. The pre-trained CLIP model is adapted to downstream tasks with hand-crafted prompts, rather than by fine-tuning the model. The text is typically manually designed as "a photo of a [CLASS]" ([CLASS] is the class token). The image-text matching score is computed using the cosine similarity \langle w_i, z \rangle between the image representation z and the text representation w_i corresponding to the i-th class.
The image representation is derived from the image encoder with an input image, while the text representation w_i is extracted from the text encoder using the prompt description associated with the i-th class. The probability of the image belonging to the i-th class can be formulated as:

p(y = i \mid x) = \frac{\exp(\langle w_i, z \rangle / \tau)}{\sum_{j=1}^{K} \exp(\langle w_j, z \rangle / \tau)},   (1)

where \tau denotes the temperature parameter, K denotes the number of classes and \langle \cdot, \cdot \rangle denotes the cosine similarity.

Text prompt tuning. It avoids manual prompt engineering and strengthens the transferring ability of CLIP. CoOp (Zhou et al. 2022b) introduces a set of M continuous learnable context vectors v = [v_1, v_2, ..., v_M]; the text prompt of the i-th class t_i is then defined as t_i = [v, c_i], where c_i is the fixed input token embedding. With a transformer-based architecture, the learnable context vectors can be extended to deeper transformer layers of the text encoder, thus the input of each layer can be rephrased as [v_j, c_j]_{j=1}^{J}, where J is the number of transformer layers in the text encoder and [\cdot, \cdot] refers to the concatenation operation.

Visual prompt tuning. It adopts a similar paradigm to text prompt tuning, where additional context vectors that are fed into each layer of the image encoder are automatically learned. For a transformer-based image encoder, VPT (Jia et al. 2022) inserts a collection of prompts \tilde{v} between a sequence of patch embeddings e and the learnable class token c, which can be designed as [\tilde{v}_j, e_j, c_j]_{j=1}^{J}.

Multi-modal prompt tuning. The text prompt v and visual prompt \tilde{v} are combined in CLIP. For instance, MaPLe (Khattak et al. 2023) tunes the vision and language branches of CLIP together by sharing prompts across both modalities.

Method

Inspired by the observations in the previous section, we attempt to design an efficient yet effective prompt tuning method for UDA.
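Before detailing the method, the zero-shot scoring rule of Eq. (1) above can be sketched in a few lines of numpy. The function name and the toy feature sizes are our own assumptions, and CLIP's actual encoders are omitted; only the temperature-scaled softmax over cosine similarities is shown:

```python
import numpy as np

def zero_shot_probs(z, W, tau=0.01):
    """Eq. (1): softmax over cosine similarities between an image feature z
    and K class text features W (one row per class), with temperature tau."""
    z = z / np.linalg.norm(z)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    logits = W @ z / tau
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 64))            # 5 classes, feature dim 64 (toy sizes)
z = W[2] + 0.05 * rng.standard_normal(64)   # image feature close to class 2
p = zero_shot_probs(z, W)
assert np.isclose(p.sum(), 1.0) and p.argmax() == 2
```

The predicted class is the argmax of these probabilities; in PDA this same rule also supplies pseudo labels and confidence scores for the target domain, as described later.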
To enhance the transferability of the prompt, we propose a Prompt-based Distribution Alignment (PDA) method, whose framework is illustrated in Figure 2. We introduce our PDA method as follows. Prompting for Base Branch Prompt design. We mainly adopt the paradigm of multimodal prompt. For the early layers of the image encoder, a text prompt is employed to generate a visual prompt by a projection layer. This means that text prompts are employed to guide the encoding process of images, enabling the images to possess information in the feature space that is relevant to the given text, therefore achieving alignment of images with pertinent textual information. For the later layers of the image encoder, each layer utilizes an independent prompt. This design allows each layer to independently capture distinct visual and semantic features of the image, enabling better image-text interaction and capturing different visual and semantic features. Loss function. Contrastive loss function is then employed to align the image and text representations, which can be formulated as: Lx = − X i ys i log exp(⟨ˆwi, ˆzs⟩/τ) PK j=1 exp(⟨ˆwj, ˆzs⟩/τ) , (2) where ys denotes the one-hot ground-truth of source domain data, K is the number of classes, wi and ˆzs denote the i-th class of final text representation and final image representations of the source domain with prompt tuning, respectively. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 731 + 𝑓𝑓𝑝𝑝𝑝𝑝𝑝𝑝 𝑓𝑓𝑝𝑝𝑝𝑝𝑝𝑝 𝑓𝑓𝑝𝑝𝑝𝑝𝑝𝑝 Source Feature Bank 𝑭𝑭𝒔𝒔𝒔𝒔 Target Feature Bank 𝑭𝑭𝒕𝒕𝒕𝒕 Input Source/Target Image Feature 𝑭𝑭𝒗𝒗 dot attention Q K V Q K V 𝑓𝑓𝑝𝑝ost 𝑓𝑓𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝 add & norm add & norm weight sharing weight sharing weight sharing 𝛽𝛽1 𝛽𝛽2 Enhanced Image Feature dot attention Architecture of IFT Module A photo of a [flower] A photo of a [TV] …… Text Encoder Image Encode … 𝐏𝐏𝐏𝐏𝐞𝐞𝐝𝐝𝐝𝐝𝐝𝐝𝐝𝐝𝐝𝐝𝐝𝐝𝐝𝐝𝒃𝒃𝒃𝒃𝒃𝒃𝒃𝒃 Cosine Similarity IFT Module Source Feature Bank Target Feature Bank … 𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝐏𝒂𝒂𝒍𝒍𝒍𝒍𝒍𝒍𝒍𝒍 Eq. (2) + Eq. 
Figure 2: Overview of the proposed Prompt-based Distribution Alignment (PDA) method. The snow icon denotes frozen parameters, and the fire icon denotes learnable parameters. From left to right, we respectively show the detailed framework of PDA and the architecture of the IFT module. We mainly adopt multi-modal prompt tuning in our PDA method. Additionally, the IFT module makes the visual features attend to the source/target-domain feature banks for domain alignment.

To further exploit data of the target domain, we use pseudo labels to train on these unlabeled data, following Ge et al. (2022). The pseudo labels are generated from the predictions of the CLIP model. In order to enhance the reliability of these pseudo labels, we set a fixed threshold value \tau. If the maximum probability \tau_p predicted by CLIP for a given image is lower than this threshold, the pseudo label is discarded. Again, we adopt the contrastive loss function:

L_u = -\mathbb{I}(\tau_p \geq \tau) \sum_i \hat{y}_i^t \log \frac{\exp(\langle \hat{w}_i, \hat{z}^t \rangle / \tau)}{\sum_{j=1}^{K} \exp(\langle \hat{w}_j, \hat{z}^t \rangle / \tau)}, (3)

where \mathbb{I}(\cdot) is an indicator function, \hat{y}^t denotes the one-hot pseudo label of target-domain data and \hat{z}^t denotes the final image representation of the target domain with prompt tuning.

Pipeline of Alignment Branch

For the alignment branch, we construct feature banks for both the source and target domains and propose image-guided feature tuning (IFT) to make the input attend to the feature banks to achieve domain alignment.

Constructing feature banks. With access to data from both the source and target domains, we can obtain text features and image features from both domains. Based on the strong zero-shot ability of CLIP, we can construct robust and accurate feature banks.
Firstly, we produce confidence scores (i.e., maximum probabilities) for images in the source domain from the predictions of zero-shot CLIP. Similarly, we generate a confidence score and a corresponding pseudo label for each image in the target domain; specifically, the index of the maximum confidence score is the pseudo label of the image. We select the visual features of the images with top-C confidence scores in each class for the source and target domains, and construct a K-way C-shot source-domain feature bank and target-domain feature bank, where K denotes the number of classes and C denotes the number of samples in each class. Then we obtain the centroid features of each class as the final source-domain feature bank z_{sc} and target-domain feature bank z_{tc}, respectively.

Image-guided feature tuning (IFT). IFT leverages the feature banks to guide images to obtain self-enhanced and cross-domain features, as shown in Figure 2 (right). We first apply a weight-shared projector layer f_{pre}, i.e., a three-layer multi-layer perceptron, to transform the image feature \hat{z}, source-domain feature bank z_{sc}, and target-domain feature bank z_{tc} into query, key and value, which can be formulated as:

Q = f_{pre}(\hat{z}), \quad K_{sc}, V_{sc} = f_{pre}(z_{sc}), \quad K_{tc}, V_{tc} = f_{pre}(z_{tc}). (4)

We make the image feature attend to the source-domain and target-domain feature banks, resulting in augmented image features. These features are then transformed by another weight-shared projector f_{post}. The whole process with attention can be formulated as:

z_{sa} = f_{post}\!\left(\mathrm{softmax}\!\left(\frac{Q K_{sc}^{\top}}{\epsilon}\right) V_{sc}\right), \quad z_{ta} = f_{post}\!\left(\mathrm{softmax}\!\left(\frac{Q K_{tc}^{\top}}{\epsilon}\right) V_{tc}\right), (5)

where \epsilon denotes the scale value and \top denotes the transpose operation.
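The IFT attention of Eqs. (4)-(5) can be sketched as below. Linear maps stand in for the weight-shared three-layer MLPs f_pre and f_post, and the bank size, feature dimension, and scale ε are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def ift_attend(z_hat, bank, f_pre, f_post, eps=8.0):
    """Eqs. (4)-(5): the image feature attends to a per-class centroid
    feature bank via scaled dot-product attention."""
    Q = f_pre(z_hat)                 # query from the image feature
    K = f_pre(bank)                  # keys from the bank (weight-shared)
    V = f_pre(bank)                  # values from the bank (weight-shared)
    attn = softmax(Q @ K.T / eps)    # attention weights over class centroids
    return f_post(attn @ V)          # augmented image feature

rng = np.random.default_rng(1)
d = 8
P1, P2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
f_pre = lambda x: x @ P1             # linear stand-ins for the 3-layer MLPs
f_post = lambda x: x @ P2
z_hat = rng.normal(size=d)           # image feature
bank = rng.normal(size=(5, d))       # 5-class centroid feature bank
z_sa = ift_attend(z_hat, bank, f_pre, f_post)
assert z_sa.shape == (d,)
```

The same routine is called twice in the paper, once with the source-domain bank and once with the target-domain bank.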
Then, we combine an add and norm module with the original visual feature, which can be formulated as:

z_{vs} = \frac{z_{sa} + \hat{z}}{\|z_{sa} + \hat{z}\|_2}, \quad z_{vt} = \frac{z_{ta} + \hat{z}}{\|z_{ta} + \hat{z}\|_2}, (6)

where \|\cdot\|_2 denotes the \ell_2-norm.

Table 1: Comparisons with the prompt tuning methods on Office-Home dataset with ViT-B/16 as the backbone. Bold denotes the best scores.
Method A-C A-P A-R C-A C-P C-R P-A P-C P-R R-A R-C R-P Avg
zero-shot CLIP 67.6 89.0 89.4 82.4 89.0 89.4 82.4 67.6 89.4 82.4 67.6 89.0 82.1
linear probe CLIP 60.1 73.7 80.9 66.4 76.4 76.8 63.4 61.0 82.3 74.7 64.8 88.3 72.4
CoOp 70.0 90.8 90.9 83.2 90.9 89.2 82.0 71.8 90.5 83.8 71.5 92.0 83.9
CoCoOp 70.4 91.4 90.4 83.5 91.8 90.3 83.4 70.9 91.0 83.4 71.2 91.7 84.1
VP 66.7 89.1 89.1 81.7 89.0 89.2 81.8 67.0 89.1 81.7 66.6 89.0 81.7
VPT-shallow 69.3 90.1 90.2 83.4 91.0 90.2 82.6 70.6 90.9 83.5 69.6 91.2 83.6
VPT-deep 71.6 89.9 90.3 82.8 91.0 89.7 82.0 71.5 90.3 84.6 71.7 91.6 83.9
IVLP 71.4 91.7 90.8 83.6 90.2 89.3 82.2 72.4 90.4 84.1 72.1 92.0 84.2
MaPLe 72.2 91.6 90.3 82.6 90.9 89.8 82.4 71.6 90.1 85.1 72.0 92.1 84.2
DAPL 70.7 91.0 90.9 85.2 91.0 91.0 85.1 70.7 90.9 85.3 70.4 91.4 84.4
PDA (Ours) 73.5 91.4 91.3 86.0 91.6 91.5 86.0 73.5 91.7 86.4 73.0 92.4 85.7

Table 2: Comparisons with the prompt tuning methods on Office-31 dataset with ViT-B/16 as the backbone. Bold denotes the best scores.
Method A-D A-W D-A D-W W-A W-D Avg
zero-shot CLIP 77.7 75.8 79.0 75.8 79.0 77.7 77.5
linear probe CLIP 83.1 83.3 74.2 96.5 70.3 98.4 84.3
CoOp 88.5 88.5 82.0 96.1 82.4 99.0 89.4
CoCoOp 86.9 88.2 83.2 94.1 82.8 98.0 88.9
VP 78.5 74.8 77.9 75.5 77.8 79.7 77.4
VPT-shallow 83.5 83.8 77.5 88.6 80.9 91.2 84.2
VPT-deep 89.6 86.5 81.9 96.5 82.8 99.2 89.4
IVLP 85.7 89.2 81.9 98.4 80.3 99.2 89.1
MaPLe 86.9 88.6 83.0 97.7 82.0 99.4 89.6
DAPL 81.7 80.3 81.2 81.8 81.0 81.3 81.2
PDA (Ours) 91.2 92.1 83.5 98.1 82.5 99.8 91.2
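The add-and-norm step of Eq. (6), together with the β-weighted combination that forms the final augmented representation, can be sketched as follows (the β values and feature dimension are illustrative):

```python
import numpy as np

def combine(z_hat, z_sa, z_ta, beta1=0.5, beta2=0.5):
    """Eq. (6) plus the final combination: residual add with the original
    visual feature, l2-normalize, then a beta-weighted sum of the
    source- and target-enhanced features."""
    z_vs = (z_sa + z_hat) / np.linalg.norm(z_sa + z_hat)
    z_vt = (z_ta + z_hat) / np.linalg.norm(z_ta + z_hat)
    return beta1 * z_vs + beta2 * z_vt

rng = np.random.default_rng(3)
z_hat, z_sa, z_ta = (rng.normal(size=8) for _ in range(3))
z_aug = combine(z_hat, z_sa, z_ta)
assert z_aug.shape == (8,)
```

Both intermediate features are unit-normalized before weighting, so neither branch dominates the final representation purely through its magnitude.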
Then the final augmented image representation \hat{z} can be denoted as \beta_1 z_{vs} + \beta_2 z_{vt}.

Loss function. A contrastive loss function is then employed to align the image representations and the feature banks of the source and target domains, which can be formulated as:

L_{xa} = -\sum_i y_i^s \log \frac{\exp(\langle \hat{w}_i, h(\hat{z}^s) \rangle / \tau)}{\sum_{j=1}^{K} \exp(\langle \hat{w}_j, h(\hat{z}^s) \rangle / \tau)}, (7)

where h denotes the IFT module and h(\hat{z}^s) denotes the augmented image representations of the source domain. Similar to the base branch, we use the data of the target domain and obtain augmented image representations of the target domain h(\hat{z}^t). Then the contrastive loss function is adopted:

L_{ua} = -\mathbb{I}(\tau_p \geq \tau) \sum_i \hat{y}_i^t \log \frac{\exp(\langle \hat{w}_i, h(\hat{z}^t) \rangle / \tau)}{\sum_{j=1}^{K} \exp(\langle \hat{w}_j, h(\hat{z}^t) \rangle / \tau)}. (8)

As a result, our PDA method can be trained end-to-end using a total contrastive loss:

L = L_x + L_u + \gamma (L_{xa} + L_{ua}), (9)

where \gamma is a hyper-parameter. During the test phase, we calculate a weighted sum of the predictions from both the base and alignment branches, resulting in the final prediction of our model. These two branches are essential not only for enhancing model discriminability but also for aligning the distribution shift between source and target domains.

Experiments

In the following section, we describe the datasets, baselines, experimental setup, and results of our analysis. Here we show the essential comparisons and analysis. More details and experiments are provided in the Appendix.

Experimental Setting

Datasets. Experiments are conducted on popular benchmark datasets for unsupervised domain adaptation, namely Office-Home (Venkateswara et al. 2017), Office-31 (Saenko et al. 2010) and VisDA-2017 (Peng et al. 2018).

Baselines. For prompt tuning methods, we choose 7 baselines, i.e., CoOp (Zhou et al. 2022b), CoCoOp (Zhou et al. 2022a), VPT (Jia et al. 2022), VP (Bahng et al. 2022), IVLP (Khattak et al. 2023), MaPLe (Khattak et al. 2023) and DAPL (Ge et al. 2022). We also compare PDA with the state-of-the-art (SOTA) methods, including ResNet-based and ViT-based methods.
The ResNet-based methods are DANN (Ganin and Lempitsky 2015), JAN (Long et al. 2017), MCD (Saito et al. 2018), MDD (Zhang et al. 2019), MCC (Jin et al. 2020), SHOT (Liang, Hu, and Feng 2020) and SDAT (Rangwani et al. 2022), and the ViT-based methods are Deit (Touvron et al. 2021), CDTrans (Xu et al. 2022), SDAT, SSRT (Sun et al. 2022) and TVT (Yang et al. 2023).

Table 3: Comparisons with SOTA methods on Office-Home with ResNet50 and ViT as the backbone. Bold denotes the best scores.
Method A-C A-P A-R C-A C-P C-R P-A P-C P-R R-A R-C R-P Avg
(ResNet50 backbone)
ERM 34.9 50.0 58.0 37.4 41.9 46.2 38.5 31.2 60.4 53.9 41.2 59.9 46.1
DANN 45.6 59.3 70.1 47.0 58.5 60.9 46.1 43.7 68.5 63.2 51.8 76.8 57.6
JAN 45.9 61.2 68.9 50.4 59.7 61.0 45.8 43.4 70.3 63.9 52.4 76.8 58.3
MDD 54.9 73.7 77.8 60.0 71.4 71.8 61.2 53.6 78.1 72.5 60.2 82.3 68.1
SDAT 56.0 72.2 78.6 62.5 73.2 71.8 62.1 55.9 80.3 75.0 61.4 84.5 69.5
SHOT 57.1 78.1 81.5 68.0 78.2 78.1 67.4 54.9 82.2 73.3 58.8 84.3 71.8
PDA (Ours) 55.4 85.1 85.8 75.2 85.2 85.2 74.2 55.2 85.8 74.7 55.8 86.3 75.3
(ViT backbone)
TVT 74.9 86.8 89.5 82.8 87.9 88.3 79.8 71.9 90.1 85.5 74.6 90.6 83.6
SDAT 69.1 86.6 88.9 81.9 86.2 88.0 81.0 66.7 89.7 86.2 72.1 91.9 82.4
SSRT 75.2 89.0 91.1 85.1 88.3 89.9 85.0 74.2 91.2 85.7 78.6 91.8 85.4
Deit-based 61.8 79.5 84.3 75.4 78.8 81.2 72.8 55.7 84.4 78.3 59.3 86.0 74.8
CDTrans-Deit 68.8 85.0 86.9 81.5 87.1 87.3 79.6 63.3 88.2 82.0 66.0 90.6 80.5
PDA (Ours) 73.5 91.4 91.3 86.0 91.6 91.5 86.0 73.5 91.7 86.4 73.0 92.4 85.7

Table 4: Comparisons with SOTA methods on VisDA-2017 with ResNet101 and ViT as the backbone. Bold denotes the best scores.
Method plane bicycle bus car horse knife mcycl person plant sktbrd train truck Avg
(ResNet101 backbone)
ERM 55.1 53.3 61.9 59.1 80.6 17.9 79.7 31.2 81.0 26.5 73.5 8.5 52.4
DANN 81.9 77.7 82.8 44.3 81.2 29.5 65.1 28.6 51.9 54.6 82.8 7.8 57.4
MCD 87.0 60.9 83.7 64.0 88.9 79.6 84.7 76.9 88.6 40.3 83.0 25.8 71.9
MCC 88.1 80.3 80.5 71.5 90.1 93.2 85.0 71.6 89.4 73.8 85.0 36.9 78.8
SDAT 94.8 77.1 82.8 60.9 92.3 95.2 91.7 79.9 89.9 91.2 88.5 41.2 82.1
SHOT 94.3 88.5 80.1 57.3 93.1 94.9 80.7 80.3 91.5 89.1 86.3 58.2 82.9
PDA (Ours) 97.2 82.3 89.4 76.0 97.4 87.5 95.8 79.6 87.2 89.0 93.3 62.1 86.4
(ViT backbone)
TVT 92.9 85.6 77.5 60.5 93.6 98.2 89.3 76.4 93.6 92.0 91.7 55.7 83.9
SDAT 96.3 80.7 74.5 65.4 95.8 99.5 92.0 83.7 93.6 88.9 85.8 57.2 84.5
SSRT 98.9 87.6 89.1 84.8 98.3 98.7 96.3 81.1 94.8 97.9 94.5 43.1 88.8
Deit-based 98.2 73.0 82.5 62.0 97.3 63.5 96.5 29.8 68.7 86.7 96.7 23.6 73.2
CDTrans-Deit 97.1 90.5 82.4 77.5 96.6 96.1 93.6 88.6 97.9 86.9 90.3 62.8 88.4
PDA (Ours) 99.2 91.1 91.9 77.1 98.4 93.6 95.1 84.9 87.2 97.3 95.3 65.3 89.7

Experimental Setup. We adopt ResNet50 (He et al. 2016), ResNet101 and ViT-B/16 (Dosovitskiy et al. 2021) as our backbones. Following Zhou et al. (2022b), we adopt the text prompt as the prompt design for the ResNet-based backbones. Following Khattak et al. (2023), we adopt the multi-modal prompt as the prompt design for the ViT-based backbone. The parameters in the encoders of CLIP are fixed, and we train the prompt and the IFT module using the SGD optimizer for 10 epochs on the Office-Home and VisDA-2017 datasets, and for 20 epochs on the Office-31 dataset, with a batch size of 32.
For all prompt tuning methods, we initially set the learning rate to around 0.003 and decay it using a cosine annealing rule. Moreover, the context token length is set to 2 for MaPLe and our PDA method, 10 for VPT and VP, and 16 for CoOp and CoCoOp.

Comparisons with Prompt Tuning Methods

Results on Office-Home. As shown in Table 1, our PDA achieves the best performance on almost all tasks with 85.7% accuracy, and achieves an average accuracy improvement of 3.6%, 1.8%, and 1.5%, respectively, compared with zero-shot CLIP, CoOp and MaPLe. For some tasks, such as C-A and P-A, we observe improvements of around 4.0% compared with MaPLe. Furthermore, we find that multi-modal prompt tuning methods perform better than single-modal prompt tuning methods.

Results on Office-31. As shown in Table 2, our PDA method also outperforms all other prompt tuning methods. We observe that prompt tuning can significantly improve the transferability of zero-shot CLIP, as PDA outperforms zero-shot CLIP by 13.7% in average accuracy. For some tasks, such as W-D and D-W, our PDA outperforms zero-shot CLIP by 22.1% and 22.3%, respectively, indicating that the domain shift problem is well alleviated.

Comparisons with SOTA Methods

Results on Office-Home. Table 3 shows the quantitative comparison with the ResNet-based and ViT-based methods. PDA outperforms other SOTA methods with identical backbones. For instance, with ResNet50 as the backbone, PDA outperforms SHOT by 3.5% and SDAT by 5.8%, both by a large margin. With ViT as the backbone, PDA outperforms SSRT by 0.3% and TVT by 2.1%, respectively. Compared with these unimodal methods, PDA exhibits superior performance through multi-modal interaction.

Results on VisDA-2017. Table 4 shows the experimental results on the VisDA-2017 dataset. Our PDA method also achieves SOTA performance on the VisDA-2017 dataset with different backbones.
For example, PDA outperforms SHOT and SDAT by a large margin of 3.5% and 4.3%, respectively. With ViT as the backbone, PDA outperforms SSRT by 1.1% and TVT by 5.8%, respectively.

Figure 3: The t-SNE visualization for different tasks on the three datasets with zero-shot CLIP, MaPLe and our PDA method: a) task A→C on the Office-Home dataset; b) task C→A on the Office-Home dataset; c) task A→W on the Office-31 dataset; d) task S→R on the VisDA-2017 dataset. Image features extracted from the source and target domain are shown in blue and red, respectively.

Table 5: Ablation on different constraint losses. The average results on the three datasets are reported. O.H. denotes the Office-Home dataset.
Lx Lxa Lu Lua | O.H. Office-31 VisDA-2017
- - - - | 82.1 77.5 88.9
✓ - - - | 84.2 89.6 83.5
✓ ✓ - - | 84.6 89.8 85.2
✓ ✓ ✓ - | 85.2 90.5 89.0
✓ ✓ ✓ ✓ | 85.7 91.2 89.7

Ablation Study

Effect of each constraint loss. Table 5 shows the experimental results of integrating different constraint losses. In most cases, each constraint loss contributes positively to the model's performance. For the Office-Home dataset, we observe a consistent performance improvement with the introduction of each constraint loss, and their combination improves the average result by 3.6%. For the Office-31 dataset, a notable improvement of 12.1% is achieved by incorporating Lx, which ensures discrimination among different classes. The combined influence of these constraint losses results in an impressive average performance improvement of 13.7%. For the VisDA-2017 dataset, we encounter a tendency towards overfitting to source-domain data when employing Lx, but this issue is mitigated by the application of the other constraint losses.

Sensitivity analysis of the pseudo label threshold τ and context token length.
Figure 4 presents the results of varying the context token length and pseudo label threshold, respectively. The results suggest that the performance of our method is generally robust to both of them. Figure 4: Sensitivity analysis of the context token length (left) and pseudo label threshold τ (right) on three datasets. Visualization As shown in Figure 3, we visualize the image features extracted from zero-shot CLIP, MaPLe and our PDA on four tasks from the three datasets via t-SNE. We can observe that our PDA method can better align the two domains. Conclusion In this paper, we demonstrate the effectiveness of vision language models and prompt tuning of VLMs for unsupervised domain adaptation. Based on this, we introduce distribution alignment into prompt tuning and propose a Prompt-based Distribution Alignment (PDA) method with a two-branch training paradigm. These two branches play a vital role not only in improving model discriminability but also in mitigating the distribution shift between the source and target domains. Extensive experiments confirm the effectiveness of our proposed method and our PDA method achieves new state-of-the-art performance for unsupervised domain adaptation. Due to the transferability of the learned prompts, we may further explore prompt alignment for unsupervised domain adaptation or other downstream tasks in future work. Acknowledgments This work was supported by the National Natural Science Foundation of China under grant number U21A20485, 62088102, and NSFC General Program under grant number 62176215. References Bahng, H.; Jahanian, A.; Sankaranarayanan, S.; and Isola, P. 2022. Exploring Visual Prompts for Adapting Large-Scale Models. arXiv:2203.17274. Bai, S.; Zhou, W.; Luan, Z.; Wang, D.; and Badong, C. 2024. Improving Cross-domain Few-shot Classification with Multilayer Perceptron.
In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. Chen, Z.; Ge, J.; Zhan, H.; Huang, S.; and Wang, D. 2021. Pareto Self-Supervised Training for Few-Shot Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13663–13672. Chen, Z.; Xiao, T.; and Kuang, K. 2022. BA-GNN: On Learning Bias-Aware Graph Neural Network. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), 3012–3024. IEEE. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. International Conference on Learning Representations. Ganin, Y.; and Lempitsky, V. 2015. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, 1180–1189. PMLR. Gao, P.; Geng, S.; Zhang, R.; Ma, T.; Fang, R.; Zhang, Y.; Li, H.; and Qiao, Y. 2021. CLIP-Adapter: Better Vision-Language Models with Feature Adapters. arXiv:2110.04544. Ge, C.; Huang, R.; Xie, M.; Lai, Z.; Song, S.; Li, S.; and Huang, G. 2022. Domain Adaptation via Prompt Learning. arXiv:2202.06687. Guo, Z.; Zhang, R.; Qiu, L.; Ma, X.; Miao, X.; He, X.; and Cui, B. 2023. CALIP: Zero-Shot Enhancement of CLIP with Parameter-Free Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1): 746–754. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Huang, Z.; Zeng, Z.; Huang, Y.; Liu, B.; Fu, D.; and Fu, J. 2021. Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning. In The IEEE Conference on Computer Vision and Pattern Recognition. Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021. 
Scaling up visual and vision-language representation learning with noisy text supervision. In International conference on machine learning, 4904–4916. PMLR. Jia, M.; Tang, L.; Chen, B.-C.; Cardie, C.; Belongie, S.; Hariharan, B.; and Lim, S.-N. 2022. Visual prompt tuning. In Proceedings of the 17th European Conference on Computer Vision, 709–727. Springer. Jin, Y.; Wang, X.; Long, M.; and Wang, J. 2020. Minimum class confusion for versatile domain adaptation. In Proceedings of the 16th European conference on Computer vision, 464–480. Springer. Khattak, M. U.; Rasheed, H.; Maaz, M.; Khan, S.; and Khan, F. S. 2023. Maple: Multi-modal prompt learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 19113–19122. Kim, W.; Son, B.; and Kim, I. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In International Conference on Machine Learning, 5583–5594. PMLR. Liang, J.; Hu, D.; and Feng, J. 2020. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In International conference on machine learning, 6028–6039. PMLR. Long, M.; Cao, Y.; Wang, J.; and Jordan, M. 2015. Learning transferable features with deep adaptation networks. In International conference on machine learning, 97–105. PMLR. Long, M.; Cao, Z.; Wang, J.; and Jordan, M. I. 2018. Conditional adversarial domain adaptation. Advances in neural information processing systems, 31. Long, M.; Zhu, H.; Wang, J.; and Jordan, M. I. 2017. Deep transfer learning with joint adaptation networks. In International conference on machine learning, 2208–2217. PMLR. Peng, X.; Usman, B.; Kaushik, N.; Wang, D.; Hoffman, J.; and Saenko, K. 2018. Visda: A synthetic-to-real benchmark for visual domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2021–2026. Pfeiffer, J.; Ruder, S.; Vulić, I.; and Ponti, E. M. 2023. Modular Deep Learning. arXiv:2302.11529.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Rangwani, H.; Aithal, S. K.; Mishra, M.; Jain, A.; and Radhakrishnan, V. B. 2022. A closer look at smoothness in domain adversarial training. In International Conference on Machine Learning, 18378–18399. PMLR. Saenko, K.; Kulis, B.; Fritz, M.; and Darrell, T. 2010. Adapting visual category models to new domains. In Proceedings of the 11th European conference on Computer vision: Part IV, 213–226. Springer. Saito, K.; Watanabe, K.; Ushiku, Y.; and Harada, T. 2018. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3723–3732. Sun, B.; Feng, J.; and Saenko, K. 2016. Return of Frustratingly Easy Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). Sun, T.; Lu, C.; Zhang, T.; and Ling, H. 2022. Safe self-refinement for transformer-based domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7191–7200. Tan, H.; and Bansal, M. 2019. LXMERT: Learning Cross-Modality Encoder Representations from Transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Tang, H.; Chen, K.; and Jia, K. 2020. Unsupervised domain adaptation via structurally regularized deep clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, 8725–8735. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; and Jégou, H. 2021. Training data-efficient image transformers & distillation through attention. In International conference on machine learning, 10347–10357. PMLR.
Venkateswara, H.; Eusebio, J.; Chakraborty, S.; and Panchanathan, S. 2017. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5018–5027. Wilson, G.; and Cook, D. J. 2020. A survey of unsupervised deep domain adaptation. ACM Transactions on Intelligent Systems and Technology (TIST), 11(5): 1–46. Xiao, T.; Chen, Z.; Guo, Z.; Zhuang, Z.; and Wang, S. 2022. Decoupled self-supervised learning for graphs. Advances in Neural Information Processing Systems, 35: 620–634. Xiao, T.; Chen, Z.; Wang, D.; and Wang, S. 2021. Learning how to propagate messages in graph neural networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 1894–1903. Xu, T.; Chen, W.; Wang, P.; Wang, F.; Li, H.; and Jin, R. 2022. Cdtrans: Cross-domain transformer for unsupervised domain adaptation. International Conference on Learning Representations. Yang, J.; Liu, J.; Xu, N.; and Huang, J. 2023. Tvt: Transferable vision transformer for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 520–530. Zhang, M.; Huang, S.; Li, W.; and Wang, D. 2022a. Tree structure-aware few-shot image classification via hierarchical aggregation. In European Conference on Computer Vision, ECCV, 453–470. Springer. Zhang, M.; Huang, S.; and Wang, D. 2022. Domain generalized few-shot image classification via meta regularization network. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3748–3752. IEEE. Zhang, M.; Wang, D.; and Gai, S. 2020. Knowledge distillation for model-agnostic meta-learning. In ECAI 2020, 1355–1362. IOS Press. Zhang, M.; Yuan, J.; He, Y.; Li, W.; Chen, Z.; and Kuang, K. 2023a. MAP: Towards Balanced Generalization of IID and OOD through Model-Agnostic Adapters. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11921–11931.
Zhang, M.; Zhuang, Z.; Wang, Z.; Wang, D.; and Li, W. 2023b. RotoGBML: Towards Out-of-Distribution Generalization for Gradient-Based Meta-Learning. arXiv preprint arXiv:2303.06679. Zhang, R.; Zhang, W.; Fang, R.; Gao, P.; Li, K.; Dai, J.; Qiao, Y.; and Li, H. 2022b. Tip-adapter: Training-free adaption of clip for few-shot classification. In European Conference on Computer Vision, 493–510. Springer. Zhang, Y.; Liu, T.; Long, M.; and Jordan, M. 2019. Bridging theory and algorithm for domain adaptation. In International conference on machine learning, 7404–7413. PMLR. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022a. Conditional prompt learning for vision-language models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 16816–16825. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022b. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348. Zhu, D.; Li, Y.; Shao, Y.; Hao, J.; Wu, F.; Kuang, K.; Xiao, J.; and Wu, C. 2023a. Generalized Universal Domain Adaptation with Generative Flow Networks. arXiv preprint arXiv:2305.04466. Zhu, D.; Li, Y.; Zhang, M.; Yuan, J.; Liu, J.; Kuang, K.; and Wu, C. 2023b. Bridging the Gap: Neural Collapse Inspired Prompt Tuning for Generalization under Class Imbalance. arXiv preprint arXiv:2306.15955.
Concept-Guided Prompt Learning for Generalization in Vision-Language Models Yi Zhang1,2, Ce Zhang3, Ke Yu2, Yushun Tang2, Zhihai He2,4* 1Harbin Institute of Technology 2Southern University of Science and Technology 3Carnegie Mellon University 4Pengcheng Laboratory zhangyi2021@mail.sustech.edu.cn, cezhang@cs.cmu.edu {yuk2020, tangys2022}@mail.sustech.edu.cn, hezh@sustech.edu.cn Abstract Contrastive Language-Image Pretraining (CLIP) model has exhibited remarkable efficacy in establishing cross-modal connections between texts and images, yielding impressive performance across a broad spectrum of downstream applications through fine-tuning. However, for generalization tasks, the current fine-tuning methods for CLIP, such as CoOp and CoCoOp, demonstrate relatively low performance on some fine-grained datasets. We recognize the underlying reason is that these previous methods only projected global features into the prompt, neglecting the various visual concepts, such as colors, shapes, and sizes, which are naturally transferable across domains and play a crucial role in generalization tasks. To address this issue, in this work, we propose Concept-Guided Prompt Learning (CPL) for vision-language models. Specifically, we leverage the well-learned knowledge of CLIP to create a visual concept cache to enable conceptguided prompting. In order to refine the text features, we further develop a projector that transforms multi-level visual features into text features. We observe that this concept-guided prompt learning approach is able to achieve enhanced consistency between visual and linguistic modalities. Extensive experimental results demonstrate that our CPL method significantly improves generalization capabilities compared to the current state-of-the-art methods. Introduction Recent studies in pre-trained Vision-Language Models (VLMs), such as CLIP (Radford et al. 2021) and ALIGN (Jia et al. 
2021), highlight a promising direction for foundation models in performing a variety of open-vocabulary tasks. By understanding various visual concepts learned from extensive image-text pairs, these models exhibit impressive capabilities across a broad spectrum of downstream tasks in a zero/few-shot manner (Radford et al. 2021; Alayrac et al. 2022; Yu et al. 2022). Although the zero-shot CLIP model demonstrates competitive performance in various visual tasks, its nature as a pre-trained model hinders its ability to generalize to unseen domains. Therefore, several works focus on fine-tuning these pre-trained VLMs for downstream tasks through designing learnable prompts derived from training instances.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Examples and performance comparisons on base-to-novel generalization and cross-dataset transfer tasks. Our proposed CPL exhibits remarkable generalization capabilities in comparison to other state-of-the-art methods.

For example, CoOp (Zhou et al. 2022b) firstly introduces learnable prompts to distill task-relevant knowledge; CoCoOp (Zhou et al. 2022a) suggests adjusting the prompt based on each individual image; and TaskRes (Yu et al. 2023) proposes to incorporate a prior-independent task residual that doesn’t undermine the well-learned knowledge of CLIP. For clarification, we provide an overview of each of the aforementioned methods in Figure 2.

Figure 2: An illustration comparing our proposed CPL approach with related baselines. We include CoOp (Zhou et al. 2022b), CoCoOp (Zhou et al. 2022a) and TaskRes (Yu et al. 2023) for comparison.

For the generalization tasks as shown in Figure 1, we observe that the current fine-tuning methods for CLIP, such as CoOp and CoCoOp, demonstrate relatively low performance on some difficult fine-grained datasets such as DTD (texture recognition), FGVC Aircraft (fine-grained classification), EuroSAT (satellite image recognition), and UCF101 (action recognition). We recognize that this issue may arise from CoOp and CoCoOp’s direct tuning of the input text prompts to the text encoder, which can potentially undermine the previously well-learned knowledge of VLMs. To address this issue, TaskRes attempts to incorporate prior-independent learnable contexts to preserve this knowledge. To further explore the potential of prompt tuning methods, we ask: is it possible to utilize the prior knowledge of VLMs during the fine-tuning process without destroying it? We also observe that the previous fine-tuning methods only considered adapting to a specific task using a supervised loss, which is not fully effective in generalizing to unseen domains. This limitation stems from the fact that they primarily consider class-specific features and overlook low-level visual concepts, such as colors, shapes, and materials. However, these low-level concepts are naturally transferable across domains and are therefore essential for enabling vision-language models to generalize. As a result, we are prompted to ask: is it possible to incorporate visual concepts into the fine-tuning process for VLMs to enhance their transfer capabilities? To address the problems above, in this work, we propose Concept-Guided Prompt Learning (CPL) for vision-language models. Specifically, we leverage the well-learned knowledge of CLIP to create a visual concept cache to enable concept-guided prompting. In order to refine the text features, we further develop a projector that projects multi-level visual features into text features.
We observe that this concept-guided prompt learning approach is able to achieve enhanced consistency between visual and linguistic modalities, leading to improved generalization capability. We conducted extensive experiments to evaluate the proposed CPL approach on base-to-novel generalization, cross-dataset transfer, and domain generalization tasks. Our comprehensive empirical results demonstrate the significantly superior performance of CPL compared to existing state-of-the-art methods.

Related Work

Vision-Language Models

In recent years, vision-language models have attracted significant attention from researchers, emerging as a novel paradigm for performing visual tasks. Specifically, large-scale VLMs have been utilized to acquire general visual representations guided by natural language supervision (Radford et al. 2021). Current studies highlight that these models, pre-trained on vast image-text pairs available online, are capable of understanding the semantics of images paired with their respective textual descriptions (Radford et al. 2021; Yu et al. 2022). Recent studies (Zhang et al. 2021; Zhou et al. 2022b) have showcased that, with a robust comprehension of open-vocabulary concepts, VLMs are able to tackle various downstream visual tasks, including image retrieval (Duan et al. 2022), depth estimation (Hu et al. 2023), visual grounding (Li et al. 2022), and visual question answering (Duan et al. 2022).

Fine-Tuning VLMs

Fine-tuning is crucial in adapting VLMs to downstream tasks (Duan et al. 2022). Among the various fine-tuning methods for VLMs, two primary approaches stand out: prompt tuning methods and adapter-based methods.

Prompt Tuning Methods. Prompt tuning methods transform prompts into continuous vector representations for end-to-end objective function optimization, distilling task-relevant information from the prior knowledge of VLMs (Zhou et al. 2022a,b).
As the foundational work in this field, CoOp (Zhou et al. 2022b) optimizes the prompt context through a continuous set of learnable vectors. Further, CoCoOp (Zhou et al. 2022a) recognizes the generalization issue not addressed by CoOp and proposes to generate prompts conditioned on each individual image. MaPLe (Khattak et al. 2023) tunes both the vision and language branches via a vision-language coupling function in order to induce cross-modal synergy.

Adapter-Based Methods. Another series of works directly transforms the features extracted by the CLIP encoders to adapt to downstream tasks. These methods are referred to as adapter-based methods. For example, CLIP-Adapter (Gao et al. 2023), one of the pioneering works, leverages an extra feature adapter to enhance traditional fine-tuning outcomes. Following CLIP-Adapter, Tip-Adapter (Zhang et al. 2022) introduces a training-free approach by constructing a key-value cache model based on few-shot samples. CCLI (Zhang et al. 2023b) proposes concept-level image representations for downstream tasks. BDC-Adapter (Zhang et al. 2023a) enhances vision-language reasoning by providing a more robust metric for measuring similarity between features. In addition, Zhu et al. (2023b) propose APE, which harnesses the prior knowledge of VLMs through a prior cache model and explores the trilateral relationships among test images, the prior cache model, and textual representations.

Visual Concept Learning

Existing literature has suggested two major approaches to visual concept learning. The first approach typically uses hand-crafted semantic concept annotations (e.g., colors, textures, and fabric) for the training images (Patterson and Hays 2012, 2016; Pham et al. 2021), which is labor-intensive in practice. To address this issue, researchers propose the second approach, which aims at designing data-driven concepts through unsupervised learning (Fei-Fei and Perona 2005; Liu, Kuipers, and Savarese 2011; Huang, Loy, and Tang 2016).
While these acquired concepts might initially appear sensible, they can often carry inherent biases, ultimately constraining their overall performance. Empowered by CLIP (Radford et al. 2021), in this work, we design an unsupervised concept mining-and-cache technique that is capable of discovering a large set of visual concepts whose semantics correspond to pre-defined text concepts.

Method

Background

CLIP. CLIP (Radford et al. 2021) stands out as a foundational model that constructs a shared embedding space through the fusion of visual and semantic understanding. This architecture is composed of two encoders: a visual encoder E_v responsible for handling an image input x, and a text encoder E_t designed to process the corresponding textual prompt t_c built as "a photo of [CLS]_c", where [CLS]_c represents the word embedding of class c. During training, CLIP learns to maximize the similarity between the image feature and the prompt embedding associated with the true label.

CoOp and CoCoOp. CoOp (Zhou et al. 2022b) replaces manual prompt construction with learned prompts. This method utilizes a collection of n adaptable context vectors {[V_1], [V_2], ..., [V_n]}, each having the same dimension as the word embeddings. These vectors are iteratively updated through gradient descent. For a specific class c, the respective prompt can be represented as t_c = {[V_1], [V_2], ..., [V_n], [CLS]_c}. CoCoOp (Zhou et al. 2022a) integrates visual features into prompt learning by utilizing a meta-network h_θ(x) that generates a meta-token π = h_θ(x). The meta-token, combined with the context vectors, forms the textual prompt t_c = {[V_1(x)], [V_2(x)], ..., [V_n(x)], [CLS]_c}, where V_n(x) = V_n + π represents the n-th text token.

Concept-Guided Prompt Learning

Overview. In Figure 3, we present an overview of our proposed CPL method. Figure 3 (a) shows the process of establishing the visual concept cache.
We first construct a list of text concepts Ψ_t that describe major visual concepts. Then we leverage CLIP's robust text-image correlation capability to discover the image feature v_j with the highest similarity score for each text concept feature c_t^i ∈ C_t. These "matched" features are stored in the visual concept cache as keys, with their corresponding text concepts ψ_i ∈ Ψ_t as values. Figure 3 (b) shows the concept-guided discovery process: we first extract the image feature v with E_v, then use it as a query to find the Top-K most similar keys by cosine distance, and finally utilize the corresponding values to generate a concept-guided prompt. Figure 3 (c) presents the training pipeline of CPL. We first extract the visual features of a given image x using the visual encoder and concatenate the outputs of different layers to obtain the multi-level features Ê(x). Next, we follow (b) to generate the concept-guided prompt and extract text features with E_t. These features are fed to the projector, a transformer decoder that maps multi-level visual features into the textual feature space, providing the multi-level visual context. Combined with the multi-level visual context and a task adapter, the refined text features act as a classifier for the final prediction.

Visual Concept Cache. In Figure 3 (a), following Zhang et al. (2023b), we start by constructing a comprehensive list Ψ_t comprising I text concepts that describe major visual concepts. This list incorporates 2000 common text descriptions of visual concepts gathered from established visual concept datasets (Zhao et al. 2019; Pham et al. 2021). The descriptions encompass words representing materials, colors, shapes, etc. Illustrations of these terms can be found in Figure 4. The dictionary is represented as Ψ_t ≜ {ψ_i}_{i=1}^{I}.
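As a concrete illustration, the cache-and-discovery steps above can be sketched in a few lines of NumPy. This is a minimal sketch rather than the released implementation: the CLIP encoders are abstracted as precomputed, L2-normalized feature matrices, and the function names (`build_concept_cache`, `discover_concepts`) are illustrative.

```python
import numpy as np

def build_concept_cache(concept_feats, image_feats, concept_words):
    """Match each text-concept feature to its most similar image feature.

    concept_feats: (I, d) L2-normalized text features of the I concept words.
    image_feats:   (ND, d) L2-normalized features of the few-shot images.
    Returns (keys, values): key = best-matching image feature per concept,
    value = the concept word itself.
    """
    sims = concept_feats @ image_feats.T      # (I, ND) cosine similarities
    best = sims.argmax(axis=1)                # best-matching image per concept
    return image_feats[best], list(concept_words)

def discover_concepts(query_feat, keys, values, k=10):
    """Return the k concept words whose cached keys are closest to the query."""
    sims = keys @ query_feat                  # (I,) cosine similarities
    top = np.argsort(-sims)[:k]
    return [values[i] for i in top]

# Toy demo with random unit vectors standing in for CLIP features.
rng = np.random.default_rng(0)
def unit(a):
    return a / np.linalg.norm(a, axis=-1, keepdims=True)

concepts = ["red", "striped", "metallic", "round"]
C = unit(rng.normal(size=(4, 8)))             # text-concept features
V = unit(rng.normal(size=(12, 8)))            # few-shot image features
keys, values = build_concept_cache(C, V, concepts)
prompt_words = discover_concepts(V[0], keys, values, k=2)
print(prompt_words)  # the concept words that would guide the prompt
```

In CPL itself the query is the CLIP image feature of the test image and K is set to 10; the retrieved words are then assembled into the concept-guided prompt fed to E_t.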
Adhering to CLIP's zero-shot setup, we begin by appending each ψ_i to a manually designed prompt ϕ = "The photo is ..." to form a concept-specific textual input {ϕ; ψ_i}. Consequently, utilizing the text encoder E_t, we generate the text concept features C_t ≜ {c_t^i}_{i=1}^{I}, where c_t^i = E_t(ϕ; ψ_i).

Figure 3: An overview of our proposed Concept-Guided Prompt Learning (CPL) method. Subfigure (a) shows the visual concept cache-establishing process. Subfigure (b) shows the concept-guided prompt discovery process. Subfigure (c) presents the training pipeline of our proposed CPL, where the projector and task adapter are learnable.

Within CPL, the visual concepts are discovered from the training images by leveraging the text concept features C_t and the CLIP model. In the scenario of N-shot, D-class few-shot learning, where there exist N labeled images within each of the D classes, the training set is denoted as T_r ≜ {x_j}_{j=1}^{ND}. Utilizing the CLIP visual encoder E_v, we generate the respective image features V ≜ {v_j}_{j=1}^{ND}, expressed as v_j = E_v(x_j). For every text concept feature c_t ∈ C_t, the similarity score S_t is calculated against all visual features in V as S_t = sim(c_t, v_j) = c_t · v_j, where both c_t and v_j are normalized. Subsequently, we identify the image feature with the highest similarity score as the key and its corresponding text concept word ψ as the associated value, stored within the visual concept cache.

Projector for Vision-to-Language Prompting. Incorporating depictions of rich visual semantics can enhance the precision of the textual content. Multi-level visual features provide richer visual semantics than high-level (class-specific) features alone. Therefore, we explore how to utilize multi-level features to refine the text features. A natural choice is a projector that transforms multi-level features into the space of text features. The Transformer decoder (Vaswani et al.
2017; Rao et al. 2022; Lu et al. 2021) can model the interactions between vision and language through its cross-attention mechanism, hence we use a Transformer decoder as the projector. Several studies (Lin et al. 2017; Wang et al. 2021) have demonstrated that in deep neural networks, the features produced by earlier layers differ in level from those produced by later layers: earlier layers typically yield low-level features, such as edges and colors, whereas later layers produce high-level, class-specific features. Inspired by Singha et al. (2023), we incorporate the multi-level visual features from E_v into the projector P. To achieve this, we apply global average pooling to reduce the spatial dimensions of each layer's feature map. This produces Ê_v^q(x) ∈ R^{C×1}, where E_v^q(x) ∈ R^{W×H×C} is the output of the q-th layer, and W, H, C are the width, height, and number of channels of the feature maps. We then form Ê(x) as the concatenation of the multi-level features from all Q encoder layers of E_v, denoted as [Ê_v^1(x); ...; Ê_v^Q(x)]. We subsequently pass Ê(x) and f_t through the projector P, which generates the multi-level visual context f_tv:

f_tv = P(f_t, Ê(x)),    (1)

where f_tv is the extracted visual context and f_t is the text feature generated by the CLIP text encoder. This design lets the text features attend to the most relevant visual cues.

Figure 4: Example text concepts collected from existing visual attribute datasets. Here we present several instances of terms that illustrate color, material, size, and shape within our dictionary of text concepts.

Task Adapter. As illustrated in Figure 3, we append a learnable matrix (i.e., a task adapter), denoted by A, to the text features f_t generated by the text encoder E_t.
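To make Equation (1) concrete, the following sketch implements one single-head cross-attention step of the projector P over globally average-pooled multi-level features. It is a toy NumPy approximation under stated assumptions: the actual projector is a full Transformer decoder, and the projection matrices here are random placeholders rather than learned weights.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_level_context(feature_maps, f_t, Wq, Wk, Wv):
    """Single-head cross-attention sketch of the projector P in Eq. (1).

    feature_maps: list of Q arrays of shape (W, H, C) from the visual encoder.
    f_t:          (d,) text feature acting as the query.
    Wq, Wk, Wv:   projection matrices (random here; learned in CPL).
    Returns f_tv, the multi-level visual context.
    """
    # Global average pooling collapses each (W, H, C) map to a C-vector.
    pooled = np.stack([fm.mean(axis=(0, 1)) for fm in feature_maps])  # (Q, C)
    q = f_t @ Wq                                # (d,) query from text
    k = pooled @ Wk                             # (Q, d) keys from layers
    v = pooled @ Wv                             # (Q, d) values from layers
    attn = softmax(k @ q / np.sqrt(len(q)))     # attention over the Q layers
    return attn @ v                             # (d,) weighted visual context

rng = np.random.default_rng(0)
Q_layers, C, d = 4, 16, 8
maps = [rng.normal(size=(7, 7, C)) for _ in range(Q_layers)]
f_t = rng.normal(size=d)
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(C, d))
Wv = rng.normal(size=(C, d))
f_tv = multi_level_context(maps, f_t, Wq, Wk, Wv)
print(f_tv.shape)  # (8,)
```

The attention weights show which encoder levels (low-level color/edge features vs. high-level class-specific features) the text query draws on, which is the intuition behind feeding all Q layers rather than only the last one.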
A is task-specific and is updated by gradient descent during training. In this way, it acts directly on the text-based classifier and explicitly decouples the inherent knowledge of the pre-trained model from the new knowledge of the target task. We can therefore preserve the prior knowledge of the concept-guided prompts while assimilating knowledge from new tasks, improving the adaptability of the proposed model.

CPL Training and Inference

CPL Training. In the training phase, we use the cross-entropy loss L_ce as the loss function of our approach; it encourages proper alignment between the visual and textual feature representations. Given an image x, we first generate its visual feature f_v = E_v(x), follow the concept-guided prompt discovery process to obtain the concept-guided prompt P_c, and extract the text features f_t = E_t(P_c). According to Equation (1), we then obtain f_tv. Finally, we calculate the refined text features f̃_t as

f̃_t = f_t + α f_tv + β A,    (2)

where α and β are learnable parameters controlling the scaling of the residual terms, so the text features are updated through a residual connection. α and β are initialized to very small values (e.g., 10^{-4}) so as to largely preserve the language priors in the original text features. The prediction probability of x for label i is

p(y = i | x) = exp(sim(f̃_t^i, f_v) / τ) / Σ_{j=1}^{K} exp(sim(f̃_t^j, f_v) / τ),    (3)

where τ is a temperature coefficient and sim(·, ·) denotes cosine similarity. The cross-entropy loss is

L_ce = − E_{(x,y)∈D_tr} Σ_{k=1}^{Y_tr} y_k log p(y_k | x),    (4)

minimized over θ_P and A, where θ_P denotes the parameter weights of the projector P, A is the learnable matrix of the task adapter, and Y_tr are the class labels of the training dataset.

CPL Inference.
During the inference phase on the test dataset D_te, with label set Y_te, we calculate the cosine similarity between a test image x_te and the prompt embeddings of all classes in Y_te, and choose the class with the highest probability:

ŷ_te = arg max_{y∈Y_te} p(y | x_te).    (5)

Experiments

Benchmark Settings

Task Settings. We follow previous work to evaluate our proposed approach on three challenging task settings:

• Generalization from Base to Novel Classes. We evaluate the generalization capability of our method in a zero-shot scenario by dividing each dataset into base and novel classes. We train our model with few-shot images on the base classes and then evaluate it on the unseen novel classes.

• Cross-Dataset Transfer. We directly evaluate our ImageNet-trained model on various other datasets. Following previous work, we train our model on all 1,000 ImageNet classes in a few-shot setting.

• Domain Generalization. We evaluate the robustness of our method on out-of-distribution (OOD) datasets: the model trained on ImageNet is evaluated directly on four ImageNet variants that encompass different types of distribution shift.

Datasets. For the base-to-novel generalization and cross-dataset transfer tasks, we follow previous work (Radford et al. 2021; Zhou et al. 2022b,a) and conduct experiments on 11 representative image classification datasets: ImageNet (Deng et al. 2009) and Caltech101 (Fei-Fei, Fergus, and Perona 2004) for generic object classification; OxfordPets (Parkhi et al. 2012), StanfordCars (Krause et al. 2013), Flowers102 (Nilsback and Zisserman 2008), Food101 (Bossard, Guillaumin, and Van Gool 2014), and FGVCAircraft (Maji et al. 2013) for fine-grained classification; SUN397 (Xiao et al. 2010) for scene recognition; UCF101 (Soomro, Zamir, and Shah 2012) for action recognition; DTD (Cimpoi et al. 2014) for texture classification; and EuroSAT (Helber et al. 2019) for satellite image recognition.
For domain generalization, we utilize ImageNet as the source dataset and four ImageNet variants as target datasets: ImageNet-A (Hendrycks et al. 2021b), ImageNet-R (Hendrycks et al. 2021a), ImageNet-V2 (Recht et al. 2019), and ImageNet-Sketch (Wang et al. 2019).

(a) Average over 11 datasets.
Method   Base   Novel  HM
CLIP     69.34  74.22  71.70
CoOp     82.69  63.22  71.66
CoCoOp   80.47  71.69  75.83
MaPLe    82.28  75.14  78.55
ProGrad  82.48  70.75  76.16
KgCoOp   80.73  73.60  77.00
Ours     84.38  78.03  81.08
         +1.69  +2.89  +2.53

(b) ImageNet.
Method   Base   Novel  HM
CLIP     72.43  68.14  70.22
CoOp     76.47  67.88  71.92
CoCoOp   75.98  70.43  73.10
MaPLe    76.66  70.54  73.47
ProGrad  77.02  66.66  71.46
KgCoOp   75.83  69.96  72.78
Ours     78.74  72.03  75.24
         +1.72  +1.49  +1.77

(c) Caltech101.
Method   Base   Novel  HM
CLIP     96.84  94.00  95.40
CoOp     98.00  89.81  93.73
CoCoOp   97.96  93.81  95.84
MaPLe    97.74  94.36  96.02
ProGrad  98.02  93.89  95.91
KgCoOp   97.72  94.39  96.03
Ours     98.35  95.13  96.71
         +0.33  +0.74  +0.68

(d) OxfordPets.
Method   Base   Novel  HM
CLIP     91.17  97.26  94.12
CoOp     93.67  95.29  94.47
CoCoOp   95.20  97.69  96.43
MaPLe    95.43  97.76  96.58
ProGrad  95.07  97.63  96.33
KgCoOp   94.65  97.76  96.18
Ours     95.86  98.21  97.02
         +0.43  +0.45  +0.44

(e) StanfordCars.
Method   Base   Novel  HM
CLIP     63.37  74.89  68.65
CoOp     78.12  60.40  68.13
CoCoOp   70.49  73.59  72.01
MaPLe    72.94  74.00  73.47
ProGrad  77.68  68.63  72.88
KgCoOp   71.76  75.04  73.36
Ours     79.31  76.65  77.96
         +1.19  +1.61  +4.49

(f) Flowers102.
Method   Base   Novel  HM
CLIP     72.08  77.80  74.83
CoOp     97.60  59.67  74.06
CoCoOp   94.87  71.75  81.71
MaPLe    95.92  72.46  82.56
ProGrad  95.54  71.87  82.03
KgCoOp   95.00  74.73  83.65
Ours     98.07  80.43  88.38
         +0.47  +2.63  +4.73

(g) Food101.
Method   Base   Novel  HM
CLIP     90.10  91.22  90.66
CoOp     88.33  82.26  85.19
CoCoOp   90.70  91.29  90.99
MaPLe    90.71  92.05  91.38
ProGrad  90.37  89.59  89.98
KgCoOp   90.05  91.70  91.09
Ours     91.92  93.87  92.88
         +1.21  +1.82  +1.50

(h) FGVCAircraft.
Method   Base   Novel  HM
CLIP     27.19  36.29  31.09
CoOp     40.44  22.30  28.75
CoCoOp   33.41  23.71  27.74
MaPLe    37.44  35.61  36.50
ProGrad  40.54  27.57  32.82
KgCoOp   36.21  33.55  34.83
Ours     42.27  38.85  40.49
         +1.73  +2.56  +3.99

(i) DTD.
Method   Base   Novel  HM
CLIP     53.24  59.90  56.37
CoOp     79.44  41.18  54.24
CoCoOp   77.01  56.00  64.85
MaPLe    80.36  59.18  68.16
ProGrad  77.35  52.35  62.45
KgCoOp   77.55  54.99  64.35
Ours     80.92  62.27  70.38
         +0.56  +2.37  +2.22

(j) SUN397.
Method   Base   Novel  HM
CLIP     69.36  75.35  72.23
CoOp     80.60  65.89  72.51
CoCoOp   79.74  76.86  78.27
MaPLe    80.82  78.70  79.75
ProGrad  81.26  74.17  77.55
KgCoOp   80.29  76.53  78.36
Ours     81.88  79.65  80.75
         +0.62  +0.95  +1.00

(k) EuroSAT.
Method   Base   Novel  HM
CLIP     56.48  64.05  60.03
CoOp     92.19  54.74  68.69
CoCoOp   87.49  60.04  71.21
MaPLe    94.07  73.23  82.35
ProGrad  90.11  60.89  72.67
KgCoOp   85.64  64.34  73.48
Ours     94.18  81.05  87.12
         +0.11  +7.82  +4.77

(l) UCF101.
Method   Base   Novel  HM
CLIP     70.53  77.50  73.85
CoOp     84.69  56.05  67.46
CoCoOp   82.33  73.45  77.64
MaPLe    83.00  78.66  80.77
ProGrad  84.33  74.94  79.35
KgCoOp   82.89  76.67  79.65
Ours     86.73  80.17  83.32
         +2.04  +1.51  +2.55

Table 1: Comparison with state-of-the-art methods on base-to-novel generalization (ViT-B/16 backbone). Our proposed method learns local concepts and demonstrates strong generalization over existing methods on 11 recognition datasets. The best results are in bold and the second-best results are underlined.

Implementation Details. For a fair comparison, we use the ViT-B/16 CLIP model for base-to-novel generalization and cross-dataset transfer, and the ResNet-50 CLIP model for domain generalization. Throughout training, both the visual and textual encoders remain frozen. We adhere to the data pre-processing protocol outlined in CLIP, which involves resizing, random cropping, etc. We train for 70 epochs on ImageNet and 50 epochs on the other datasets. We set the number of selected concepts K to 10.
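The training recipe above, frozen CLIP encoders with only lightweight modules updated, can be illustrated with a toy NumPy version of the refinement and classification steps (Equations 2–4). All shapes and values here are synthetic, and only the task adapter A is optimized; in the actual method the projector is trained as well, and α, β are initialized near 10^{-4} rather than the larger toy values used below.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 5, 16                          # number of classes, feature dimension

def norm(a):
    return a / np.linalg.norm(a, axis=-1, keepdims=True)

f_t  = norm(rng.normal(size=(D, d)))  # frozen per-class text features
f_tv = norm(rng.normal(size=(D, d)))  # projector outputs (held fixed here)
A    = np.zeros((D, d))               # task adapter: the only tensor updated
alpha, beta, tau = 0.1, 0.1, 0.07     # toy scales so A's effect is visible

def probs(f_v):
    f_ref = f_t + alpha * f_tv + beta * A   # Eq. (2): refined text features
    logits = f_ref @ f_v / tau              # similarity over temperature
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # Eq. (3): softmax probability

f_v, y = norm(rng.normal(size=d)), 0        # one toy image labeled class 0
loss0 = -np.log(probs(f_v)[y])              # Eq. (4) for a single sample
for _ in range(100):                        # plain gradient descent on A
    g = probs(f_v); g[y] -= 1.0             # dL/dlogits of cross-entropy
    A -= 0.1 * (beta / tau) * np.outer(g, f_v)
loss1 = -np.log(probs(f_v)[y])
print(loss1 < loss0)  # True: the adapter fits the task, encoders untouched
```

Because the logits are linear in A, the cross-entropy objective is convex in the adapter, which is why even this plain gradient descent loop steadily lowers the loss while the "encoders" (the fixed feature matrices) never change.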
Training uses a batch size of 256 and an initial learning rate of 10^{-3}. We employ the AdamW optimizer with a cosine annealing scheduler and train the models on a single NVIDIA RTX 3090 GPU. Code will be available at https://github.com/rambo-coder/CPL.

Generalization from Base to Novel Classes

We compare our method with six baselines: zero-shot CLIP (Radford et al. 2021), CoOp (Zhou et al. 2022b), CoCoOp (Zhou et al. 2022a), ProGrad (Zhu et al. 2023a), MaPLe (Khattak et al. 2023), and KgCoOp (Yao, Zhang, and Xu 2023). Table 1 displays base-to-novel generalization results across the 11 datasets with 16-shot samples.

Method   ImageNet  Caltech101  OxfordPets  StanfordCars  Flowers102  Food101  Aircraft  SUN397  DTD    EuroSAT  UCF101  Average
CoOp     71.51     93.70       89.14       64.51         68.71       85.30    18.47     64.15   41.92  46.39    66.55   63.88
CoCoOp   71.02     94.43       90.14       65.32         71.88       86.06    22.94     67.36   45.73  45.37    68.21   65.74
MaPLe    70.72     93.53       90.49       65.57         72.23       86.20    24.74     67.01   46.49  48.06    68.69   66.30
Ours     73.53     95.52       91.64       66.17         73.35       87.68    27.36     68.24   48.96  51.25    70.52   68.07

Table 2: Comparison of our method with existing approaches on cross-dataset evaluation (ImageNet is the source; the remaining columns are targets, with their average in the last column). Overall, our method demonstrates superior generalization capabilities with the highest average accuracy on the 10 target datasets.

Method   ImageNet  -V2    -Sketch  -A     -R
CLIP     60.33     53.27  35.44    21.65  56.00
CoOp     63.33     55.40  34.67    23.06  56.60
CoCoOp   62.81     55.72  34.48    23.32  57.74
ProGrad  62.17     54.70  34.40    23.05  56.77
PLOT     63.01     55.11  33.00    21.86  55.61
DeFo     64.00     58.41  33.18    21.68  55.84
TPT      60.74     54.70  35.09    26.67  59.11
Ours     66.92     58.67  37.64    31.05  60.08

Table 3: Comparison with other methods on robustness (%) to natural distribution shifts (ImageNet is the source; the ImageNet variants are targets). The best results are in bold and the second-best results are underlined.

Performance Evaluation on Base Classes. CoOp demonstrates remarkable performance on base classes among previous methods.
However, as argued by CoCoOp, CoOp exhibits an overfitting problem due to its excessive dependence on a single learnable prompt component. Our method surpasses CoOp by an average accuracy gain of 1.69% without sacrificing generalizability, and achieves the best base-class performance on all datasets, as illustrated in Table 1. This efficacy indicates a substantial capability to adapt to downstream tasks.

Generalization to Unseen Classes. Although CoCoOp improves on CoOp's limited generalizability by conditioning prompts on image instances, it suffers an average degradation of -2.22% on base classes. In comparison, MaPLe obtains balanced performance on both base and novel classes. Remarkably, our CPL method achieves the highest novel-class accuracy and harmonic mean (HM) on all 11 datasets, with average improvements of 2.89% and 2.53%, respectively. With visual concepts extracted from prior knowledge, CPL generalizes better to novel categories. This exceptional performance demonstrates the enhanced generalizability of CPL to unseen classes without sacrificing performance on base classes.

Method          1      2      4      8      16
CLIP            60.33  60.33  60.33  60.33  60.33
+ CGP           61.06  61.59  62.65  63.17  64.38
+ CGP + P       62.32  62.88  63.80  64.83  66.35
+ CGP + P + TA  63.02  63.37  64.36  65.31  66.92

Table 4: Effectiveness of different components in our method. CGP and P represent concept-guided prompting and the projector, respectively, and TA is the task adapter.

Cross-Dataset Transfer

Cross-dataset transfer is a much more challenging generalization task than base-to-novel generalization, since the latter transfers within a single dataset while the former transfers across different datasets, e.g., from object recognition to scene classification. We test the cross-dataset generalization ability of our method by training on the 1,000 ImageNet classes and transferring directly to the remaining 10 datasets.
The comparison results with CoOp, CoCoOp, and MaPLe are presented in Table 2. Overall, our CPL method achieves the best performance on both source and target datasets, with a target average of 68.07%, outperforming MaPLe by 1.77%. Notably, our method surpasses MaPLe by 3.2% on EuroSAT, a satellite image dataset whose fundamentals are distinct from ImageNet. This suggests that concept-guided prompting facilitates better generalization, as illustrated in Figure 1.

Domain Generalization

In Table 3, we provide the classification accuracy on the source domain and the target domains, as well as the average accuracy over the target domains (OOD average). In addition to the methods mentioned earlier, we compare our approach with PLOT (Chen et al. 2023), DeFo (Wang et al. 2023), and TPT (Shu et al. 2022). Our approach surpasses the other methods in all scenarios, indicating the remarkable robustness of our model against distribution shifts.

Ablation Studies

Contributions of major algorithm components. From Table 4, we can see that all three components contribute significantly to the enhanced performance.

Value of K   6      8      10     12     14
Accuracy     65.33  66.28  66.92  66.56  66.31
Value of I   1000   1500   2000   2500   3000
Accuracy     63.67  65.88  66.92  66.71  66.23

Table 5: Number K of concepts selected and total size I of the concept set. Experiments are conducted on 16-shot ImageNet. For I, we fix the number of concept categories and vary the number of concepts in each category.

Method             Epochs  Time     Accuracy  Gain
Zero-shot CLIP     0       0        60.33     0
Linear Probe CLIP          13m      56.13     -4.20
CoOp               200     14h 40m  62.26     +1.93
ProGrad            200     17h      63.45     +3.12
Ours               70      50min    66.92     +6.59

Table 6: Comparison on the number of training epochs and time on 16-shot ImageNet.

Among them, concept-guided prompting brings the largest performance improvement, for example, a 4.05% improvement in 16-shot accuracy.
This shows that a more accurate and specific text description leads to better classification results.

The number K of selected concepts and the size I of the text concept set. We investigate the impact of K by varying the number of selected concepts in Table 5. Our method performs best with K = 10. The results also show that the best performance is achieved at I = 2000; when the set is too large, performance decreases because different text concepts may match the same visual concept.

Comparison on the number of training epochs and time. As shown in Table 6, our proposed CPL outperforms the other methods by a large margin with only 50 minutes of training, whereas CoOp and ProGrad need more than 14 hours. This demonstrates the remarkable efficiency of our method.

Conclusion

In this work, we introduce Concept-Guided Prompt Learning (CPL) for vision-language models. By utilizing the profound knowledge embedded in CLIP, we form a visual concept cache that facilitates concept-guided prompting. To further refine the text features, we design a projector that projects multi-level visual features into corresponding textual features. Our proposed CPL method exhibits great effectiveness in diverse applications such as base-to-novel generalization, cross-dataset transfer, and domain generalization. Supported by thorough experimental analysis, we demonstrate that CPL achieves remarkable performance improvements and surpasses existing leading-edge methods by substantial margins.

References

Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; et al. 2022. Flamingo: a visual language model for few-shot learning. In Advances in Neural Information Processing Systems, volume 35, 23716–23736.

Bossard, L.; Guillaumin, M.; and Van Gool, L. 2014. Food-101 – mining discriminative components with random forests.
In European Conference on Computer Vision, 446–461.

Chen, G.; Yao, W.; Song, X.; Li, X.; Rao, Y.; and Zhang, K. 2023. PLOT: Prompt learning with optimal transport for vision-language models. In International Conference on Learning Representations.

Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; and Vedaldi, A. 2014. Describing textures in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3606–3613.

Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 248–255.

Duan, J.; Chen, L.; Tran, S.; Yang, J.; Xu, Y.; Zeng, B.; and Chilimbi, T. 2022. Multi-modal alignment using representation codebook. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15651–15660.

Fei-Fei, L.; Fergus, R.; and Perona, P. 2004. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 178.

Fei-Fei, L.; and Perona, P. 2005. A Bayesian hierarchical model for learning natural scene categories. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, volume 2, 524–531.

Gao, P.; Geng, S.; Zhang, R.; Ma, T.; Fang, R.; Zhang, Y.; Li, H.; and Qiao, Y. 2023. CLIP-Adapter: Better vision-language models with feature adapters. International Journal of Computer Vision, 1–15.

Helber, P.; Bischke, B.; Dengel, A.; and Borth, D. 2019. EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(7): 2217–2226.

Hendrycks, D.; Basart, S.; Mu, N.; Kadavath, S.; Wang, F.; Dorundo, E.; Desai, R.; Zhu, T.; Parajuli, S.; Guo, M.; et al. 2021a.
The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8340–8349.

Hendrycks, D.; Zhao, K.; Basart, S.; Steinhardt, J.; and Song, D. 2021b. Natural adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15262–15271.

Hu, X.; Zhang, C.; Zhang, Y.; Hai, B.; Yu, K.; and He, Z. 2023. Learning to adapt CLIP for few-shot monocular depth estimation. arXiv preprint arXiv:2311.01034.

Huang, C.; Loy, C. C.; and Tang, X. 2016. Unsupervised learning of discriminative attributes and visual representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5175–5184.

Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, 4904–4916.

Khattak, M. U.; Rasheed, H.; Maaz, M.; Khan, S.; and Khan, F. S. 2023. MaPLe: Multi-modal prompt learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19113–19122.

Krause, J.; Stark, M.; Deng, J.; and Fei-Fei, L. 2013. 3D object representations for fine-grained categorization. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 554–561.

Li, L. H.; Zhang, P.; Zhang, H.; Yang, J.; Li, C.; Zhong, Y.; Wang, L.; Yuan, L.; Zhang, L.; Hwang, J.-N.; et al. 2022. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10965–10975.

Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; and Belongie, S. 2017. Feature pyramid networks for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2117–2125.
Liu, J.; Kuipers, B.; and Savarese, S. 2011. Recognizing human actions by attributes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3337–3344.

Lu, Z.; He, S.; Zhu, X.; Zhang, L.; Song, Y.-Z.; and Xiang, T. 2021. Simpler is better: Few-shot semantic segmentation with classifier weight transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8741–8750.

Maji, S.; Rahtu, E.; Kannala, J.; Blaschko, M.; and Vedaldi, A. 2013. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151.

Nilsback, M.-E.; and Zisserman, A. 2008. Automated flower classification over a large number of classes. In Indian Conference on Computer Vision, Graphics & Image Processing, 722–729. IEEE.

Parkhi, O. M.; Vedaldi, A.; Zisserman, A.; and Jawahar, C. 2012. Cats and dogs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3498–3505.

Patterson, G.; and Hays, J. 2012. SUN attribute database: Discovering, annotating, and recognizing scene attributes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2751–2758.

Patterson, G.; and Hays, J. 2016. COCO attributes: Attributes for people, animals, and objects. In European Conference on Computer Vision, 85–100.

Pham, K.; Kafle, K.; Lin, Z.; Ding, Z.; Cohen, S.; Tran, Q.; and Shrivastava, A. 2021. Learning to predict visual attributes in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13018–13028.

Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR.

Rao, Y.; Zhao, W.; Chen, G.; Tang, Y.; Zhu, Z.; Huang, G.; Zhou, J.; and Lu, J. 2022. DenseCLIP: Language-guided dense prediction with context-aware prompting.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18082–18091.

Recht, B.; Roelofs, R.; Schmidt, L.; and Shankar, V. 2019. Do ImageNet classifiers generalize to ImageNet? In International Conference on Machine Learning, 5389–5400. PMLR.

Shu, M.; Nie, W.; Huang, D.-A.; Yu, Z.; Goldstein, T.; Anandkumar, A.; and Xiao, C. 2022. Test-time prompt tuning for zero-shot generalization in vision-language models. In Advances in Neural Information Processing Systems, volume 35, 14274–14289.

Singha, M.; Jha, A.; Solanki, B.; Bose, S.; and Banerjee, B. 2023. APPLeNet: Visual attention parameterized prompt learning for few-shot remote sensing image generalization using CLIP. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024–2034.

Soomro, K.; Zamir, A. R.; and Shah, M. 2012. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402.

Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, 6000–6010.

Wang, F.; Li, M.; Lin, X.; Lv, H.; Schwing, A.; and Ji, H. 2023. Learning to decompose visual features with latent textual prompts. In International Conference on Learning Representations.

Wang, H.; Ge, S.; Lipton, Z.; and Xing, E. P. 2019. Learning robust global representations by penalizing local predictive power. In Advances in Neural Information Processing Systems, volume 32, 10506–10518.

Wang, W.; Xie, E.; Li, X.; Fan, D.-P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; and Shao, L. 2021. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 568–578.

Xiao, J.; Hays, J.; Ehinger, K. A.; Oliva, A.; and Torralba, A. 2010. SUN database: Large-scale scene recognition from abbey to zoo.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3485–3492. Yao, H.; Zhang, R.; and Xu, C. 2023. Visual-language prompt tuning with knowledge-guided context optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6757–6767. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7385 Yu, J.; Wang, Z.; Vasudevan, V.; Yeung, L.; Seyedhosseini, M.; and Wu, Y. 2022. CoCa: Contrastive captioners are image-text foundation models. Transactions on Machine Learning Research. Yu, T.; Lu, Z.; Jin, X.; Chen, Z.; and Wang, X. 2023. Task residual for tuning vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10899–10909. Zhang, R.; Qiu, L.; Zhang, W.; and Zeng, Z. 2021. Vt-clip: Enhancing vision-language models with visual-guided texts. arXiv preprint arXiv:2112.02399. Zhang, R.; Zhang, W.; Fang, R.; Gao, P.; Li, K.; Dai, J.; Qiao, Y.; and Li, H. 2022. Tip-adapter: Training-free adaption of clip for few-shot classification. In European Conference on Computer Vision, 493–510. Springer. Zhang, Y.; Zhang, C.; Liao, Z.; Tang, Y.; and He, Z. 2023a. BDC-Adapter: Brownian distance covariance for better vision-language reasoning. In British Machine Vision Conference. Zhang, Y.; Zhang, C.; Tang, Y.; and He, Z. 2023b. Cross-Modal Concept Learning and Inference for VisionLanguage Models. arXiv preprint arXiv:2307.15460. Zhao, B.; Fu, Y.; Liang, R.; Wu, J.; Wang, Y.; and Wang, Y. 2019. A large-scale attribute dataset for zero-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 398–407. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022a. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16816–16825. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022b. 
Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348. Zhu, B.; Niu, Y.; Han, Y.; Wu, Y.; and Zhang, H. 2023a. Prompt-aligned gradient for prompt tuning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15659–15669. Zhu, X.; Zhang, R.; He, B.; Zhou, A.; Wang, D.; Zhao, B.; and Gao, P. 2023b. Not all features matter: Enhancing fewshot clip with adaptive prior refinement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2605–2615. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7386
ISP-Teacher: Image Signal Process with Disentanglement Regularization for Unsupervised Domain Adaptive Dark Object Detection

Yin Zhang*, Yongqiang Zhang*, Zian Zhang, Man Zhang, Rui Tian, Mingli Ding
School of Instrument Science and Engineering, Harbin Institute of Technology
{yin.zhang.hit, yongqiang.zhang.hit, sieann.hit, man.zhang.hit, rui.tian.hit, mingli.ding.hit}@gmail.com

Abstract
Object detection in dark conditions has always been a great challenge due to the complex formation process of low-light images. Currently, the mainstream methods usually adopt domain adaptation with a Teacher-Student architecture to solve the dark object detection problem, and they imitate dark conditions by applying non-learnable data augmentation strategies to the annotated source daytime images. These methods, however, neglect to model the intrinsic imaging process, i.e. image signal processing (ISP), which is important for camera sensors to generate low-light images. To solve the above problems, in this paper we propose a novel method named ISP-Teacher for dark object detection by exploring the Teacher-Student architecture from a new perspective (i.e. self-supervised learning based ISP degradation). Specifically, we first design a day-to-night transformation module that is consistent with the ISP pipeline of camera sensors (ISP-DTM) to make the augmented images look more in line with natural low-light images captured by cameras, and the ISP-related parameters are learned in a self-supervised manner. Moreover, to avoid the conflict between the ISP degradation and detection tasks in a shared encoder, we propose a disentanglement regularization (DR) that minimizes the absolute value of the cosine similarity to disentangle the two tasks and push the two gradient vectors to be as orthogonal as possible. Extensive experiments conducted on two benchmarks show the effectiveness of our method in dark object detection.
In particular, ISP-Teacher achieves an improvement of +2.4% AP and +3.3% AP over the SOTA method on the BDD100k and SHIFT datasets, respectively. The code can be found at https://github.com/zhangyin1996/ISP-Teacher.

*These authors contributed equally. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction
Object detection has achieved remarkable success and is widely used in various fields such as security monitoring and autonomous driving. However, object detection models trained on high-quality daytime images often perform poorly on low-light images, because images taken under dark conditions suffer from varied lighting and undesirable noise (Cui et al. 2022a). Furthermore, annotating low-light images is difficult, so it is impossible to obtain annotation information for low-light images of the same quality as for daytime images.

Figure 1: Results of some Teacher-Student architecture UDA methods on BDD100k. We found that AT and TDD obtain worse results on day-to-night conditions, even lower than the baseline detector Faster-RCNN (41.1% AP). Our proposed method outperforms the other counterparts by a large margin and is always higher than the SOTA method (Kennerley et al. 2023) at any iteration.

At present, dark object detection is still an urgent problem to be solved. A simple way to address it is to first perform dark enhancement on low-light images and then send them to an off-the-shelf detector for object classification and regression. Unfortunately, the enhanced images are visually comfortable for humans but do not benefit high-level machine-vision tasks (Cui et al. 2022b, 2021). To this end, unsupervised domain adaptation (UDA) has been proposed to address this problem. Recently, the Teacher-Student architecture (Sohn et al. 2020) has attracted much attention in semi-supervised object detection (Wang et al. 2023; Mi et al. 2022; Liu et al.
2021) and has also achieved excellent results in domain adaptive object detection (Li et al. 2022; He et al. 2022; Kennerley et al. 2023). However, as shown in Figure 1, we found that the best Teacher-Student UDA methods such as AT (Li et al. 2022) and TDD (He et al. 2022) achieve good results on regular domain adaptive datasets (e.g. Cityscapes to Foggy Cityscapes) but perform poorly under day-to-night conditions. They even fall below the baseline detector (i.e. Faster-RCNN) that is trained on daytime images and directly applied to nighttime images (41.1% AP).

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)

We think there are some special difficulties for object detection in dark conditions: i) low-light images from camera sensors suffer from imbalanced noise, exposure, lighting, blur, etc., which are not found in daytime images, so training the student network with these daytime images inevitably produces domain bias; ii) most Teacher-Student based methods address the domain bias problem by optimizing the framework (He et al. 2022), selecting useful ground-truth label information (Mi et al. 2022), or dynamically revising the score threshold of pseudo-bboxes (Wang et al. 2023). They usually imitate dark conditions by applying traditional non-learnable data augmentation strategies to the available annotated source daytime images. However, these methods neglect to model the intrinsic imaging process, i.e. image signal processing (ISP), which is important for camera sensors to generate low-light images. In this paper, we solve the above problems by exploring the Teacher-Student architecture from a novel perspective of self-supervised learning based ISP degradation for dark object detection. More specifically, we study how to use self-supervised learning to capture the intrinsic visual information that is not affected by lighting changes, which addresses the domain bias of the student network.
First, we draw inspiration from the image signal processing (ISP) pipeline, a crucial component in cameras that transforms RAW data into RGB images for human visualization (Yu et al. 2021; Cui et al. 2021). We replace traditional non-learnable data augmentation with self-supervised learning based ISP degradation, where a day-to-night transformation module consistent with the ISP pipeline of camera sensors (ISP-DTM) is proposed to obtain low-light images from daytime images. Then, an Encoder-Decoder structure encodes the pair of daytime and nighttime images and decodes them into parameters such as gamma and light intensity in a self-supervised manner. Thus, the intrinsic visual information can be learned under the supervision of an L1 loss. However, jointly training the self-supervised learning based ISP degradation and the object detection task in a shared encoder may cause an over-entanglement problem (i.e. gradient conflict). We found that these two tasks can have a negative cosine similarity, which hurts the final performance. To this end, we propose a disentanglement regularization that minimizes the cosine similarity between the gradients of the self-supervised ISP degradation and object detection tasks while maximizing the cosine similarity within the same task. This simple implementation pushes the two gradient vectors to be as orthogonal as possible, so that the two tasks do not interfere with each other.

To sum up, the contributions of this paper are as follows:
• A novel dark object detection method named ISP-Teacher is proposed from a new perspective that explores self-supervised learning based ISP degradation in a Teacher-Student architecture, which can adapt to challenging low-light conditions in the real world.
• We design a day-to-night transformation module (ISP-DTM) inspired by the image signal processing pipeline of camera sensors to generate dark images from daytime images; the obtained dark images match natural low-light images captured by cameras, which addresses the domain bias of the student network.
• Moreover, a disentanglement regularization is imposed by minimizing the cosine similarity between the gradients of two different tasks (i.e. self-supervised learning based ISP degradation and object detection) while maximizing the cosine similarity within the same task. This simple implementation decouples the two tasks in a shared encoder.
• Extensive experiments conducted on the BDD100k and SHIFT datasets show the effectiveness of our proposed method. In particular, ISP-Teacher sets the new best performance on BDD100k and SHIFT, improving AP over the state-of-the-art method by +2.4% and +3.3%, respectively.

Related Work
Object Detection in Dark Conditions
To tackle the problem of object detection in low-light conditions, a direct way is to use low-light enhancement methods (Guo et al. 2020; Jin et al. 2023; Wu et al. 2023) to process the dark images and then send the enhanced images to mainstream object detection methods (Ren et al. 2015; Redmon and Farhadi 2018; Carion et al. 2020) for inference. However, the detection performance of these methods is unsatisfactory on some natural dark images. As a result, end-to-end methods that jointly train the low-light enhancement and object detection tasks have emerged. For example, IA-YOLO (Liu et al. 2022) designs a filter module with learnable parameters trained jointly with YOLOv3 in an end-to-end fashion to balance the tasks of image enhancement and object detection. MAET (Cui et al. 2021) introduces a multi-task auto-encoding transformation model to decode the low-light degrading transformation by considering noise and the ISP pipeline in cameras.
The main difference between MAET and our work is that we regard the ISP degradation as a self-supervised learning task for Teacher-Student domain adaptive object detection. Furthermore, although 2PCNet (Kennerley et al. 2023) is a nighttime domain adaptive object detection method, it proposes a non-learnable data augmentation, whereas our method is self-supervised and accounts for the principles of the camera sensor in ISP-DTM.

Disentanglement Regularization
Multi-task networks usually contain an encoder and several task-specific decoders. However, these approaches face an optimization problem that sometimes leads to worse performance than training each task independently. Scholars generally believe the main reason for this phenomenon is gradient conflict, and several methods have been proposed to solve it. For instance, (Yu et al. 2020) alters the gradients by projecting the gradient of one task onto the normal plane of the gradient of the other task when their cosine similarity is negative. (Suteu and Guo 2019) finds that nearly orthogonal gradients do not interfere with each other and proposes regularizing the angle between gradients to solve the negative transfer problem.

Figure 2: The architecture of our ISP-Teacher. Our pipeline is based on the Teacher-Student architecture; the green area (left side) is the proposed self-supervised learning based ISP degradation and the blue area (right side) illustrates the disentanglement regularization.
The above methods focus on classification and regression tasks, and our work is inspired by recent research (Cui et al. 2021) that minimizes the absolute value of the cosine similarity to disentangle the object detection and degrading transformation tasks. Different from (Cui et al. 2021), we design a simple disentanglement regularization to decouple our self-supervised learning based ISP degradation and detection tasks in the Teacher-Student architecture by minimizing the cosine similarity between different tasks while maximizing the cosine similarity within the same task.

Proposed Method
Overview of ISP-Teacher
Let $D_{day} = \{X_l, Y_l\}$ denote the daytime dataset, which contains daytime images $X_l$ with labels $Y_l$ in the source domain. $D_{night} = \{X_u\}$ denotes the nighttime dataset, which contains only nighttime images $X_u$ without labels in the target domain. Subscripts $l$ and $u$ indicate labeled and unlabeled data, respectively. As shown in Figure 2, our ISP-Teacher consists of a student network and a teacher network. Similar to prior works (Kennerley et al. 2023; Liu et al. 2021), both the student and the teacher follow the Faster-RCNN (Ren et al. 2015) structure, and the detection loss $L_{det}$ is as follows:

$$L_{det} = L_{sup} + \lambda L_{unsup} \quad (1)$$

where $L_{sup}$ and $L_{unsup}$ denote the supervised and unsupervised learning losses, respectively. During training, the teacher network generates pseudo-labels $Y_p$ to train the student, while the student updates the teacher network with an exponential moving average (EMA). First, the student network is burned up on daytime images (source domain) in a supervised manner, with the supervised loss formulated as:

$$L_{sup} = \frac{1}{N_l} \sum_{i=1}^{N_l} \left[ L_{cls}(X_l^i, Y_l^i) + L_{reg}(X_l^i, Y_l^i) \right] \quad (2)$$

where $L_{cls}$ is the classification loss of the RPN and ROI head in Faster-RCNN and $L_{reg}$ is the Smooth L1 loss for bounding box regression. After the burn-up stage, all the weights of the student are transferred to the teacher.
The teacher network takes only nighttime images (target domain) as input and produces pseudo-labels for the student with an unsupervised loss:

$$L_{unsup} = \frac{1}{N_u} \sum_{i=1}^{N_u} L_{cls}(X_u^i, Y_p^i) \quad (3)$$

where $Y_p$ denotes the pseudo-labels. Note that the unsupervised loss is applied only to classification and not to bounding box regression. Furthermore, there are two components in the proposed ISP-Teacher. The first is the self-supervised learning based ISP degradation (green area on the left of Figure 2), which captures the intrinsic visual information that is not affected by lighting changes. The second is the disentanglement regularization (DR, blue area on the right of Figure 2), which disentangles dark object detection and the self-supervised ISP degradation to mitigate their mutual interference. In the next subsections, we illustrate the proposed self-supervised learning based ISP degradation and DR in detail.

Self-supervised Learning Based ISP Degradation
Self-supervised learning based ISP degradation contains a day-to-night transformation module consistent with the ISP pipeline of camera sensors (ISP-DTM) and a self-supervised learning strategy.

ISP-DTM. As shown in Figure 3, the ISP-DTM consists of three steps: i) the Invert Processing step, ii) the Noise Modeling step, and iii) the ISP Pipeline step. Specifically, the Invert Processing step contains (1) Invert Tone Mapping, (2) Invert Gamma Correction, (3) Color Transformation $y_{s \to c}$, and (4) Invert White Balance. Based on this step, realistic RAW-format (i.e. cRGB) images are generated, and we denote (1)-(4) together as $T_{invert}$. Then, considering the physical noise of camera sensors, we model two common in-camera noises (i.e. 'shot' and 'read' noise) in the Noise Modeling step and denote its output as $y_{nm}$. Finally, the cRGB images with shot and read noise are restored back to sRGB by the ISP Pipeline step for dark object detection.
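The mutual-update loop described above (Eqs. (1)-(3)) pairs a weighted detection loss with an EMA teacher update. A minimal sketch in plain Python, with each network reduced to a flat list of scalar parameters; the function names and list representation are illustrative assumptions, not the authors' implementation:

```python
def ema_update(teacher_params, student_params, alpha=0.9996):
    """Student-to-teacher EMA: teacher <- alpha*teacher + (1-alpha)*student,
    applied per parameter. alpha = 0.9996 follows the implementation details."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

def detection_loss(l_sup, l_unsup, lam=0.3):
    """Eq. (1): L_det = L_sup + lambda * L_unsup, with lambda = 0.3 as in the paper."""
    return l_sup + lam * l_unsup
```

Here a single scalar stands in for each weight; in practice the EMA runs over every tensor of the Faster-RCNN detector after each student step.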
As shown in the green area of Figure 3, the ISP Pipeline step contains (5) Signal Quantization, (6) White Balance, (7) Color Transformation $y_{c \to s}$, and (8) Gamma Correction, and we define (5)-(8) as $T_{ISP}$.

Figure 3: The structure of self-supervised learning based ISP degradation.

Next, we describe each process of ISP-DTM in detail:

White Balance. The human eye has color constancy, i.e. human perception of color tends to be stable under changing illumination conditions. The camera sensor does not have this characteristic, resulting in color shift, and the white balance algorithm is proposed to correct this color deviation. Specifically, the algorithm balances the channel gains of red $g_r$ and blue $g_b$ to make images appear to be lit under neutral illumination (Cui et al. 2021). The detailed process is as follows:

$$\begin{bmatrix} \hat{I}_r \\ \hat{I}_g \\ \hat{I}_b \end{bmatrix} = \begin{bmatrix} g_r & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & g_b \end{bmatrix} \cdot \begin{bmatrix} I_r \\ I_g \\ I_b \end{bmatrix} \quad (4)$$

where $I$ and $\hat{I}$ denote the image before and after white balance, respectively, and the subscripts $r, g, b$ represent the three channels of the RGB image. The channel gains $g_r$ and $g_b$ are sampled uniformly and independently from (1.9, 2.4) and (1.5, 1.9), and $1/g_r$ and $1/g_b$ are used in the invert process, following (Brooks et al. 2019).

Color Transformation.
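Since the matrix in Eq. (4) is diagonal, white balance and its inverse reduce to per-channel gains. A sketch under that reading, with pixels as (r, g, b) tuples; the function names are assumptions for illustration:

```python
import random

def sample_wb_gains():
    """Gains sampled uniformly and independently, as stated in the text."""
    return random.uniform(1.9, 2.4), random.uniform(1.5, 1.9)

def white_balance(pixel, g_r, g_b, invert=False):
    """Eq. (4): scale the red/blue channels by g_r/g_b; the invert
    (day-to-night) direction uses 1/g_r and 1/g_b (Brooks et al. 2019)."""
    r, g, b = pixel
    if invert:
        g_r, g_b = 1.0 / g_r, 1.0 / g_b
    return (g_r * r, g, g_b * b)
```

Applying the invert process and then the forward process with the same gains recovers the original pixel, which is what lets the ISP Pipeline step restore the degraded cRGB image.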
Because the data format of the standard color space (sRGB) does not match the camera's internal color space (cRGB), we use a 3 × 3 color correction matrix $T_{ccm}$ to achieve this color transformation:

$$y_{c \to s} = T_{ccm} \cdot I_{cRGB} \quad (5)$$
$$y_{s \to c} = T_{ccm}^{-1} \cdot I_{sRGB} \quad (6)$$

where $y_{c \to s}$ denotes the color transformation from the camera internal color space ($I_{cRGB}$) to the final standard color space (sRGB) and $y_{s \to c}$ denotes the invert process.

Gamma Correction. The purpose of gamma correction is to adjust the overall light and dark values of an image, where dark pixel areas change at a larger rate and light pixel areas at a smaller rate. If the original image collected by the camera sensor is not processed by gamma correction, illumination and shadow problems will adversely affect the results of dark object detection. Gamma correction controls the overall brightness of the image through two parameters:

$$I_{out} = \alpha I_{in}^{1/\gamma} \quad (7)$$

where $I_{in}$ and $I_{out}$ denote the input and output images, and $\alpha$ and $\gamma$ adjust the shape of the gamma correction curve. When $\gamma$ is less than 1, the image is stretched toward strong illumination; when $\gamma$ is greater than 1, it is stretched toward weak illumination. In this paper, $\gamma$ is sampled from a uniform distribution $\gamma \sim U(2, 3.5)$ and $\alpha$ is set to 1. The invert process of gamma correction replaces $1/\gamma$ in Eq. 7 with $\gamma$:

$$I_{out} = \alpha I_{in}^{\gamma} \quad (8)$$

Tone Mapping. High dynamic range images of real scenes require a tone mapping operation to suit the dynamic range of camera sensors (Debevec and Malik 2023). Usually, the tone mapping process includes three steps: first calculating the average brightness of the current scene, then selecting a suitable brightness range according to the average brightness, and finally mapping the entire scene into this range to get a correct result.
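Eqs. (7) and (8) differ only in the exponent, so both directions fit in one helper operating on normalized intensities in [0, 1]; the function name and scalar-per-pixel simplification are assumptions:

```python
def gamma_correction(intensity, gamma, alpha=1.0, invert=False):
    """Forward correction (Eq. (7)) raises the normalized intensity to 1/gamma;
    the invert process (Eq. (8)) uses exponent gamma instead, darkening the image
    when gamma > 1."""
    exponent = gamma if invert else 1.0 / gamma
    return alpha * (intensity ** exponent)
```

Running the invert process followed by the forward process with the same gamma is the identity, mirroring how the ISP Pipeline step undoes the Invert Processing step.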
Here, we simplify tone mapping to a simple 'smoothstep' curve:

$$F_{tm}(x) = 3x^2 - 2x^3 \quad (9)$$

and its invert process is:

$$F_{tm}^{-1}(y) = \frac{1}{2} - \sin\!\left(\frac{\sin^{-1}(1 - 2y)}{3}\right) \quad (10)$$

Noise Modeling. Camera sensor noise primarily comes from two sources: 'shot' noise, which causes fluctuations in the gray values of images, and 'read' noise, generated by the readout electronics of cameras. Mathematically, shot noise is a Poisson random variable whose mean is the light intensity (i.e. parameter $k$ in Eq. 11 and Eq. 14), and read noise is a Gaussian random variable with zero mean and fixed variance (Brooks et al. 2019). We model both of them as $x_{noise}$:

$$x_{noise} \sim N(\mu = 0,\ \sigma^2 = k \cdot I \cdot \lambda_{shot} + \lambda_{read}) \quad (11)$$

where $I$ is the output image of the Invert Processing step, and $\lambda_{shot}$ and $\lambda_{read}$ are the digital and analog gains of camera sensors, which can be sampled from the joint distribution of shot/read noise parameter pairs in RAW images (Brooks et al. 2019). The details of the sampling process are as follows:

$$\log \lambda_{shot} \sim U(a = \log(0.0001),\ b = \log(0.012)) \quad (12)$$
$$\log \lambda_{read} \sim N(\mu = 2.18 \log \lambda_{shot},\ \sigma = 0.26) \quad (13)$$

Moreover, parameter $k$ in Eq. 11 is the light intensity (between 0.01 and 1.0), and it follows a truncated Gaussian distribution (Cui et al. 2021):

$$k \sim N(\mu = 0.1,\ \sigma = 0.08) \quad (14)$$

Finally, the output $y_{nm}$ of the Noise Modeling step can be formulated as:

$$y_{nm} = k \cdot I + x_{noise} \quad (15)$$

which is then sent to the ISP Pipeline step for subsequent transformation processing.

Signal Quantization. The first process in the ISP Pipeline step is to quantize $y_{nm}$ with an analog-to-digital converter (ADC). In this paper, we simulate this process as:

$$\hat{y} \sim U\!\left(-\frac{1}{2^B},\ \frac{1}{2^B}\right) \quad (16)$$

$$y_{quant} = y_{nm} + \hat{y} \quad (17)$$

where $B$ is randomly selected from 12, 14, and 16 as in (Cui et al. 2021). Moreover, during the process of ISP-DTM, we calculate four parameters, i.e.
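The smoothstep curve of Eq. (9) and the closed-form inverse of Eq. (10), together with the noise sampling of Eqs. (11)-(13) and (15), can be sketched with the standard library (the helper names are illustrative assumptions; note that Eq. (10) is an exact inverse on [0, 1] because the inner arcsine argument stays on the principal branch there):

```python
import math
import random

def tone_map(x):
    """Eq. (9): the 'smoothstep' tone curve on [0, 1]."""
    return 3.0 * x**2 - 2.0 * x**3

def invert_tone_map(y):
    """Eq. (10): closed-form inverse of the smoothstep on [0, 1]."""
    return 0.5 - math.sin(math.asin(1.0 - 2.0 * y) / 3.0)

def sample_noise_params():
    """Eqs. (12)-(13): shot gain uniform in log space, read gain log-normal
    conditioned on it (Brooks et al. 2019)."""
    log_shot = random.uniform(math.log(0.0001), math.log(0.012))
    log_read = random.gauss(2.18 * log_shot, 0.26)
    return math.exp(log_shot), math.exp(log_read)

def noisy_dark_pixel(i, k, lam_shot, lam_read):
    """Eqs. (11) and (15): y_nm = k*I + noise, with variance k*I*lam_shot + lam_read."""
    sigma = math.sqrt(max(k * i * lam_shot + lam_read, 0.0))
    return k * i + random.gauss(0.0, sigma)
```

A quick check of the closed form: `invert_tone_map(tone_map(x))` returns `x` for any `x` in [0, 1], which is what allows the Invert Processing step to exactly undo the tone curve before noise is injected.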
light intensity $k$ in Eq. 14, $1/\gamma$ in (2) Invert Gamma Correction, and channel gains $1/g_r$ and $1/g_b$ in (4) Invert White Balance, which are used as the ground truth in the following self-supervised learning strategy. In summary, for a daytime image $I$, we obtain the low-light image $I_l$ and the four ground-truth parameters $p_i\ (i = 1, 2, 3, 4)$ by ISP-DTM, and the whole process can be expressed as:

$$I_l + p_i = T_{ISP}\left[ T_{invert}(I) + y_{nm} \right] \quad (18)$$

Self-supervised Learning Strategy. After obtaining low-light images, we compose low-light images and daytime images into image pairs. Then, we utilize an Encoder-Decoder to encode each image pair into high-level features with a weight-shared Encoder, and then decode four parameters $\tilde{p}_i\ (i = 1, 2, 3, 4)$ as the degradation predictions. The loss of the self-supervised learning strategy $L_{self}$ is an L1 loss:

$$L_{self} = \frac{1}{4} \sum_{i=1}^{4} L_1(p_i, \tilde{p}_i) \quad (19)$$

where $p$ and $\tilde{p}$ denote the ground truth and the prediction of the parameters $k$, $1/\gamma$, $1/g_r$, $1/g_b$, respectively. The weights of $k$, $1/\gamma$, $1/g_r$, $1/g_b$ are set to 5:1:1:1 in our implementation.

Disentanglement Regularization (DR)
As shown in Figure 2, the encoder in our model has two functions: i) encoding the pair of daytime and nighttime images for learning the ISP parameters, and ii) extracting features for training the detector. The task-specific decoders output two different aspects, i.e. ISP-related parameters for self-supervised learning, and bounding boxes and classes for object detection. However, this multi-task learning framework may cause the problem of conflicting gradients. To overcome this issue, we propose a regularization that disentangles the two tasks (i.e. ISP degradation and object detection) during training. The goal of our disentanglement regularization is that the gradients $g_1$ and $g_2$ of the two different tasks have the minimum cosine similarity, i.e. the angle between the two vectors tends to 90 degrees and the cosine value approaches 0, while the gradients of the same task have the maximum cosine similarity.
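Combining Eq. (19) with the 5:1:1:1 weighting stated in the text gives a weighted per-parameter L1 loss; a sketch, where the tuple order and function name are assumptions (gt/pred hold $k$, $1/\gamma$, $1/g_r$, $1/g_b$):

```python
def self_supervised_loss(gt, pred, weights=(5.0, 1.0, 1.0, 1.0)):
    """Eq. (19) with the 5:1:1:1 weighting from the implementation:
    mean of weighted L1 distances between ground-truth and predicted
    ISP degradation parameters."""
    assert len(gt) == len(pred) == len(weights) == 4
    terms = [w * abs(p - q) for w, q, p in zip(weights, gt, pred)]
    return sum(terms) / len(terms)
```

The heavier weight on $k$ reflects that light intensity dominates the day-to-night appearance change relative to the gamma and white-balance gains.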
Specifically, as shown in the bottom part of Figure 2, the red arrow $g_1$ and the blue arrow $g_2$ denote the gradient vectors of the object detection task and the self-supervised learning based ISP degradation task, respectively. For different tasks, we minimize the cosine similarity by pushing $g_1$ or $g_2$ toward the dotted-line arrow under the supervision of the DR loss $L_{DR}$. For the same task, we make the gradient vectors as coincident as possible. Mathematically, DR can be expressed as:

$$L_{DR} = \omega_1 \left| \cos(g_1, g_2) \right| + \omega_2 \left| 1 - \cos(g_1, g_1) \right| + \omega_3 \left| 1 - \cos(g_2, g_2) \right| \quad (20)$$

where $\omega_1$, $\omega_2$, and $\omega_3$ balance the three terms. In this paper, we set $\omega_1 = 5$ and $\omega_2 = \omega_3 = 0.5$.

Total Loss
The total loss function includes $L_{self}$, which makes the encoder capture the intrinsic visual information, $L_{DR}$, which pushes the gradient vectors of the two different tasks to be as orthogonal as possible, and the original detection loss $L_{det}$ ($L_{sup}$ and $L_{unsup}$) of the Teacher-Student architecture:

$$L_{total} = \beta L_{self} + L_{DR} + L_{det} \quad (21)$$

where $\beta$ is the weight of the self-supervised learning loss in Eq. 19.

Experiments
Datasets and Metrics
BDD100k (The Berkeley Deep Drive 100k) is a widely used autonomous driving dataset (Yu et al. 2018), which consists of 70k training images, 20k test images, and 10k validation images. It includes 10 common classes and covers various weather scenarios, e.g. rainy, snowy, foggy, and overcast. Following (Kennerley et al. 2023), we split the BDD100k dataset into two parts using the labels 'day' and 'night'. Specifically, daytime images and nighttime images are used as the source and target data for training, respectively, and only the nighttime images of the validation set are used for validation. After splitting, there are 36728 daytime images and 32998 nighttime images in the training set and 4707 nighttime images in the validation set. SHIFT is also an autonomous driving dataset (Sun et al. 2022), and it includes discrete shifts (e.g. urban, village and rural) and continuous shifts (e.g.
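Eq. (20) operates on flattened gradient vectors; a sketch in plain Python (helper names are assumptions, and here the same-task gradients are simply taken as given vectors, whereas in training $g_1$ and $g_2$ would be the backpropagated gradients of the shared encoder for each task):

```python
import math

def cosine(u, v):
    """Cosine similarity between two flattened gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def dr_loss(g1, g2, w1=5.0, w2=0.5, w3=0.5):
    """Eq. (20): penalize cross-task gradient similarity while rewarding
    alignment within each task; w1 = 5 and w2 = w3 = 0.5 as in the paper."""
    return (w1 * abs(cosine(g1, g2))
            + w2 * abs(1.0 - cosine(g1, g1))
            + w3 * abs(1.0 - cosine(g2, g2)))
```

The loss reaches zero exactly when the two task gradients are orthogonal, which is the decoupled state the regularization drives toward.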
daytime to night) under cloudy, rainy, and foggy weather. SHIFT has the same 6 classes as BDD100k with bounding box annotations. Similar to BDD100k, we split it into 19452 daytime images and 8497 nighttime images for training and 1200 nighttime images for validation. As for the metrics, following (Kennerley et al. 2023), we adopt AP (i.e. AP50, IoU@0.5), APS (small-sized objects), APM (medium-sized objects), and APL (large-sized objects) to evaluate our model.

Implementation Details
Following previous Teacher-Student architecture based domain adaptation methods, we use Faster-RCNN (Ren et al. 2015) with ResNet50 (He et al. 2016) as our baseline detector. SGD is used as the optimizer with a base learning rate of 0.01 and a momentum of 0.9. The loss hyperparameters are λ = 0.3 and β = 1, and the smoothing coefficient of the EMA is set to 0.9996. The batch size is 4, comprising 2 daytime images from the source domain and 2 nighttime images from the target domain, and all images are proportionally scaled to a minimum side of 600. For the burn-up stage, we train the student network in a supervised manner on the source domain for 50k and 20k iterations on the BDD100k and SHIFT datasets, respectively; the total numbers of iterations on BDD100k and SHIFT are 90k and 70k. Our method is implemented based on detectron2 (Wu et al. 2019) with 4 RTX6000 GPUs.

Table 1: Main results of our proposed method on the BDD100k dataset. We show the average precision (AP) of each class. The full class names from left to right are Pedestrian, Rider, Car, Truck, Bus, Motorcycle, Bicycle, Traffic Light and Traffic Sign.

Method | AP | Ped. | Rid. | Car | Tru. | Bus | Mot. | Bic. | T-Light | T-Sign
Source (Lower-Bound) | 41.1 | 50.0 | 28.9 | 66.6 | 47.8 | 47.5 | 32.8 | 39.5 | 41.0 | 56.5
Oracle (Upper-Bound) | 46.2 | 52.1 | 35.0 | 73.6 | 53.5 | 54.8 | 36.0 | 41.8 | 52.2 | 63.3
UMT (Deng et al. 2021) | 36.2 | 46.5 | 26.1 | 46.8 | 44.0 | 46.3 | 28.2 | 40.2 | 31.6 | 52.7
TDD (He et al. 2022) | 34.6 | 43.1 | 20.7 | 68.4 | 33.3 | 35.6 | 16.5 | 25.9 | 43.1 | 59.5
AT (Li et al. 2022) | 38.5 | 42.3 | 30.4 | 60.8 | 48.9 | 52.1 | 34.5 | 42.7 | 29.1 | 43.9
2PCNet (Kennerley et al. 2023) | 46.4 | 54.4 | 30.8 | 73.1 | 53.8 | 55.2 | 37.5 | 44.5 | 49.4 | 65.2
ISP-Teacher (Ours) | 48.8 | 57.8 | 39.4 | 72.9 | 54.6 | 55.9 | 43.8 | 48.1 | 49.6 | 66.3

Main Results
In order to verify the effectiveness of our ISP-Teacher for dark object detection, we compare our method with several SOTA methods, i.e. UMT (Deng et al. 2021), TDD (He et al. 2022), AT (Li et al. 2022), and 2PCNet (Kennerley et al. 2023). It should be emphasized that 2PCNet is an object detection method specially designed for low-light images and achieves SOTA performance on the BDD100k and SHIFT datasets. For fairness of comparison, all methods use ResNet50 (He et al. 2016) as the backbone, and the results of our experiments are shown in Table 1. In addition, we report the results of training Faster-RCNN with only daytime images and testing on nighttime images, denoted as 'Source' (Lower-Bound). On the other hand, we also show the results of training Faster-RCNN on nighttime images with ground truth and testing on nighttime images, denoted as 'Oracle' (Upper-Bound).

Experiments on BDD100k. On the BDD100k dataset, compared with other Teacher-Student architecture based domain adaptive methods for dark object detection, ISP-Teacher achieves better performance owing to the proposed self-supervised learning based ISP degradation and the disentanglement regularization strategy. As shown in Table 1, the previous Teacher-Student architecture methods achieve poor results on night scenes, even much lower than the Lower-Bound. By elaborately designing the self-supervised learning based ISP degradation and the disentanglement regularization strategy, our ISP-Teacher outperforms all the Teacher-Student architecture methods by a large margin.
Specifically, compared to the method of training only on daytime source images and testing on nighttime ('Source' in the first row), our method brings a +7.7% AP improvement (i.e. 48.8% vs. 41.1%). Furthermore, our unsupervised approach even outperforms the supervised method 'Oracle' (the second row), which trains on nighttime images with annotations, by +2.6% in terms of AP. Compared with 2PCNet (Kennerley et al. 2023), a SOTA night-specific algorithm for dark object detection, our method also obtains an impressive improvement in AP (from 46.4% to 48.8%, +2.4%) and achieves the best results in eight out of nine categories, with 'Car' only 0.2% lower.

Experiments on SHIFT. To further verify the effectiveness of our proposed method, we conduct experiments on the SHIFT dataset; the results are shown in Table 2.

Table 2: Main results of our proposed method on the SHIFT dataset. Lower-B and Upper-B denote Lower-Bound and Upper-Bound, respectively.

Method | AP | Ped. | Car | Tru. | Bus | Mot. | Bic.
Lower-B | 41.6 | 40.4 | 44.5 | 49.9 | 53.7 | 14.3 | 46.7
Upper-B | 47.0 | 49.7 | 51.5 | 56.0 | 53.6 | 19.2 | 52.4
UMT | 31.1 | 7.7 | 47.5 | 18.4 | 46.8 | 16.6 | 49.2
AT | 38.9 | 25.8 | 33.0 | 54.7 | 49.5 | 20.7 | 52.3
2PCNet | 49.1 | 51.4 | 54.6 | 54.8 | 56.6 | 23.9 | 54.2
Ours | 52.4 | 51.6 | 59.1 | 58.7 | 62.3 | 24.1 | 58.3

We can see that other Teacher-Student architecture based methods again perform worse than the Lower-Bound. ISP-Teacher achieves an improvement of +3.3% AP over the SOTA method 2PCNet, and +5.4% AP over 'Oracle'. Furthermore, our method outperforms 2PCNet in all categories.

Ablation Study
To validate the effectiveness of each component of our proposed method, we conduct ablation experiments on the BDD100k dataset. Moreover, some analyses of the hyperparameters are also presented in this section.

Effectiveness of Self-supervised Learning Based ISP Degradation.
We compare our self-supervised learning based ISP degradation (the third row of Table 3) with other methods: i) traditional non-learnable data augmentations such as random color jittering, grayscaling and Gaussian blurring (the first row of Table 3), and ii) the nighttime-specific augmentation NightAug in 2PCNet (Kennerley et al. 2023) (the second row of Table 3). As shown in Table 3, the method of traditional non-learnable data augmentation conducted on daytime images obtains poor performance (42.2% AP) in low-light conditions. The NightAug method (the second row) is a nighttime-specific data augmentation that aims to reduce the bias between daytime and nighttime images, and it brings a +3.6% AP improvement compared to the non-learnable data augmentation method (42.2% vs. 45.8%). Our proposed self-supervised learning based ISP degradation not only addresses the domain bias of the student network but also explores how to learn intrinsic visual information of dark images, which achieves a large improvement in AP (from 42.2% to 48.5%, +6.3%).

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)

NightAug | Self. | DR | AP | AP_S | AP_M | AP_L
 | | | 42.2 | 7.9 | 23.0 | 38.8
✓ | | | 45.8 | 8.6 | 25.7 | 42.2
 | ✓ | | 48.5 | 9.2 | 27.1 | 45.2
 | ✓ | ✓ | 48.8 | 9.2 | 27.2 | 45.7

Table 3: Ablation study of each component in our ISP-Teacher on the BDD100k dataset. 'NightAug' denotes the non-learnable nighttime-specific augmentation in 2PCNet. 'Self.' and 'DR' denote the self-supervised learning based ISP degradation and the disentanglement regularization.

β | AP | AP_S | AP_M | AP_L
1 | 48.8 | 9.2 | 27.2 | 45.7
2 | 48.4 | 8.9 | 26.7 | 45.7
5 | 44.9 | 7.9 | 22.9 | 39.1

Table 4: The influence of different weights β in the self-supervised learning based ISP degradation loss.

The above experiments prove the effectiveness of our proposed self-supervised learning based ISP degradation for low-light image object detection.

Effectiveness of DR.
As shown in the last row of Table 3, when adding the disentanglement regularization (DR) to our framework, the detection performance further improves from 48.5% to 48.8%. This is thanks to the disentanglement of object detection and the self-supervised learning based ISP degradation.

Analysis of the weight β of the self-supervised learning based ISP degradation loss. From Eq. 21, we add the self-supervised learning based ISP degradation loss Lself to the original detection loss Ldet. To explore the influence of the weight β of Lself, we set different values of β = 1, 2, 5 and conduct experiments on the BDD100k dataset. As shown in Table 4, we obtain the best performance of 48.8% AP when β = 1. When β = 2, the performance declines slightly, and there is a significant decrease in AP when β = 5, i.e., 44.9%, which is even lower than the non-learnable method NightAug (45.8% AP). The above experiments indicate that the detection performance is sensitive to the weight of the self-supervised learning based ISP degradation loss, and we set β = 1 by default in this paper.

Visualization Results
Visualization of the ISP-DTM Pipeline. As shown in the green area of Figure 3, we show an example of the ISP-DTM pipeline on the BDD100k dataset. First, sRGB daytime images are converted to cRGB images by the reversing process. Then, shot and read noise are added to the cRGB images, and the cRGB images are transformed back into sRGB through the ISP pipeline step for dark object detection. The augmented images look more in line with the natural low-light images captured by cameras.

Detection Results on the BDD100k Dataset. Furthermore, to further show the effectiveness of our ISP-Teacher, we also

Figure 4: Examples of detection results on the BDD100k dataset. From left to right: the general Teacher-Student architecture UDA method AT (Li et al. 2022), the SOTA method 2PCNet (Kennerley et al. 2023), our ISP-Teacher and the Ground Truth. Different colored boxes denote different classes, i.e.
red box denotes 'Car', blue box denotes 'Pedestrian', yellow box denotes 'Traffic Sign' and green box denotes 'Traffic Light'. Best seen on computer, in color and zoomed in.

present some visualization results on the BDD100k val set. As shown in Figure 4, our ISP-Teacher detects all objects accurately. However, AT (Li et al. 2022) mistakenly detects something as a traffic sign (i.e., an extra yellow box), and 2PCNet (Kennerley et al. 2023) misses a car (i.e., a red box) in the first row. Moreover, as shown in the second and third rows of Figure 4, our method also obtains satisfactory results in complex scenes while the other methods always have detection errors. For example, AT and 2PCNet miss some traffic lights and cars in the second row.

Conclusion
In this paper, we propose a novel dark object detection method named ISP-Teacher for challenging low-light scenes without annotations. To overcome the problem that mainstream Teacher-Student architecture based UDA methods perform poorly in the day-to-night condition, we design a day-to-night transformation module that is consistent with the ISP pipeline of camera sensors (ISP-DTM) to make the augmented images look more in line with the natural low-light images captured by cameras. Moreover, a self-supervised learning strategy is used to capture the intrinsic visual information of images under different light changes. To avoid the self-supervised learning based ISP degradation affecting the training process of object detection, a disentanglement regularization is introduced in our method by minimizing the cosine similarity between the gradients of different tasks while maximizing it for the same task. Experimental results on two benchmarks show that our method outperforms previous Teacher-Student architecture methods in dark scenes by a large margin. However, object detection on low-light images is still a challenging task, e.g.
the results on small objects like traffic lights need to be improved, and we plan to use a Fourier-based mix strategy to learn more robust features for the student network in the future.

Acknowledgments
This work is supported by the China Postdoctoral Science Foundation (Grant No. 259822), the National Postdoctoral Program for Innovative Talents (Grant No. BX20200108), and the National Science Foundation of China (Grant No. 62206077).

References
Brooks, T.; Mildenhall, B.; Xue, T.; Chen, J.; Sharlet, D.; and Barron, J. T. 2019. Unprocessing images for learned raw denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11036–11045.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In European Conference on Computer Vision, 213–229. Springer.
Cui, Z.; Li, K.; Gu, L.; Su, S.; Gao, P.; Jiang, Z.; Qiao, Y.; and Harada, T. 2022a. You Only Need 90K Parameters to Adapt Light: a Light Weight Transformer for Image Enhancement and Exposure Correction. In 33rd British Machine Vision Conference 2022, BMVC 2022, London, UK, November 21-24, 2022. BMVA Press.
Cui, Z.; Qi, G.-J.; Gu, L.; You, S.; Zhang, Z.; and Harada, T. 2021. Multitask AET With Orthogonal Tangent Regularity for Dark Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2553–2562.
Cui, Z.; Zhu, Y.; Gu, L.; Qi, G.-J.; Li, X.; Zhang, R.; Zhang, Z.; and Harada, T. 2022b. Exploring Resolution and Degradation Clues as Self-supervised Signal for Low Quality Object Detection. In European Conference on Computer Vision, 473–491. Springer.
Debevec, P. E.; and Malik, J. 2023. Recovering high dynamic range radiance maps from photographs. In Seminal Graphics Papers: Pushing the Boundaries, Volume 2, 643–652.
Deng, J.; Li, W.; Chen, Y.; and Duan, L. 2021.
Unbiased mean teacher for cross-domain object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4091–4101.
Guo, C.; Li, C.; Guo, J.; Loy, C. C.; Hou, J.; Kwong, S.; and Cong, R. 2020. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1780–1789.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
He, M.; Wang, Y.; Wu, J.; Wang, Y.; Li, H.; Li, B.; Gan, W.; Wu, W.; and Qiao, Y. 2022. Cross domain object detection by target-perceived dual branch distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9570–9580.
Jin, X.; Han, L.-H.; Li, Z.; Guo, C.-L.; Chai, Z.; and Li, C. 2023. DNF: Decouple and Feedback Network for Seeing in the Dark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18135–18144.
Kennerley, M.; Wang, J.-G.; Veeravalli, B.; and Tan, R. T. 2023. 2PCNet: Two-Phase Consistency Training for Day-to-Night Unsupervised Domain Adaptive Object Detection. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Li, Y.-J.; Dai, X.; Ma, C.-Y.; Liu, Y.-C.; Chen, K.; Wu, B.; He, Z.; Kitani, K.; and Vajda, P. 2022. Cross-Domain Adaptive Teacher for Object Detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Liu, W.; Ren, G.; Yu, R.; Guo, S.; Zhu, J.; and Zhang, L. 2022. Image-adaptive YOLO for object detection in adverse weather conditions. In Proceedings of the AAAI Conference on Artificial Intelligence, 1792–1800.
Liu, Y.-C.; Ma, C.-Y.; He, Z.; Kuo, C.-W.; Chen, K.; Zhang, P.; Wu, B.; Kira, Z.; and Vajda, P. 2021. Unbiased Teacher for Semi-Supervised Object Detection. In Proceedings of the International Conference on Learning Representations (ICLR).
Mi, P.; Lin, J.; Zhou, Y.; Shen, Y.; Luo, G.; Sun, X.; Cao, L.; Fu, R.; Xu, Q.; and Ji, R. 2022. Active Teacher for Semi-Supervised Object Detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Redmon, J.; and Farhadi, A. 2018. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 28.
Sohn, K.; Zhang, Z.; Li, C.-L.; Zhang, H.; Lee, C.-Y.; and Pfister, T. 2020. A simple semi-supervised learning framework for object detection. arXiv preprint arXiv:2005.04757.
Sun, T.; Segu, M.; Postels, J.; Wang, Y.; Van Gool, L.; Schiele, B.; Tombari, F.; and Yu, F. 2022. SHIFT: a synthetic driving dataset for continuous multi-task domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 21371–21382.
Suteu, M.; and Guo, Y. 2019. Regularizing deep multitask networks using orthogonal gradients. arXiv preprint arXiv:1912.06844.
Wang, X.; Yang, X.; Zhang, S.; Li, Y.; Feng, L.; Fang, S.; Lyu, C.; Chen, K.; and Zhang, W. 2023. Consistent-Teacher: Towards Reducing Inconsistent Pseudo-targets in Semi-supervised Object Detection. The IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR).
Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.-Y.; and Girshick, R. 2019. Detectron2. https://github.com/facebookresearch/detectron2.
Wu, Y.; Pan, C.; Wang, G.; Yang, Y.; Wei, J.; Li, C.; and Shen, H. T. 2023. Learning Semantic-Aware Knowledge Guidance for Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1662–1671.
Yu, F.; Xian, W.; Chen, Y.; Liu, F.; Liao, M.; Madhavan, V.; Darrell, T.; et al. 2018. Bdd100k: A diverse driving video database with scalable annotation tooling.
arXiv preprint arXiv:1805.04687, 2(5): 6.
Yu, K.; Li, Z.; Peng, Y.; Loy, C. C.; and Gu, J. 2021. ReconfigISP: Reconfigurable camera image processing pipeline. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4248–4257.
Yu, T.; Kumar, S.; Gupta, A.; Levine, S.; Hausman, K.; and Finn, C. 2020. Gradient surgery for multi-task learning. Advances in Neural Information Processing Systems, 33: 5824–5836.
ArtBank: Artistic Style Transfer with Pre-trained Diffusion Model and Implicit Style Prompt Bank
Zhanjie Zhang*, Quanwei Zhang*, Wei Xing†, Guangyuan Li, Lei Zhao†, Jiakai Sun, Zehua Lan, Junsheng Luan, Yiling Huang, Huaizhong Lin†
Intelligent Vision Lab, Zhejiang University
{cszzj,cszqw,cslgy,wxing,cszhl,csjk,zjucslzh,l.junsheng121,linhz}@zju.edu.cn, huangyiling@hotmail.com

Abstract
Artistic style transfer aims to repaint the content image with the learned artistic style. Existing artistic style transfer methods can be divided into two categories: small model-based approaches and pre-trained large-scale model-based approaches. Small model-based approaches can preserve the content structure, but fail to produce highly realistic stylized images and introduce artifacts and disharmonious patterns; pre-trained large-scale model-based approaches can generate highly realistic stylized images but struggle with preserving the content structure. To address the above issues, we propose ArtBank, a novel artistic style transfer framework, to generate highly realistic stylized images while preserving the content structure of the content images. Specifically, to sufficiently dig out the knowledge embedded in pre-trained large-scale models, an Implicit Style Prompt Bank (ISPB), a set of trainable parameter matrices, is designed to learn and store knowledge from the collection of artworks and behave as a visual prompt to guide pre-trained large-scale models to generate highly realistic stylized images while preserving content structure. Besides, to accelerate training the above ISPB, we propose a novel Spatial-Statistical-based self-Attention Module (SSAM). The qualitative and quantitative experiments demonstrate the superiority of our proposed method over state-of-the-art artistic style transfer methods. Code is available at https://github.com/Jamie-Cheung/ArtBank.
Introduction
Artistic style transfer aims to transfer the learned styles onto arbitrary content images to create a new artistic image. Existing artistic style transfer methods can be classified into small model-based methods and pre-trained large-scale model-based methods. More specifically, small model-based methods (Zhang et al. 2021; Sanakoyeu et al. 2018; Kim et al. 2019; Park et al. 2020; Wang et al. 2022; Sun et al. 2023; Yang et al. 2022; Zhang et al. 2023b; Chen et al. 2023; Zuo et al. 2023; Zhao et al. 2020; Chen et al. 2021a,b,c; Zhang et al. 2024) focus on training a well-designed forward

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
*Both authors contributed equally to this research.
†Corresponding authors.

Figure 1: Stylized examples. The 1st column shows the content image. The 2nd column shows the stylized image by our method. The other two columns show the stylized images produced by pre-trained large-scale model-based methods (e.g., InST (Zhang et al. 2023a)) and small model-based methods (e.g., CycleGAN (Zhu et al. 2017)).

network to learn style information from the collection of artworks. To train such forward networks, Zhu et al. (Zhu et al. 2017) first employed a cycle consistency loss to realize the mapping between the style domain and the content domain in the RGB pixel space. AST (Sanakoyeu et al. 2018) proposed a style-aware content loss to learn style from the collection of artworks for real-time and high-resolution artistic style transfer. GcGAN (Fu et al. 2019) designed a predefined geometric transformation to ensure that the stylized image maintains geometric consistency with the input content image. CUT (Park et al. 2020) used contrastive learning to push the stylized patch closer to its corresponding input content patch and keep a better content structure.
Based on CUT, LseSim (Zheng, Cham, and Cai 2021) introduced a more general spatially-correlative map in contrastive learning which encourages homologous structures to be closer. ITTR (Zheng et al. 2022) utilized a transformer-based architecture to capture contextual information locally and globally. While these methods are capable of learning style information from a collection of artworks and preserving the content structure, they fail to generate highly realistic stylized images and introduce disharmonious patterns and evident artifacts (e.g., 4th col in Fig. 1).

Figure 2: Stylized images generated by our proposed ArtBank. With a simple text prompt and a content image, ArtBank can generate highly realistic stylized images while preserving the structure of the original content image.

The pre-trained large-scale model-based methods can generate highly realistic images since they are trained on large amounts of data and possess large-scale neural network parameters, which opens up the possibility to generate highly realistic stylized images. Recently, some methods (Dhariwal and Nichol 2021; Huang et al. 2022; Nichol et al. 2021; Wu 2022; Ho, Jain, and Abbeel 2020; Xie et al. 2023b,a) utilized a text prompt to synthesize highly realistic artistic images based on pre-trained large-scale models. The most representative method is Stable Diffusion (SD) (Rombach et al. 2022), which uses text prompts as guidance to generate stylized images. However, these methods struggle with preserving content structure. To this end, Ge et al.
(Ge 2022) proposed to use a rich text editor to provide detailed text prompts that constrain content structure; DiffuseIT (Kwon and Ye 2023) utilized a pre-trained ViT model (Tumanyan et al. 2022) to guide the generation process of DDPM models (Ho, Jain, and Abbeel 2020) in terms of preserving content structure. Zhang et al. (Zhang et al. 2023a) proposed a novel example-guided artistic image generation framework called InST, related to artistic style transfer. Although these pre-trained large-scale model-based methods can generate highly realistic stylized images and attempt to preserve the content structure, they struggle with maintaining the structure of the original content image (e.g., 3rd col in Fig. 1).

To address these problems, we focus on how to propose a more effective method that not only generates highly realistic stylized images but also preserves the structure of the content image. Recently, the pre-trained SD has possessed massive prior knowledge to generate highly realistic images. To exploit the prior knowledge in pre-trained SD, we first design a simple text prompt template with the artist's name (e.g., a painting by Van Gogh). Then, we use CLIP (Radford et al. 2021) to encode the text prompt template and obtain a text embedding vector vt. Next, we design an Implicit Style Prompt Bank (i.e., multiple learnable parameter matrices) that can learn and store style information from different collections of artworks. Besides, we propose the Spatial-Statistical Self-Attention Module (SSAM) to project the learnable parameter matrix into the embedding tensor vm. Then, vt and vm are concatenated, obtaining the condition tensor cθ(y). With the condition tensor cθ(y), our proposed ArtBank can fully use the prior knowledge in pre-trained large-scale models and the knowledge from the collection of artworks, generating highly realistic stylized images while preserving content structure (please see Fig. 2 and Fig. 1).
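The conditioning scheme described above (a CLIP text embedding v_t concatenated with an SSAM-projected style token v_m) can be sketched with stand-in encoders. All dimensions, the mean-pooling "SSAM", and the function names are our assumptions for illustration, not the real CLIP/SD interfaces:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed embedding width (CLIP ViT-L/14 text features are 768-d).
D = 768

def encode_text_prompt(n_tokens):
    """Stand-in for the CLIP text encoder: returns token embeddings v_t."""
    return rng.standard_normal((n_tokens, D))

def ssam(style_matrix):
    """Stand-in for the SSAM projection of an ISPB parameter matrix
    into a single style token v_m (here: a simple mean pool)."""
    return style_matrix.mean(axis=0, keepdims=True)

v_t = encode_text_prompt(6)                               # "a painting by Van Gogh" tokens
style_bank = {"van_gogh": rng.standard_normal((16, D))}   # one ISPB entry
v_m = ssam(style_bank["van_gogh"])                        # learned style token replacing "*"

# The placeholder embedding v_* is swapped for v_m, giving the condition c_theta(y).
c_theta_y = np.concatenate([v_t, v_m], axis=0)
print(c_theta_y.shape)  # (7, 768)
```

The resulting matrix would be fed to the diffusion model's cross-attention as the conditioning context.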
To demonstrate the effectiveness of the proposed ArtBank, we conduct extensive experiments on different collections of artworks. All the experiments show our method outperforms the state-of-the-art artistic style transfer methods, including small model-based methods and pre-trained large-scale model-based methods. To summarize, our contributions are listed as follows:
• We propose a novel framework called ArtBank, which can generate highly realistic stylized images while preserving the content structure. This is realized by the Implicit Style Prompt Bank (ISPB), a set of trainable parameter matrices, which can learn and store the style information of multiple collections of artworks and dig out the prior knowledge of pre-trained large-scale models.
• We propose the Spatial-Statistical Self-Attention Module (SSAM), which focuses on spatial and statistical aspects, to accelerate the training of the proposed ISPB.
• We have conducted extensive experiments on multiple collections of artworks and synthesized highly realistic stylized images compared to state-of-the-art methods.

Related Work
Small Model-based Methods. Small model-based methods refer to training a small-scale forward neural network on a small amount of data. For example, Huang et al. (Huang and Belongie 2017) proposed an arbitrary artistic style transfer method that can transfer the style of a style image onto a content image. Li et al. (Li et al. 2017) conducted whitening and coloring transforms (WCT) to endow the content features with the same statistical characteristics as the style features. However, these methods need a reference style image and fail to learn style information from the collection of artworks. To this end, CycleGAN (Zhu et al. 2017), DiscoGAN (Kim et al. 2017), and U-GAT-IT (Kim et al.
2019) adopt generative adversarial networks and a cycle consistency loss to realize the mapping between the style domain and the content domain in the RGB pixel space. These methods can learn style information from the collection of artworks and preserve the structure of the content image, but the cycle consistency loss adds an extra computational burden. Some researchers (Sanakoyeu et al. 2018; Kim et al. 2019; Park et al. 2020) proposed to leverage geometric consistency to preserve the structure of the content image. Although the aforementioned small model-based methods can generate stylized images while preserving the content structure, they fail to synthesize highly realistic stylized images.

Pre-trained Large-scale Model-based Methods. Large-scale models are trained on large amounts of data and can generate highly realistic images. For example, Stable Diffusion (Rombach et al. 2022) is a large-scale text-image generation model which can generate a new highly realistic image corresponding to a text prompt. Pix2pix-zero (Parmar et al. 2023) first proposed to automatically discover editing directions that reflect desired edits in the text embedding space, and conditions the diffusion model to generate the desired image. Ramesh et al. (Ramesh et al. 2022) solve the problem of text-conditional image generation with inverted CLIP text embeddings. Zhang et al. (Zhang et al. 2023a) proposed an inversion-based artistic style transfer method to learn the corresponding textual embedding from a single image and use it as a condition to guide the generation of artistic images. DiffuseIT (Kwon and Ye 2023) utilized a pre-trained ViT model to guide the generation process of DDPM models (Ho, Jain, and Abbeel 2020) in terms of preserving content structure. Yang et al. (Yang, Hwang, and Ye 2023) proposed a zero-shot contrastive loss for diffusion models that does not require additional fine-tuning or auxiliary networks.
These methods can perform artistic style transfer from an accurate text description or an exemplar style image but fail to learn and store style information from the collection of artworks. Unlike these methods, our proposed approach learns style information from the collection of artworks based on the proposed ISPB. Our proposed approach does not require explicit text or images as a condition (see Fig. 7) and can synthesize highly realistic artistic images while preserving the structure of the content image.

Method
Overview
Our proposed ArtBank includes an untrainable part (pre-trained large-scale models) and a trainable part (Implicit Style Prompt Bank, i.e., a set of learnable parameter matrices). The untrainable part utilizes a pre-trained large-scale model (Stable Diffusion, version 1.4) as a backbone, which can generate highly realistic images. The trainable part can learn and store the style information from the collection of artworks and condition the pre-trained large-scale model to generate highly realistic stylized images while preserving the structure of the content image. Meanwhile, we propose the Spatial-Statistical Self-Attention Module (SSAM) to accelerate the training of the ISPB. Once the training is completed, ArtBank can render arbitrary content images into highly realistic artistic stylized images while preserving the structure of the content image.

Implicit Style Prompt Bank
In order to learn the knowledge from a collection of artworks, an intuitive way is to unfreeze the parameters of the pre-trained large-scale model with the following loss (Nichol and Dhariwal 2021):

\mathcal{L}_{diff} = \mathbb{E}_{z,x,t}\left[ \| \epsilon - \epsilon_\theta(z_t, t) \|_2^2 \right], (1)

where z ∼ E(x) and ε ∼ N(0, 1). Once the loss converges, the trained model can render an arbitrary content image into an artistic stylized image. However, this naive approach will weaken the pre-trained large-scale model's ability, learned from previous massive data, to generate highly realistic stylized images.
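A minimal numeric sketch of the objective in Eq. (1): sample noise ε, form a noised latent z_t with an assumed noise-schedule value α̅_t, and score a noise prediction by mean squared error. The toy α̅_t value and the oracle predictor are our assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_loss(eps_pred, eps_true):
    """MSE between the sampled noise and the noise predicted by the
    denoiser eps_theta(z_t, t), as in Eq. (1)."""
    return float(np.mean((eps_pred - eps_true) ** 2))

# Toy 4x4 latent and a single forward-diffusion step.
z0 = rng.standard_normal((4, 4))
eps = rng.standard_normal((4, 4))
alpha_bar_t = 0.7  # assumed cumulative noise-schedule value at step t
z_t = np.sqrt(alpha_bar_t) * z0 + np.sqrt(1 - alpha_bar_t) * eps

# An oracle denoiser would recover eps from z_t exactly, driving the loss to 0.
print(diffusion_loss(eps, eps))  # 0.0
```

Fine-tuning all of ε_θ against this loss is what the paper argues erodes the pre-trained prior, motivating the frozen backbone plus trainable ISPB instead.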
To dig out the massive prior knowledge in the pre-trained large-scale model and extract the knowledge from the collection of artworks, we freeze the parameters of the pre-trained large-scale model and train an ISPB. The ISPB comprises a series of trainable parameter matrices, and each trainable parameter matrix corresponds to a collection of artworks. The problem we need to solve is how to teach these learnable parameters to learn and store the style information from the collection of artworks, and how to use these trainable parameters to condition the pre-trained large-scale model to generate highly realistic stylized images while preserving content structure.

In this paper, we use the pre-trained large-scale Stable Diffusion (SD) as the backbone. We argue that SD relies on CLIP-based codes, encoded by the CLIP text/image encoder, to guide the denoising process and generate the desired image. The CLIP text encoder can be divided into tokenizer and text transformer modules. Taking the text encoder as an example, a text prompt is converted into continuous vector representations vt. Although such vt is effective in guiding SD to generate the desired image, it cannot fully dig out the knowledge of SD in style transfer. Based on the above analysis, we first design some coarse text prompt templates (e.g., "a painting by Van Gogh *", where * is only a meaningless placeholder). The coarse text prompt template is then converted into continuous vector representations vt and v∗ (i.e., the coarse text prompt is converted into vt and * is converted into v∗).
Figure 3: The overview of our proposed ArtBank, which consists of two parts: an untrainable module and a trainable module. The untrainable module is a pre-trained large-scale diffusion model (the model used in this paper is SD 1.4), which can generate highly realistic images. The trainable module is the Implicit Style Prompt Bank (ISPB), which can learn and store style information from the collection of artworks. The stochastic inversion (bottom) is used in the diffusion process of the inference stage (upper right).

In the meantime, the learnable parameter matrix Im of the ISPB is also projected into a continuous style representation vector vm by our proposed SSAM (i.e., vm = SSAM(Im); the SSAM will be illustrated in Fig. 4). Then, we replace the embedding vector v∗ with the style representation vm. Finally, the embedding vectors (vt and vm) are transformed into a single conditioning code cθ(y). In the above process, only Im and SSAM need to be trained, and each collection of artworks needs a corresponding Imn and SSAMn. The SSAMn is primarily responsible for accelerating the training of Imn. We use the following loss function for training.
\mathcal{L}_{diff} = \mathbb{E}_{z,x,y,t}\left[ \| \epsilon - \epsilon_\theta(z_t, t, \mathrm{SSAM}(I_m), v_t) \|_2^2 \right], (2)

where z ∼ E(x), ε ∼ N(0, 1) and vt denotes the text prompt embedding. Once Im and SSAM are trained, our proposed ArtBank supports arbitrary content images to generate highly realistic stylized images while preserving content structure.

Spatial-Statistical Self-Attention Module
Fig. 4 illustrates the architecture of our proposed Spatial-Statistical Self-Attention Module (SSAM), which differs from previous self-attention approaches (Park and Lee 2019; Liu et al. 2021; Li et al. 2023b,a,c; Cui et al. 2022). Our novel SSAM can learn and evaluate the value change of the parameter matrix from both spatial and statistical perspectives. Specifically, we use row-column-wise attention for the spatial aspect and mean-variance-wise attention for the statistical aspect to extract parameter information. This approach can accelerate the convergence rate, reduce the volatility of parameter matrix updates, and dig out knowledge in SD. The SSAM starts with a trainable parameter matrix Im, which is encoded into a query (Q), key (K), and value (V):

Q = W_Q \cdot I_m, \quad K = W_K \cdot I_m, \quad V = W_V \cdot I_m, (3)

where W_Q, W_K, W_V are learnable convolution layers. The attention map A can be calculated as:

A = \mathrm{Softmax}(Q^\top \otimes K), (4)

where ⊗ denotes matrix multiplication. For the attention map A, we build a col-wise weight matrix W_col ∈ R^{HcWc×1} and a row-wise weight matrix W_row ∈ R^{1×HcWc}. To make the calculation easier, we replicate W_col and W_row along the column and row, respectively.

Figure 4: (a) The structure of SANet (Park and Lee 2019); (b) the structure of AdaAttN (Liu et al. 2021); (c) the structure of our proposed SSAM. Norm here denotes the mean-variance channel-wise normalization.

Then
we can obtain col-wise and row-wise attention maps as below:

A_{col} = A \cdot W_{col}, \quad A_{row} = A \cdot W_{row}, (5)

Then, \hat{A} = \alpha \cdot A_{col} + (1 - \alpha) \cdot A_{row}, where α is a learnable weight. Furthermore, the attention-weighted mean is

\hat{M} = V \cdot \hat{A}. (6)

The attention-weighted standard deviation \hat{S} ∈ R^{C×HcWc} is

\hat{S} = \sqrt{(V \cdot V) \otimes \hat{A}^\top - \hat{M} \cdot \hat{M}}, (7)

where · represents the element-wise product. Finally, the corresponding scale \hat{S} and shift \hat{M} are used to generate a transformed parameter matrix:

v_m = \hat{S} \cdot \mathrm{Norm}(I_m) + \hat{M}. (8)

We redefine this process as vm = SSAM(Im).

Stochastic Inversion
In pre-trained large-scale models, random noise plays a crucial role in preserving the content structure of stylized images (Hertz et al. 2022). However, random noise is hard to predict, and incorrectly predicted noise can cause a content mismatch between the stylized image and the content image. To this end, we first add random noise to the content image and use the denoising U-Net in the diffusion model to predict the noise in the image. The predicted noise is used as the initial input noise during inference to preserve content structure (called stochastic inversion, as shown at the bottom of Fig. 3). Based on this strategy, our proposed ArtBank can generate highly realistic stylized images while preserving better content structure.

Experiments
Implementation Details
We use the pre-trained large-scale diffusion model (SD version 1.4) as our backbone. We train our proposed module for each collection of artworks using two NVIDIA GeForce RTX3090 GPUs. The training process requires about 200,000 iterations with a batch size of 1 and takes about two days to complete for each collection. We use a base learning rate of 0.001. The art images are chosen from the WikiArt (Nichol 2016) dataset and scaled to 512×512 pixels. The training set size varies for each class: 401 for Van Gogh, 130 for Morisot, 1433 for Ukiyoe, 1072 for Monet, 584 for Cezanne, 292 for Gauguin, and 204 for Peploe.
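Eqs. (3)-(8) can be sketched in a few lines, treating the 1×1 convolutions as plain matrix products. The shapes, the clamping before the square root, and the broadcast form of W_col/W_row are our assumptions for a readable illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
C, HW = 8, 16  # channels and flattened spatial size (assumed)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def norm(x):
    """Mean-variance channel-wise normalization used in Eq. (8)."""
    return (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-8)

def ssam(Im, Wq, Wk, Wv, w_col, w_row, alpha):
    """Sketch of the SSAM forward pass, Eqs. (3)-(8)."""
    Q, K, V = Wq @ Im, Wk @ Im, Wv @ Im                     # Eq. (3)
    A = softmax(Q.T @ K, axis=-1)                           # Eq. (4)
    A_col = A * w_col          # col-wise reweighting, w_col broadcast over columns
    A_row = A * w_row          # row-wise reweighting, w_row broadcast over rows
    A_hat = alpha * A_col + (1 - alpha) * A_row             # Eq. (5)
    M_hat = V @ A_hat.T                                     # Eq. (6): weighted mean
    var = np.maximum((V * V) @ A_hat.T - M_hat * M_hat, 0)  # clamp for stability
    S_hat = np.sqrt(var)                                    # Eq. (7): weighted std
    return S_hat * norm(Im) + M_hat                         # Eq. (8)

Im = rng.standard_normal((C, HW))
Wq = rng.standard_normal((C, C)) * 0.1
Wk = rng.standard_normal((C, C)) * 0.1
Wv = rng.standard_normal((C, C)) * 0.1
v_m = ssam(Im, Wq, Wk, Wv,
           w_col=np.full((HW, 1), 1.0), w_row=np.full((1, HW), 1.0 / HW),
           alpha=0.5)
print(v_m.shape)  # (8, 16)
```

Note that the output keeps the shape of Im, so the transformed matrix can directly replace the placeholder token after a final projection to the text-embedding width.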
During inference, we randomly select content images from DIV2K (Agustsson and Timofte 2017) as the initial input images.

Qualitative Comparisons

Comparison With SOTA Style Transfer Methods. We compare our method with state-of-the-art artistic style transfer methods, including InST (Zhang et al. 2023a), DiffuseIT (Kwon and Ye 2023), SD (Rombach et al. 2022), AST (Sanakoyeu et al. 2018), CycleGAN (Zhu et al. 2017), LSeSim (Gao, Zhang, and Tian 2022) and CUT (Park et al. 2020). As representatives of small-model-based methods, AST, CycleGAN, LSeSim and CUT can generate stylized images with better content structure, but they also introduce artifacts and disharmonious patterns into the stylized images. As shown in Fig. 5, as a representative of pre-trained large-scale models, InST is trained on the diffusion model and can learn style information from a single style image. To make a fair comparison, we retrained InST using the same collection of artworks and text prompts as our proposed method. During inference, InST used the content image as the initial input image, with the text prompt and the content image as conditional inputs. Fig. 5 shows that InST still has limitations in preserving content structure compared to our approach. DiffuseIT and SD have even more limitations in preserving content structure. Compared to the above methods, our proposed ArtBank can not only fully mine the knowledge in the pre-trained large-scale model but also learn and store the style information from the collection of artworks, generating highly realistic stylized images while preserving better content structure.

Quantitative Comparisons

We now quantitatively demonstrate the superiority of our proposed method in artistic style transfer.
We compare our proposed method with other methods in terms of CLIP Score, Preference Score, and timing information.

Figure 5: Qualitative comparisons with SOTA artistic style transfer methods.

CLIP Score. CLIP (Radford et al. 2021) is a cross-modal model pre-trained on 400M image-caption pairs that can be used for robust automatic evaluation of the agreement between images and a text prompt (Hessel et al. 2021). The CLIP score measures the similarity between the text prompt and the artistic style images. As shown in Tab. 1, "Ground Truth" denotes the similarity between the text prompt and the collection of artworks. Taking the collection of artworks from Van Gogh as an example, we calculate the mean similarity between the 401 artistic images and a text prompt ("a painting by Van Gogh"). We also calculate the mean similarity between the 1,000 stylized images and the same text prompt. We employ the same strategy to calculate the CLIP score for the other collections of artworks, such as Morisot, Ukiyoe, Monet, etc. As shown in Tab. 1, our method achieves a higher CLIP score than the other state-of-the-art methods, and is even close to the ground-truth score.

Timing Information. The ninth row of Tab. 1 shows run-time comparisons on images of 512×512 pixels. Although our proposed method does not match the inference efficiency of the small-model-based methods, it is significantly faster than the other pre-trained large-scale-model-based methods.

Preference Score (Chen et al. 2021a; Zhang et al. 2023a). To measure the popularity of stylized images generated by two artistic style transfer methods, the preference score is commonly used.
In this section, we randomly selected 100 content images as input for our proposed method and the existing artistic style transfer methods, generating 100 stylized images for each method. To ensure a fair and efficient calculation of the preference score, we asked each participant to select their preferred stylized image, one at a time, from a set of 10 images generated by our method and 10 images generated by one of the other methods. Participants were instructed to prioritize artistic authenticity and content continuity between the stylized images and the content images.

Table 1: Quantitative comparisons with state-of-the-art methods. * denotes the average user preference.

            Ground truth  Ours         InST         SD           DiffuseIT    AST          CycleGAN     LSeSim       CUT
Van Gogh    0.7588        0.7321       0.7244       0.5440       0.6632       0.6736       0.6875       0.6727       0.7124
Morisot     0.8024        0.7447       0.6983       0.5013       0.6871       0.6659       0.7063       0.6730       0.7389
Ukiyoe      0.7495        0.7384       0.7272       0.5235       0.6953       0.6546       0.6504       0.6403       0.6553
Monet       0.7910        0.7556       0.7319       0.5031       0.6984       0.7249       0.7351       0.7125       0.7266
Cezanne     0.7760        0.7646       0.7332       0.5440       0.7216       0.7143       0.7363       0.7250       0.7563
Gauguin     0.8248        0.8190       0.7875       0.6231       0.7528       0.6839       0.7139       0.6730       0.7362
Peploe      0.7475        0.7355       0.7032       0.5396       0.6807       0.6677       0.6846       0.6327       0.6982
Time/sec                  3.5725       4.0485       3.7547       32.352       0.0312       0.0312       0.0365       0.0312
Preference                0.679*       0.572/0.428  0.708/0.292  0.664/0.336  0.685/0.315  0.683/0.317  0.672/0.328  0.769/0.231

Figure 6: The optimization efficiency comparison between SANet (Park and Lee 2019) and our proposed SSAM.

Table 2: The CLIP score between text prompt and stylized images.

          Full Model  w/o Text  w/ SANet  w/ AdaAttN
Van Gogh  0.7321      0.7248    0.7267    0.7310
Morisot   0.7447      0.7383    0.7395    0.7425
Ukiyoe    0.7384      0.7256    0.7286    0.7288
Monet     0.7556      0.7419    0.7456    0.7462
Cezanne   0.7646      0.7532    0.7569    0.7572
Gauguin   0.8190      0.8025    0.8036    0.8139
Peploe    0.7355      0.7232    0.7285    0.7325
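For reference, the preference score in Tab. 1 is simply the share of pairwise votes won by each method; a minimal sketch (the vote counts below are illustrative, not the study's raw data):

```python
def preference_score(votes_ours: int, votes_other: int) -> float:
    """Fraction of pairwise votes won by one method against a single competitor."""
    return votes_ours / (votes_ours + votes_other)

# Illustrative tally for one pairwise comparison (hypothetical counts).
ours, other = 143, 107
p = preference_score(ours, other)
print(round(p, 3), round(1 - p, 3))  # reported as a pair, e.g. 0.572/0.428

# The starred entry in Tab. 1 is the mean of our method's share
# across all seven pairwise comparisons.
pairs = [0.572, 0.708, 0.664, 0.685, 0.683, 0.672, 0.769]
print(round(sum(pairs) / len(pairs), 3))  # 0.679
```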
We recruited 200 participants to repeat the above process and collected a total of 2,000 votes. Tab. 1 shows the percentage of votes for each artistic style transfer method, indicating that our proposed method was the most popular.

Ablation Study

Self-attention can speed up the optimization of the diffusion model (Hessel et al. 2021). To demonstrate that our method with the proposed SSAM optimizes toward the target style faster, we show the optimization process (see Fig. 6) using our proposed SSAM or SANet (Park and Lee 2019). While SANet requires 1400 iterations to reach only incomplete convergence, SSAM requires just 400 iterations. Besides, we retrained ISPB using SANet and AdaAttN with the same number of iterations. As shown in Fig. 7, we observe that stylized images produced with the other attention mechanisms are less creative and exhibit fewer brushstroke details. The quality of stylized images also degrades in detail when the text prompt is removed.

Figure 7: The ablation study of the attention module and text prompt.

To further validate the effectiveness of our proposed module quantitatively, we also report CLIP scores, as shown in Tab. 2.

Conclusion

We introduce a novel artistic style transfer framework, called ArtBank, which addresses the challenge of digging out the knowledge in pre-trained large-scale models to generate highly realistic stylized images while preserving the content structure of the original content images. Extensive experiments demonstrate that our proposed method achieves state-of-the-art performance in artistic style transfer compared to existing SOTA methods. In the future, we look forward to designing a more effective visual prompt to fully dig out the prior knowledge of pre-trained large-scale models for generating highly realistic stylized images.
Acknowledgments

This work was supported in part by Zhejiang Province Program (2022C01222, 2023C03199, 2023C03201, 2019007, 2021009), the National Program of China (62172365, 2021YFF0900604, 19ZDA197), Ningbo Program (022Z167), and MOE Frontier Science Center for Brain Science & Brain-Machine Integration (Zhejiang University).

References

Agustsson, E.; and Timofte, R. 2017. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 126–135. Chen, H.; Wang, Z.; Zhang, H.; Zuo, Z.; Li, A.; Xing, W.; Lu, D.; et al. 2021a. Artistic style transfer with internal-external learning and contrastive learning. Advances in Neural Information Processing Systems, 34: 26561–26573. Chen, H.; Zhao, L.; Wang, Z.; Zhang, H.; Zuo, Z.; Li, A.; Xing, W.; and Lu, D. 2021b. Dualast: Dual style-learning networks for artistic style transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 872–881. Chen, H.; Zhao, L.; Zhang, H.; Wang, Z.; Zuo, Z.; Li, A.; Xing, W.; and Lu, D. 2021c. Diverse image style transfer via invertible cross-space mapping. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 14860–14869. IEEE Computer Society. Chen, J.; Ji, B.; Zhang, Z.; Chu, T.; Zuo, Z.; Zhao, L.; Xing, W.; and Lu, D. 2023. TeSTNeRF: text-driven 3D style transfer via cross-modal learning. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 5788–5796. Cui, X.; Zhang, Z.; Zhang, T.; Yang, Z.; and Yang, J. 2022. Attention graph: Learning effective visual features for large-scale image classification. Journal of Algorithms & Computational Technology, 16: 17483026211065375. Dhariwal, P.; and Nichol, A. 2021. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34: 8780–8794.
Fu, H.; Gong, M.; Wang, C.; Batmanghelich, K.; Zhang, K.; and Tao, D. 2019. Geometry-consistent generative adversarial networks for one-sided unsupervised domain mapping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2427–2436. Gao, X.; Zhang, Y.; and Tian, Y. 2022. Learning to Incorporate Texture Saliency Adaptive Attention to Image Cartoonization. arXiv preprint arXiv:2208.01587. Ge, S. 2022. Expressive Text-to-Image Generation with Rich Text. arXiv preprint arXiv:2304.06720. Hertz, A.; Mokady, R.; Tenenbaum, J.; Aberman, K.; Pritch, Y.; and Cohen-Or, D. 2022. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626. Hessel, J.; Holtzman, A.; Forbes, M.; Le Bras, R.; and Choi, Y. 2021. CLIPScore: A Reference-free Evaluation Metric for Image Captioning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 7514–7528. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33: 6840–6851. Huang, N.; Tang, F.; Dong, W.; and Xu, C. 2022. Draw your art dream: Diverse digital art synthesis with multimodal guided diffusion. In Proceedings of the 30th ACM International Conference on Multimedia, 1085–1094. Huang, X.; and Belongie, S. 2017. Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization. In 2017 IEEE International Conference on Computer Vision (ICCV), 1510–1519. Kim, J.; Kim, M.; Kang, H.; and Lee, K. 2019. U-gat-it: unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation. arXiv preprint arXiv:1907.10830. Kim, T.; Cha, M.; Kim, H.; Lee, J. K.; and Kim, J. 2017. Learning to discover cross-domain relations with generative adversarial networks. In International conference on machine learning, 1857–1865. PMLR. Kwon, G.; and Ye, J. C. 2023. 
Diffusion-based Image Translation using disentangled style and content representation. In The Eleventh International Conference on Learning Representations. Li, G.; Xing, W.; Zhao, L.; Lan, Z.; Sun, J.; Zhang, Z.; Zhang, Q.; Lin, H.; and Lin, Z. 2023a. Self-Reference Image Super-Resolution via Pre-trained Diffusion Large Model and Window Adjustable Transformer. In Proceedings of the 31st ACM International Conference on Multimedia, 7981–7992. Li, G.; Xing, W.; Zhao, L.; Lan, Z.; Zhang, Z.; Sun, J.; Yin, H.; Lin, H.; and Lin, Z. 2023b. DuDoINet: Dual-Domain Implicit Network for Multi-Modality MR Image Arbitrary-scale Super-Resolution. In Proceedings of the 31st ACM International Conference on Multimedia, 7335–7344. Li, G.; Zhao, L.; Sun, J.; Lan, Z.; Zhang, Z.; Chen, J.; Lin, Z.; Lin, H.; and Xing, W. 2023c. Rethinking Multi-Contrast MRI Super-Resolution: Rectangle-Window Cross-Attention Transformer and Arbitrary-Scale Upsampling. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 21230–21240. Li, Y.; Fang, C.; Yang, J.; Wang, Z.; Lu, X.; and Yang, M.-H. 2017. Universal style transfer via feature transforms. In Advances in neural information processing systems, 386–396. Liu, S.; Lin, T.; He, D.; Li, F.; Wang, M.; Li, X.; Sun, Z.; Li, Q.; and Ding, E. 2021. Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In Proceedings of the IEEE/CVF international conference on computer vision, 6649–6658. Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2021. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741. Nichol, A. Q.; and Dhariwal, P. 2021. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, 8162–8171. PMLR. Nichol, K. 2016. Painter by numbers, wikiart. https://www.
kaggle.com/c/painter-by-numbers. Accessed: 2016-5. Park, D. Y.; and Lee, K. H. 2019. Arbitrary style transfer with style-attentional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5880–5888. Park, T.; Efros, A. A.; Zhang, R.; and Zhu, J.-Y. 2020. Contrastive learning for unpaired image-to-image translation. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IX 16, 319–345. Springer. Parmar, G.; Singh, K. K.; Zhang, R.; Li, Y.; Lu, J.; and Zhu, J.-Y. 2023. Zero-shot image-to-image translation. arXiv preprint arXiv:2302.03027. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684– 10695. Sanakoyeu, A.; Kotovenko, D.; Lang, S.; and Ommer, B. 2018. A style-aware content loss for real-time hd style transfer. In proceedings of the European conference on computer vision (ECCV), 698–714. Sun, J.; Zhang, Z.; Chen, J.; Li, G.; Ji, B.; Zhao, L.; and Xing, W. 2023. VGOS: Voxel Grid Optimization for View Synthesis from Sparse Inputs. In Elkind, E., ed., Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI-23, 1414–1422. International Joint Conferences on Artificial Intelligence Organization. Main Track. Tumanyan, N.; Bar-Tal, O.; Bagon, S.; and Dekel, T. 2022. Splicing vit features for semantic appearance transfer. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10748–10757. Wang, Z.; Zhang, Z.; Zhao, L.; Zuo, Z.; Li, A.; Xing, W.; and Lu, D. 2022. AesUST: Towards Aesthetic-Enhanced Universal Style Transfer. In Proceedings of the 30th ACM International Conference on Multimedia, 1095–1106. Wu, X. 2022. Creative painting with latent diffusion models. arXiv preprint arXiv:2209.14697. Xie, J.; Li, Y.; Huang, Y.; Liu, H.; Zhang, W.; Zheng, Y.; and Shou, M. Z. 2023a. Boxdiff: Text-to-image synthesis with training-free box-constrained diffusion. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7452–7461. Xie, J.; Ye, K.; Li, Y.; Li, Y.; Lin, K. Q.; Zheng, Y.; Shen, L.; and Shou, M. Z. 2023b. Learning Visual Prior via Generative Pre-Training. In Thirty-seventh Conference on Neural Information Processing Systems. Yang, F.; Chen, H.; Zhang, Z.; Zhao, L.; and Lin, H. 2022. Gating PatternPyramid for diversified image style transfer. Journal of Electronic Imaging, 31(6): 063007. Yang, S.; Hwang, H.; and Ye, J. C. 2023. Zero-shot contrastive loss for text-guided diffusion image style transfer. Zhang, T.; Zhang, Z.; Jia, W.; He, X.; and Yang, J. 2021. Generating cartoon images from face photos with cycleconsistent adversarial networks. Computers, Materials and Continua. Zhang, Y.; Huang, N.; Tang, F.; Huang, H.; Ma, C.; Dong, W.; and Xu, C. 2023a. Inversion-Based Creativity Transfer with Diffusion Models. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Zhang, Z.; Sun, J.; Chen, J.; Zhao, L.; Ji, B.; Lan, Z.; Li, G.; Xing, W.; and Xu, D. 2023b. Caster: Cartoon style transfer via dynamic cartoon style casting. Neurocomputing, 556: 126654. Zhang, Z.; Sun, J.; Li, G.; Zhao, L.; Zhang, Q.; Lan, Z.; Yin, H.; Xing, W.; Lin, H.; and Zuo, Z. 2024. Rethink arbitrary style transfer with transformer and contrastive learning. Computer Vision and Image Understanding, 103951. 
Zhao, L.; Mo, Q.; Lin, S.; Wang, Z.; Zuo, Z.; Chen, H.; Xing, W.; and Lu, D. 2020. Uctgan: Diverse image inpainting based on unsupervised cross-space translation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5741–5750. Zheng, C.; Cham, T.-J.; and Cai, J. 2021. The spatially-correlative loss for various image translation tasks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 16407–16417. Zheng, W.; Li, Q.; Zhang, G.; Wan, P.; and Wang, Z. 2022. Ittr: Unpaired image-to-image translation with transformers. arXiv preprint arXiv:2203.16015. Zhu, J.-Y.; Park, T.; Isola, P.; and Efros, A. A. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, 2223–2232. Zuo, Z.; Zhao, L.; Li, A.; Wang, Z.; Zhang, Z.; Chen, J.; Xing, W.; and Lu, D. 2023. Generative Image Inpainting with Segmentation Confusion Adversarial Training and Contrastive Learning. arXiv preprint arXiv:2303.13133.
A New Benchmark and Model for Challenging Image Manipulation Detection

Zhenfei Zhang1, Mingyang Li2, Ming-Ching Chang1
1Department of Computer Science, University at Albany, State University of New York, New York, USA, 12222
2Department of Bioengineering, McGill University, Montreal, QC, Canada, H3A 0E9
zzhang45@albany.edu, mingyang.li@mail.mcgill.ca, mchang2@albany.edu

Abstract

The ability to detect manipulation in multimedia data is vital in digital forensics. Existing Image Manipulation Detection (IMD) methods are mainly based on detecting anomalous features arising from image editing or double compression artifacts. All existing IMD techniques encounter challenges when it comes to detecting small tampered regions in a large image. Moreover, compression-based IMD approaches face difficulties in cases of double compression with identical quality factors. To investigate State-of-The-Art (SoTA) IMD methods under these challenging conditions, we introduce a new Challenging Image Manipulation Detection (CIMD) benchmark dataset, which consists of two subsets for evaluating editing-based and compression-based IMD methods, respectively. The dataset images were manually captured and tampered, and come with high-quality annotations. In addition, we propose a new two-branch network model based on HRNet that can better detect both image-editing and compression artifacts under these challenging conditions. Extensive experiments on the CIMD benchmark show that our model significantly outperforms SoTA IMD methods on CIMD. The dataset is available at: https://github.com/ZhenfeiZ/CIMD.

Introduction

With the advancement of image editing and AI content generation, image editing, tampering, and content synthesis are becoming common. However, the abuse of these technologies can bring serious security and social impacts, including misinformation, disinformation, and deepfakes (Hu et al. 2021; Tolosana et al. 2020).
Image Manipulation Detection (IMD) methods that can accurately detect image manipulation regions are important in media forensics. There are three general types of image manipulation operations: (1) region splicing, where content from one image is copied and pasted onto another image; (2) region copy-move, where an image region is moved to another location within the same image; and (3) region removal, where parts of the image are erased and new content is synthesized. To accurately detect these manipulations, some methods rely on detecting anomalous image region or texture features, while others identify double compression artifacts.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Sample images of our dataset and comparison of image manipulation detection results with recent mainstream methods. The first three rows show manipulation by region copy-move, splicing, and removal, respectively. The last row shows double-compressed splicing with the same Quality Factor (QF). Our method achieves the new state-of-the-art in detecting challenging manipulation cases.

While the State-of-the-Art (SoTA) IMD methods perform well on mainstream public IMD datasets, they still face two challenges, as shown in Fig. 1. First, existing IMD methods have general difficulties in detecting relatively small tampered regions, due to their data-driven design under limited visual information. Second, approaches detecting double compression inconsistencies between two different quantization matrices fall apart when the compression Quality Factor (QF) remains the same. This is because using an identical Q-matrix can significantly suppress double compression artifacts. As shown in Fig. 3, methods in this category detect tampered regions by identifying missing histogram values arising from the two compression processes.
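The missing-histogram-value effect of Fig. 3 can be reproduced with a toy simulation of JPEG's quantize/dequantize step (scalar quantization steps stand in for full Q-matrices; the steps 7 and 5 are arbitrary illustrative choices): two different steps leave periodic empty bins in the coefficient histogram, while repeating the same step is idempotent and leaves no such trace.

```python
import numpy as np

def quantize(coeffs, step):
    # JPEG-style quantize/dequantize of DCT coefficients.
    return np.round(coeffs / step) * step

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=10.0, size=50_000)  # DCT coefficients are roughly Laplacian

# Different QFs: second-pass levels that can never be reached show up
# as missing values ("gaps") in the coefficient histogram.
double_diff = quantize(quantize(coeffs, step=7), step=5)
levels = set(np.unique(double_diff / 5).astype(int))
print(2 in levels)   # False: level 2 (value 10) is unreachable, i.e. a histogram gap

# Same QF: round(k*q / q) * q == k*q, so the second pass changes nothing
# and the tell-tale gaps never appear.
single = quantize(coeffs, step=7)
double_same = quantize(single, step=7)
print(np.array_equal(single, double_same))  # True: same-QF recompression is idempotent
```

This is exactly why the same-QF setting in CIMD-C is hard: the statistic these detectors rely on vanishes.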
When the same QF is used, the histogram undergoes very small changes, making it hard to detect double compression. In summary, as image tampering techniques improve increasingly fast, forensic problems remain ill-defined, and IMD research in general falls behind on challenging cases.

To address these issues and challenging conditions, we present a new two-branch IMD network incorporating both the RGB and frequency streams, such that both anomalous features and compression artifacts can be detected in a single framework.

Figure 2: Overview of the proposed two-branch architecture. The RGB stream detects anomalous features, while the frequency stream learns compression artifacts by feeding the image to the compression artifacts learning model, as depicted in Fig. 5. The ASPP in Fig. 6(a) is appended to each of the outputs, and the channel attention and spatial attention in Fig. 6(b)(c) interact between the outputs at each scale to improve detection performance on small manipulations.

Our network adopts HRNet (Wang et al. 2020) as a feature extractor, with parallel processing at four different scales, as in Fig. 2. To more precisely pinpoint tiny tampered regions, we carefully designed the model with Atrous Spatial Pyramid Pooling (ASPP) (Chen et al. 2017; Yang et al. 2021) and attention mechanisms (Vaswani et al. 2017; Hu, Shen, and Sun 2018). For the frequency stream, we feed the backbone with quantized DCT coefficients, the Q-matrix, and novel residual DCT coefficients from multiple recompressions to detect double compression artifacts. This design works regardless of whether the QFs are different or identical. To enhance the performance of the proposed two-branch model, we introduce an adaptive weighted heatmap aggregation design at the end, using soft selection to fuse the heatmaps generated by both branches. Our approach is distinct from the one used in (Cheng et al.
2020), which relies on a simple averaging operation.

Datasets play a critical role in training and evaluating the performance of models. There are no publicly accessible datasets for challenging IMD cases. Existing datasets (Dong, Wang, and Tan 2013a; Wen et al. 2016; Ng, Hsu, and Chang 2009; Guan et al. 2019; Amerini et al. 2011) exhibit a significant imbalance in the distribution of tampered images or contain only one image format, leading to unreliable measurement of the overall detection capability of models. Additionally, some datasets (Mahfoudi et al. 2019; Novozamsky, Mahdian, and Saic 2020) apply image tampering algorithms, e.g., (Daisy et al. 2014), to manipulate images from standard datasets such as MSCOCO, which raises concerns, as some IMD methods rely on MSCOCO-pre-trained backbones. To evaluate the effectiveness of IMD methods under challenging conditions, we propose a novel Challenging Image Manipulation Detection (CIMD) dataset with new features. CIMD consists of two subsets for the evaluation of image-editing-based and compression-based methods, respectively. The primary objective of the first subset is to evaluate the overall performance of image-editing-based methods in detecting small manipulation regions across all three types of manipulations. To ensure fair evaluation, we use raw images without any compression and ensure each type of manipulation contains the same number of samples. The main objective of the second subset is to assess the effectiveness of compression-based methods in detecting compression inconsistency using double-compressed images with identical QF. We created splicing manipulation images in which each double-compressed image was created using the same compression QF from 50 to 100. CIMD was captured and tampered manually, ensuring high-quality image samples and annotations. We thus provide a reliable and accurate benchmark for evaluating the performance of image manipulation detection models.
The availability of paired authentic and tampered images enables comprehensive evaluation of a model's ability to identify manipulated images. Contributions of this paper include:
• We present a two-branch architecture incorporating RGB and frequency features for challenging image manipulation detection. To our knowledge, our model is the first approach to focus on detecting small tampered regions.
• We introduce a pioneering compression artifacts learning model capable of detecting double-compression artifacts, regardless of whether the quality factors (QFs) are different or identical.
• We introduce a new high-quality CIMD benchmark for evaluating the performance of SoTA IMD methods on challenging manipulations.
Some large benchmark datasets, such as (Mahfoudi et al. 2019) and (Novozamsky, Mahdian, and Saic 2020), apply non-realistic questionable automatically forgeries methods (Daisy et al. 2014) to generate forgery images. In addition, to detect compression artifacts, (Kwon et al. 2022) created five custom datasets that are double compressed using different unreported QFs. Most existing datasets in image manipulation detection only focus on a specific type of manipulation or exhibit a significant imbalance in the distribution of tampered types. This results in unreliable measurement of a model’s overall detection capability. Furthermore, few datasets focus on challenging tampering detection. To address these limitations, we provide a novel dataset comprise two subsets: (1) Images with small manipulation regions, where each tampering type contains an equal number of instances, and (2) Images with spliced double-compression using identical QFs. Image Manipulation Detection Current methods for detecting image manipulation can be broadly classified into two categories that are distinguished by the manipulation artifacts they are designed to identify. Many technologies (Chen et al. 2021; Liu et al. 2022; Bi et al. 2019; Wu et al. 2022; Wu, AbdAlmageed, and Natarajan 2019; Hu et al. 2020; Yang et al. 2020; Marra et al. 2020; Wang et al. 2022) operate by detecting anomalous features. To accomplish this task, most of them utilize high-pass noise filters (Bayar and Stamm 2018; Li and Huang 2019) to suppress content information. Other approaches (Kwon et al. 2022; Park et al. 2018a; Mareen et al. 2022) seek to identify compression inconsistencies in tampered images, as they assume that the compression QF’s before and after manipulation differ. In addition to these two mainstream approaches, some researchers have directed their attention to camerabased artifacts, such as model fingerprints (Cozzolino and Verdoliva 2019; Cozzolino, Poggi, and Verdoliva 2015; Huh et al. 
2018; Mareen et al. 2022). In contrast to the methods mentioned above, our proposed approach employs a two-branch architecture that leverages both anomalous features and compression inconsistencies to detect image manipulation under more challenging conditions, which many current methods struggle to handle.

The Challenging Image Manipulation Detection Dataset (CIMD)

In this work, we aim to build a comprehensive validation dataset (CIMD) dedicated to small-region forgery (less than 1.5% on average) in both compressed and uncompressed scenarios. Our dataset is superior in image quality, image diversity, and forgery strategy. Two separate subsets are introduced to evaluate image-editing-based and compression-based methods, respectively.

Collection. We captured the original images using a Canon RP camera, encompassing both uncompressed TIFF and compressed JPEG forgery-original image pairs. These captures were taken across highly diverse multi-season settings characterized by intricate and sophisticated lighting conditions. Our intention was to offer an impartial and all-encompassing assessment of models in a real-life context.

Two Disentangled Sub-Datasets. We offer two subsets: the CIMD-Raw subset consists of pairs of original, uncompressed TIFF images for the evaluation of image-editing-based methods. The CIMD-Compressed subset encompasses splicing forgeries and their corresponding original JPEG images with uniform quality factors (QFs) ranging from 50 to 100. This subset evaluates the capability of compression-based models in detecting forgery under the
White pixels indicate the unstable region where DCT coefficients change after compression, which gradually focus on the tampered region as the count increases. same QF conditions. Processing and Tampering. We used Photoshop 2023 (PS) to process and create tampering photos due to its popularity in other datasets mentioned in the related work section and its popularity in general public. The CIMD-Raw (CIMD-R) Subset The CIMD-R benchmark provides a comprehensive evaluation of the image-editing-based models’ performance in detecting small tampered copy-move, object-removal, and splicing forgeries on uncompressed images. The use of uncompressed images eliminates undesired compression artifacts on forgery region that can be otherwise sensed by neural networks, enabling a more true performance evaluation on out-of-detection. CIMD-R comprises 600 TIFF images, with a resolution of 2048 × 1365. Ground-truth masks are also provided. In addition, CIMD-R adopts a future-oriented approach by providing 16-bit image pairs that offer up to 248 (trillions of) colors. For copy-move manipulation, a part of an image is copied and pasted within the same image, followed by five post-processing methods: scaling, rotation, level/curve increasing, illumination changing, and color redistribution. In the case of removal manipulation, forged images are synthesized by removing the selected region from the image (via Content-Aware Fill in PS). Content-Aware Fill is widely used in several datasets (Park et al. 2018b; Dong, Wang, and Tan 2013b) and represents the PS’s best guess to inpaint the object according to the surrounding region. For splicing forgery, regions from one image are copied and pasted into another source. Then, the same postprocessing methods mentioned in copy-move are applied to make the forged region harmonious with its surroundings. 
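Although the CIMD forgeries were created manually in Photoshop, the core copy-move operation and its binary ground-truth mask can be illustrated programmatically. The sketch below is a simplified stand-in (no post-processing such as scaling or rotation); the function name and toy image are our own, not part of the dataset pipeline.

```python
import numpy as np

def copy_move(image, src_yx, dst_yx, size):
    """Paste a square patch from src_yx to dst_yx inside the same image,
    returning the forged image and its binary ground-truth mask."""
    forged = image.copy()
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    sy, sx = src_yx
    dy, dx = dst_yx
    patch = image[sy:sy + size, sx:sx + size]
    forged[dy:dy + size, dx:dx + size] = patch
    mask[dy:dy + size, dx:dx + size] = 1  # only the pasted region counts as tampered
    return forged, mask

# toy 64x64 RGB image
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
fake, gt = copy_move(img, src_yx=(0, 0), dst_yx=(40, 40), size=8)
```

In the real dataset the source region, destination, and post-processing are chosen by a human editor, which is what makes the forged regions visually harmonious and hard to detect.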
The CIMD-Compressed (CIMD-C) Subset The CIMD-C benchmark is designed to evaluate the capability of compression-based models in detecting double JPEG compression artifacts, where the primary and secondary compressions have the same QF. The dataset comprises 200 JPEG images with a resolution of 2048 × 1365, wherein the QF is uniformly distributed over 50 ≤ QF < 100. Forgery images are generated akin to CIMD-R's splicing samples, with the distinction that the forged image is saved using the JPEG compression algorithm, employing the same QF as the original image. The original images were produced from RAW files, ensuring that they are compressed for the first time and enhancing the dataset's credibility. In the forgery images, the background is double-compressed, while the tampered regions are single-compressed. Furthermore, the dataset also comprises binary masks and the QF values utilized for compression, thereby augmenting its utility for further investigations into the effects of different QFs. The Proposed IMD Method The two-branch architecture we propose enables the detection of both anomalous features and compression artifacts, inspired by (Kwon et al. 2022). Furthermore, our model is effective for detecting small manipulation regions and identifying double compression traces that apply the same quantization matrix (Q-matrix). To achieve our research objectives, we adopted HR-Net (Wang et al. 2020) as the backbone of our model, based on its ability to offer three-fold benefits. Firstly, the absence of pooling layers in HR-Net ensures that the features maintain high resolutions throughout the entire process. Secondly, the model processes features from different scales in parallel with effective information exchange, which is essential for capturing information of varying scales. Finally, the input size of HR-Net is ideally suited for DCT features.
After processing by dilated convolution with a rate of 8, the size of the DCT feature is reduced to 1/8 of the input size, which is equivalent to the second-stage resolution of HR-Net. Network Architecture The network architecture comprises two branches, one for detecting anomalous features and the other for identifying compression artifacts, as in Fig. 2. For the RGB stream, the input image is fed to a full HR-Net, which learns the image-editing traces from the visual content. In the frequency stream, the image is first input to the proposed compression artifact learning module shown in Fig. 5 to extract various DCT features. Subsequently, the DCT features are fed to a variant of HR-Net, which operates at three different resolutions (1/8, 1/16, and 1/32). To precisely pinpoint small tampering regions, we carefully designed our model using both Atrous Spatial Pyramid Pooling (ASPP), shown in Fig. 6(a), and an attention mechanism, shown in Fig. 6(b)(c). The ASPP captures long-range information via various receptive fields and handles scale variations. It consists of three dilated convolutional layers with different rates and a Global Average Pooling (GAP). The resulting features are concatenated and passed to a 1 × 1 convolution. Figure 5: The compression artifact learning module. Three types (de-quantized, quantized, and residual quantized) of DCT features are fed into the backbone to learn double compression artifacts regardless of whether the QFs are the same or not. Figure 6: Detailed structure of (a) the Atrous Spatial Pyramid Pooling (ASPP), (b) channel attention, and (c) spatial attention. The starting point for designing an attention mechanism between the resolution outputs of HR-Net lies in the understanding that the four scale features extracted from HR-Net contain a diverse range of semantic and spatial information.
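The ASPP design described above can be sketched in plain numpy: parallel dilated 3 × 3 convolutions at several rates plus a GAP branch, concatenated and fused by a 1 × 1 convolution. The dilation rates, channel counts, and random weights below are illustrative stand-ins, not the paper's actual configuration.

```python
import numpy as np

def dilated_conv3x3(x, w, rate):
    """'Same'-padded 3x3 dilated convolution. x: (H, W, Cin), w: (3, 3, Cin, Cout)."""
    H, W, _ = x.shape
    xp = np.pad(x, ((rate, rate), (rate, rate), (0, 0)))
    out = np.zeros((H, W, w.shape[3]))
    for i in range(3):
        for j in range(3):
            # each tap sees a pixel `rate` steps away -> enlarged receptive field
            out += xp[i * rate:i * rate + H, j * rate:j * rate + W] @ w[i, j]
    return out

def aspp(x, rates=(1, 2, 4), cout=8, rng=np.random.default_rng(0)):
    """ASPP: parallel dilated branches + a global-average-pooling branch,
    concatenated and mixed by a 1x1 convolution."""
    cin = x.shape[2]
    branches = [dilated_conv3x3(x, rng.normal(size=(3, 3, cin, cout)), r) for r in rates]
    gap = x.mean(axis=(0, 1), keepdims=True)                 # (1, 1, Cin) global context
    branches.append(np.broadcast_to(gap, x.shape) @ rng.normal(size=(cin, cout)))
    cat = np.concatenate(branches, axis=2)                   # (H, W, len(rates)+1 times cout)
    return cat @ rng.normal(size=(cat.shape[2], cout))       # 1x1 conv fusion

feat = aspp(np.random.default_rng(1).normal(size=(16, 16, 4)))
```

The different rates let a small object and its wider context be observed at the same spatial position, which is the property the paper relies on for localizing small tampered regions.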
Specifically, the high-resolution features contain more spatial content, whereas the low-resolution features carry more semantic responses. However, most prior methods simply upsample and concatenate these features for detection without adequately considering their interdependencies. The attention mechanism aims to fully leverage the information provided by each resolution and improve detection performance. Specifically, the approach utilizes channel attention along a bottom-up path and spatial attention along a top-down path, where the two attention modules collaborate to enhance the features interactively. We next describe how attention works interactively in the RGB stream; the procedure is virtually identical in the frequency stream, apart from a different number of output resolution branches. Given an RGB input image I with width W and height H, I ∈ R^(H×W×3), the HR-Net output features at four resolutions can be denoted as F_1 ∈ R^(H/4×W/4×C_1), F_2 ∈ R^(H/8×W/8×C_2), F_3 ∈ R^(H/16×W/16×C_3), and F_4 ∈ R^(H/32×W/32×C_4), with C_1 = 48, C_2 = 96, C_3 = 192, C_4 = 384 as the default setting. The bottom-up channel attention features are calculated using: F_n = C(F_{n+1}) ⊙ F_n, n = 1, 2, 3, (1) where C(·) denotes the channel attention block in Fig. 6(b) and ⊙ represents element-wise multiplication. As F_4 contains the highest level of semantic information, it remains unchanged at the channel level. As a detail of channel attention, the feature maps F_{n+1} first undergo a transformation through a 1 × 1 convolutional layer. This transformation ensures that the number of channels of F_{n+1} matches that of F_n, thereby enabling the element-wise multiplication to be performed effectively in the channel dimension. We set the transformed channel number as C′.
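The bottom-up channel attention of Eq. (1) and the top-down spatial attention that follows it can be sketched as below. Random weights stand in for the learned 1 × 1 convolutions and excitation layers; shapes and the reduction ratio r = 4 follow the text, everything else is an illustrative assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(f_next, f, r=4, rng=np.random.default_rng(0)):
    """Bottom-up channel attention, Eq. (1): F_n = C(F_{n+1}) * F_n.
    f_next: (h, w, C_next) coarser feature; f: (H, W, C) finer feature."""
    c_next, c = f_next.shape[2], f.shape[2]
    aligned = f_next @ rng.normal(size=(c_next, c))      # 1x1 conv: match channel count
    squeezed = aligned.mean(axis=(0, 1))                 # GAP -> (C,)
    w1 = rng.normal(size=(c, c // r))
    w2 = rng.normal(size=(c // r, c))
    gate = sigmoid(np.maximum(squeezed @ w1, 0) @ w2)    # excitation: C -> C/r -> C
    return f * gate                                      # broadcast over (H, W, C)

def spatial_attention(f_prev, f, rng=np.random.default_rng(1)):
    """Top-down spatial attention, Eq. (2): F_m = S(F_{m-1}) * F_m.
    Both inputs are assumed already upsampled to the same (H, W)."""
    pooled = np.stack([f_prev.max(axis=2), f_prev.mean(axis=2)], axis=2)  # [Pmax; Pavg]
    gate = sigmoid(pooled @ rng.normal(size=(2, 1)))     # 1x1 conv -> (H, W, 1)
    return f * gate

f1 = np.random.default_rng(2).normal(size=(8, 8, 48))    # fine feature (C1 = 48)
f2 = np.random.default_rng(3).normal(size=(4, 4, 96))    # coarse feature (C2 = 96)
f1_ca = channel_attention(f2, f1)                        # Eq. (1)
f1_out = spatial_attention(f1_ca, f1_ca)                 # Eq. (2), same-resolution demo
```

The key point the code makes explicit is that channel attention produces a per-channel gate from the coarser, more semantic map, while spatial attention produces a per-pixel gate from the finer, more spatial map.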
The transformed features are subsequently fed to a Global Average Pooling, denoted GAP(·), followed by the excitation process E(·): C′ → C′/r → C′, with r = 4. The channel attention is calculated as C(F) = σ(E(GAP(Conv_{1×1}(F)))), where σ(·) is the Sigmoid activation function. Following the application of bottom-up channel attention, the feature maps F_2, F_3, and F_4 are upsampled using bilinear upsampling to match the resolution of F_1. The spatial attention mechanism along the top-down pathway is then applied, which is given by: F_m = S(F_{m−1}) ⊙ F_m, m = 2, 3, 4, (2) where S(·) is the spatial attention in Fig. 6(c). As F_1 contains the richest spatial information, it remains unchanged at the spatial level. The spatial attention is calculated using the Spatial Max Pooling P_max and Spatial Average Pooling P_avg as S(F) = σ(Conv_{1×1}([P_max(F); P_avg(F)])), where [;] denotes concatenation. The feature maps of each branch, after undergoing upsampling and interactive attention, have the same resolution. These features are then concatenated to form the final features for adaptive weighted heatmap aggregation in the inference stage. Our model generates two final heatmaps, which are aggregated through soft selection. Specifically, we employ bilinear upsampling to upscale the heatmap of the frequency stream to match the resolution of the RGB-stream heatmap. Following this, we apply the Softmax activation function to the heatmaps and then use Global Max Pooling (GMP), denoted GMP(·), to select the main heatmap and its corresponding weight. This selection is based on the higher value, which indicates a stronger localization response than the other heatmap. We denote the main and secondary heatmaps as h_m and h_s. Thus the weighted aggregated heatmap h is generated using: h = GMP(h_m) · h_m + (1 − GMP(h_m)) · h_s. (3) Finally, the same as (Chen et al.
2021), we apply a non-trainable GMP over the predicted binary mask to perform image-level detection, since image-level detection is highly related to pixel-wise prediction. JPEG Compression Artifacts Learning Model Our compression learning model aims to identify compression artifacts in double-compressed images, regardless of whether the primary and secondary compressions have the same QF or not. Several approaches attempt to detect inconsistencies in the DCT histogram, as illustrated in Fig. 3(b)(c). It should be noted that when double compression is performed using the same Q-matrix, histogram-based methods are not effective, since there are very few compression inconsistencies, as shown in Fig. 3(d). Fortunately, some traces can still be detected even under such conditions. It was observed in (Huang, Huang, and Shi 2010) that when a JPEG image is repeatedly compressed using the same QF, the number of differing quantized DCT coefficients between two consecutive compressions decreases monotonically. Several methods (Peng et al. 2018; Yang et al. 2014; Niu et al. 2021) leverage this evidence to determine whether an image has been single- or double-compressed. In contrast to previous approaches, we investigate the feasibility of leveraging this trace to localize tampered regions in an image. Fig. 4 shows that when a spliced image is created using the same QF, the manipulated region is singly compressed, whereas the background regions are doubly compressed. Consequently, when the image is repeatedly compressed, unstable quantized DCT coefficients gradually concentrate on the tampered area, while the authentic regions remain relatively stable. Based on this observation, we introduce a novel residual DCT map to guide the DCT features to better focus on the unstable regions for IMD. Our method focuses only on the Y-channel DCT map, as this channel is the most sensitive to human eyes.
Method                       | Pixel-level F1  | Image-level
                             | Best    Fixed   | AUC     Acc
RRU-Net (Bi et al. 2019)     | 0.126   0.103   | 0.500   0.500
CR-CNN (Yang et al. 2020)    | 0.126   0.088   | 0.513   0.502
MantraNet (Wu et al. 2019)   | 0.051   0.018   | 0.500   0.500
SPAN (Hu et al. 2020)        | 0.160   0.045   | 0.510   0.498
HiFi IFDL (Guo et al. 2023)  | 0.145   0.115   | 0.502   0.502
PSCC-Net (Liu et al. 2022)   | 0.208   0.118   | 0.514   0.505
CAT-Net (Kwon et al. 2022)   | 0.301   0.194   | 0.589   0.537
MVSS-Net (Chen et al. 2021)  | 0.234   0.153   | 0.568   0.515
IF-OSN (Wu et al. 2022)      | 0.184   0.103   | 0.516   0.522
Ours                         | 0.444   0.335   | 0.677   0.545

Table 1: Evaluation results for image-editing-based methods using CIMD-R. Pixel-level F1 scores are calculated using both the best and a fixed (0.5) threshold. For image-level performance, AUC and image-level accuracy are reported.

Given a JPEG image, it is easy to obtain the Y-channel quantized DCT coefficients Q_0 and the corresponding Q-matrix from the JPEG file header. The Q-matrix is first repeated (tiled) to the same size as Q_0, and we denote the repeated Q-matrix by q. We then compute the (k+1)-th recompression quantized JPEG coefficients Q_{k+1} using the following equations sequentially:

    D_k = Q_k ⊙ q,
    B_k = IDCT(D_k),
    I_{k+1} = RT(B_k),
    Q_{k+1} = [DCT(I_{k+1}) ⊘ q],    (4)

where ⊘ denotes element-wise division, and D, B, I, and Q represent de-quantized DCT coefficients, de-transformed blocks obtained via the inverse DCT, image blocks, and quantized JPEG coefficients, respectively. The subscripts in the above equations represent the number of recompressions, and we experimentally set k = 7. RT(·) is the rounding-and-truncation operation, and [·] denotes the rounding operation. Thus, the residual de-quantized DCT coefficients R after k recompressions are defined as:

    R = (1/k) · Σ_{i=1}^{k} (Q_i − Q_{i−1}).    (5)

For the original Y-channel DCT coefficients Q_0, we perform a clipping operation using a threshold value T, after which we convert them into a binary volume. Denote this binary-value conversion as f: Q_0 ∈ R^(H×W) → {0, 1}^((T+1)×H×W).
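The recompression loop of Eqs. (4)-(5) can be simulated on a single 8 × 8 block, assuming an orthonormal DCT-II basis and a constant Q-matrix. The helper names and the toy block are our own; a real implementation would run this per block over the full Y channel.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: rows are frequency components."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2)
    return m

D = dct_matrix()

def recompress_residual(Q0, q, k=7):
    """Eqs. (4)-(5): repeatedly de-quantize, invert, re-transform and re-quantize
    one 8x8 block with the same Q-matrix q, returning the mean residual R."""
    Qk = Q0.astype(float)
    residual = np.zeros_like(Qk)
    for _ in range(k):
        block = D.T @ (Qk * q) @ D                    # D_k = Q_k * q, then IDCT
        block = np.clip(np.round(block), -128, 127)   # RT(.): rounding and truncation
        Qnext = np.round((D @ block @ D.T) / q)       # forward DCT, re-quantize, round
        residual += Qnext - Qk
        Qk = Qnext
    return residual / k

q = np.full((8, 8), 10.0)                             # constant toy Q-matrix
rng = np.random.default_rng(0)
Q0 = np.round((D @ rng.integers(-50, 50, (8, 8)) @ D.T) / q)
R = recompress_residual(Q0, q)
```

Per (Huang, Huang, and Shi 2010), coefficients of an already double-compressed region settle quickly under this loop, so |R| stays small there, while singly compressed (tampered) regions keep changing and light up in R.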
It is shown in (Yousfi and Fridrich 2020) that f is effective in evaluating the correlation between coefficients in the DCT histogram. Therefore, the DCT coefficients Q_0 are converted to binary volumes as:

    f(Q_0^t(i, j)) = 1, if |clip(Q_0(i, j))| = t, t ∈ [0, T];
                     0, otherwise.

The function clip(·) is utilized to restrict the histogram feature to [−T, T], which is essential given GPU memory constraints. We set T = 20 based on the experiments. Additionally, we apply the absolute-value operation, as the DCT histogram exhibits symmetry.

Method                         | Pixel-level F1  | Image-level
                               | Best    Fixed   | AUC     Acc
DJPEG (Park et al. 2018a)      | 0.026   0.022   | 0.500   0.500
Comprint (Mareen et al. 2022)  | 0.030   0.010   | 0.467   0.500
CAT-Net (Kwon et al. 2022)     | 0.395   0.259   | 0.534   0.490
Ours                           | 0.542   0.442   | 0.727   0.525

Table 2: Evaluation results for compression-based methods on the CIMD-C subset.

The compression artifact learning module involves two element-wise multiplication operations. The first multiplication is performed between the histogram features and the Q-matrix, which simulates the JPEG de-quantization procedure. The second multiplication guides the histogram features to focus more on unstable coefficients, a critical step for detecting double-compressed images with the same QF. In an 8 × 8 block of DCT coefficients, each coefficient position represents a specific frequency component. However, the convolution operations in the backbone are designed for RGB images and ignore these frequency relationships. To fully exploit the spatial and frequency information of the DCT coefficients, a reshaping operation is necessary. In detail, each block of size (8 × 8 × 1) is reshaped into a size of (1 × 1 × 64). Thus, the first and second dimensions represent spatial information, while the third dimension represents the frequency relationship.
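The conversion f and the block-to-channel reshape described above can be sketched as follows. T = 20 follows the text; the function names are our own.

```python
import numpy as np

def dct_to_binary_volume(Q0, T=20):
    """Conversion f: clip quantized DCT coefficients to [-T, T], take the
    absolute value (the DCT histogram is symmetric), and one-hot over magnitude."""
    mag = np.abs(np.clip(Q0, -T, T))
    t = np.arange(T + 1).reshape(-1, 1, 1)
    return (mag[None] == t).astype(np.float32)          # (T+1, H, W)

def blocks_to_channels(coeffs):
    """Reshape each 8x8 DCT block into 64 channels so the spatial dims index
    blocks and the channel dim indexes frequency components."""
    H, W = coeffs.shape
    x = coeffs.reshape(H // 8, 8, W // 8, 8)
    return x.transpose(0, 2, 1, 3).reshape(H // 8, W // 8, 64)

vol = dct_to_binary_volume(np.array([[0, -3], [25, 7]]))
feat = blocks_to_channels(np.arange(256.0).reshape(16, 16))
```

After the reshape, an ordinary 2D convolution slides over blocks rather than over individual coefficients, so all 64 frequency components of a block are visible to the filter at once.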
Next, the de-quantized, quantized, and residual histogram features are concatenated along the channel dimension. Finally, the concatenated features are input to a 1 × 1 convolutional layer and the backbone network for the detection task. Experimental Results We first describe the experimental setup and then compare the proposed network with state-of-the-art methods on the newly proposed CIMD dataset. Datasets. The training datasets used in this study were adopted from (Kwon et al. 2022). The testing phase entailed the utilization of CIMD-R and CIMD-C to evaluate the efficacy of image-editing-based and compression-based methods, respectively. Evaluation metrics. Following most previous work, we evaluated the localization results using pixel-level F1 scores with both the optimal and a fixed 0.5 threshold. For image-level detection, we employed AUC and image-level accuracy, with 0.5 as the accuracy threshold. Only tampered images are used for the manipulation localization evaluation. Implementation details. Our model was implemented using PyTorch (Paszke et al. 2019) and trained on 8 RTX 2080 GPUs with a batch size of 4. We set the initial learning rate to 0.001 with exponential decay. The training process consists of 250 epochs. The proposed model is designed to accept various image formats, including both JPEG and non-JPEG formats. The training objective is to minimize the pixel-level binary cross-entropy.

Method            | CIMD-R Subset  | CIMD-C Subset
                  | F1      AUC    | F1      AUC
RGB Stream        | 0.330   0.593  | 0.409   0.525
Frequency Stream  | 0.130   0.531  | 0.301   0.512
RGB + Frequency   | 0.335   0.677  | 0.442   0.727

Table 3: Ablation study of the two streams working collaboratively and/or separately.
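The pixel-level F1 protocol (best threshold vs. fixed 0.5) can be sketched as below; the best threshold is searched per image, as the comparison section describes, and the grid resolution is our own choice.

```python
import numpy as np

def f1_score(pred, gt, thr):
    """Pixel-level F1 of a probability map `pred` against boolean mask `gt`."""
    p = pred >= thr
    tp = np.logical_and(p, gt).sum()
    denom = p.sum() + gt.sum()
    return 2.0 * tp / denom if denom else 1.0

def best_and_fixed_f1(pred, gt, fixed=0.5, grid=np.linspace(0, 1, 101)):
    """Return (best-threshold F1, fixed-threshold F1) for one image."""
    best = max(f1_score(pred, gt, t) for t in grid)
    return best, f1_score(pred, gt, fixed)

# toy example: predictions perfectly separate tampered from authentic pixels
gt = np.zeros((4, 4), dtype=bool)
gt[:2, :2] = True
pred = np.where(gt, 0.9, 0.1)
best, fixed = best_and_fixed_f1(pred, gt)
```

Reporting both numbers matters: the best-threshold score measures how separable the prediction map is, while the fixed-threshold score measures how well calibrated it is in deployment.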
Comparison With State-of-the-Art To guarantee a fair comparison and to evaluate previous models on the newly introduced CIMD, we select state-of-the-art approaches using two criteria: (1) a pretrained model is publicly available, and (2) the evaluation datasets we use are not in their training sets. Following these criteria, we select RRU-Net, MantraNet, HiFi IFDL, CR-CNN, SPAN, PSCC-Net, MVSS-Net, IF-OSN, CAT-Net, DJPEG, and Comprint. All of the compared works are appropriately referenced in the related work section. We use CIMD-R to evaluate the performance of image-editing-based methods, while CIMD-C is utilized for compression-based approaches. Evaluation using the CIMD-R subset. Table 1 reports the results of image-editing-based methods on CIMD-R, in which all image samples are uncompressed. Two pixel-level F1 scores are calculated, using the best F1 threshold for each image and a fixed F1 threshold of 0.5, respectively. Best scores are highlighted in bold. Our method outperforms existing SoTA methods in both image-level and pixel-level evaluation, which demonstrates its superiority for detecting small tampering regions. Evaluation using the CIMD-C subset. Table 2 compares the performance of compression-based IMD methods, where all image samples are double compressed using the same QF and the evaluation settings are consistent with those used in Table 1. Our method is again the best performer in terms of overall performance, highlighting the effectiveness of our approach for double-compressed images with the same QF. Ablation study. We provide a simple ablation study in Table 3. Observe that our RGB stream is effective on both compressed and uncompressed data. Notably, the frequency stream fails to produce satisfactory results on CIMD-R due to the absence of compression artifacts. However, when the two branches work collaboratively, the model's performance improves in both localization and detection evaluation.
Conclusion This study presents a novel Challenging Image Manipulation Detection (CIMD) dataset, which comprises two subsets designed for evaluating image-editing-based and compression-based approaches, respectively. The datasets were manually captured and tampered with, and come with high-quality annotations. Additionally, we propose a two-branch method that outperforms state-of-the-art models in detecting image manipulations on the CIMD dataset. We have released our dataset to facilitate future research. Ethics Statement To ensure ethical compliance, all photos presented in our dataset are original and obtained either in public places or with the owners' explicit permission in private places, in accordance with local jurisdiction laws. Moreover, the authors ensure that the photos contain neither identifiable individuals nor personal information. As advised by institutional review boards (IRB), IRB approval is not required for the dataset. References Amerini, I.; Ballan, L.; Caldelli, R.; Del Bimbo, A.; and Serra, G. 2011. A SIFT-based forensic method for copy-move attack detection and transformation recovery. IEEE Transactions on Information Forensics and Security. Bayar, B.; and Stamm, M. C. 2018. Constrained convolutional neural networks: A new approach towards general purpose image manipulation detection. IEEE Transactions on Information Forensics and Security, 13(11): 2691–2706. Bi, X.; Wei, Y.; Xiao, B.; and Li, W. 2019. RRU-Net: The Ringed Residual U-Net for Image Splicing Forgery Detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Chen, L.-C.; Papandreou, G.; Schroff, F.; and Adam, H. 2017. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587. Chen, X.; Dong, C.; Ji, J.; Cao, J.; and Li, X. 2021. Image manipulation detection by multi-view multi-scale supervision.
In IEEE/CVF International Conference on Computer Vision, 14185–14193. Cheng, B.; Xiao, B.; Wang, J.; Shi, H.; Huang, T. S.; and Zhang, L. 2020. HigherHRNet: Scale-aware representation learning for bottom-up human pose estimation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5386–5395. Cozzolino, D.; Poggi, G.; and Verdoliva, L. 2015. Splicebuster: A new blind image splicing detector. In IEEE International Workshop on Information Forensics and Security (WIFS), 1–6. IEEE. Cozzolino, D.; and Verdoliva, L. 2019. Noiseprint: A CNN-Based Camera Model Fingerprint. IEEE Transactions on Information Forensics and Security, 15: 144–159. Daisy, M.; Buyssens, P.; Tschumperlé, D.; and Lézoray, O. 2014. A smarter exemplar-based inpainting algorithm using local and global heuristics for more geometric coherence. In IEEE International Conference on Image Processing (ICIP), 4622–4626. IEEE. Dong, J.; Wang, W.; and Tan, T. 2013a. CASIA image tampering detection evaluation database. In 2013 IEEE China Summit and International Conference on Signal and Information Processing, 422–426. IEEE. Dong, J.; Wang, W.; and Tan, T. 2013b. CASIA image tampering detection evaluation database. In IEEE China Summit and International Conference on Signal and Information Processing, 422–426. IEEE. Guan, H.; Kozak, M.; Robertson, E.; Lee, Y.; Yates, A. N.; Delgado, A.; Zhou, D.; Kheyrkhah, T.; Smith, J.; and Fiscus, J. 2019. MFC datasets: Large-scale benchmark datasets for media forensic challenge evaluation. In IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 63–72. IEEE. Guo, X.; Liu, X.; Ren, Z.; Grosz, S.; Masi, I.; and Liu, X. 2023. Hierarchical fine-grained image forgery detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3155–3165. Hu, J.; Liao, X.; Wang, W.; and Qin, Z. 2021. Detecting compressed deepfake videos in social networks using frame-temporality two-stream convolutional network.
IEEE Transactions on Circuits and Systems for Video Technology, 32(3): 1089–1102. Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7132–7141. Hu, X.; Zhang, Z.; Jiang, Z.; Chaudhuri, S.; Yang, Z.; and Nevatia, R. 2020. SPAN: Spatial pyramid attention network for image manipulation localization. In European Conference on Computer Vision (ECCV), 312–328. Springer. Huang, F.; Huang, J.; and Shi, Y. Q. 2010. Detecting Double JPEG Compression With the Same Quantization Matrix. IEEE Transactions on Information Forensics and Security, 5(4): 848–856. Huh, M.; Liu, A.; Owens, A.; and Efros, A. A. 2018. Fighting fake news: Image splice detection via learned self-consistency. In European Conference on Computer Vision (ECCV), 101–117. Kwon, M.-J.; Nam, S.-H.; Yu, I.-J.; Lee, H.-K.; and Kim, C. 2022. Learning JPEG compression artifacts for image manipulation detection and localization. International Journal of Computer Vision, 1875–1895. Li, H.; and Huang, J. 2019. Localization of deep inpainting using high-pass fully convolutional network. In IEEE/CVF International Conference on Computer Vision, 8301–8310. Liu, X.; Liu, Y.; Chen, J.; and Liu, X. 2022. PSCC-Net: Progressive spatio-channel correlation network for image manipulation detection and localization. IEEE Transactions on Circuits and Systems for Video Technology, 32(11): 7505–7517. Mahfoudi, G.; Tajini, B.; Retraint, F.; Morain-Nicolier, F.; Dugelay, J. L.; and Marc, P. 2019. DEFACTO: Image and face manipulation dataset. In 2019 27th European Signal Processing Conference (EUSIPCO). IEEE. Mareen, H.; Bussche, D. V.; Guillaro, F.; Cozzolino, D.; Van Wallendael, G.; Lambert, P.; and Verdoliva, L. 2022. Comprint: Image Forgery Detection and Localization using Compression Fingerprints. In International Conference on Pattern Recognition (ICPR). Marra, F.; Gragnaniello, D.; Verdoliva, L.; and Poggi, G. 2020.
A full-image full-resolution end-to-end-trainable CNN framework for image forgery detection. IEEE Access, 8: 133488–133502. Ng, T.-T.; Hsu, J.; and Chang, S.-F. 2009. Columbia image splicing detection evaluation dataset. DVMM Lab, Columbia University, CalPhotos Digital Library. Niu, Y.; Li, X.; Zhao, Y.; and Ni, R. 2021. Detection of Double JPEG Compression With the Same Quantization Matrix via Convergence Analysis. IEEE Transactions on Circuits and Systems for Video Technology, 32(5): 3279–3290. Novozamsky, A.; Mahdian, B.; and Saic, S. 2020. IMD2020: A large-scale annotated dataset tailored for detecting manipulated images. In IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 71–80. Park, J.; Cho, D.; Ahn, W.; and Lee, H.-K. 2018a. Double JPEG Detection in Mixed JPEG Quality Factors using Deep Convolutional Neural Network. In European Conference on Computer Vision (ECCV), 636–652. Park, J.; Cho, D.; Ahn, W.; and Lee, H.-K. 2018b. Double JPEG detection in mixed JPEG quality factors using deep convolutional neural network. In European Conference on Computer Vision (ECCV), 636–652. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32. Peng, P.; Sun, T.; Jiang, X.; Xu, K.; Li, B.; and Shi, Y. 2018. Detection of Double JPEG Compression with the Same Quantization Matrix Based on Convolutional Neural Networks. In Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 717–721. IEEE. Tolosana, R.; Vera-Rodriguez, R.; Fierrez, J.; Morales, A.; and Ortega-Garcia, J. 2020. Deepfakes and beyond: A survey of face manipulation and fake detection. Information Fusion, 64: 131–148.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. 2020. Deep high-resolution representation learning for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(10): 3349–3364. Wang, J.; Wu, Z.; Chen, J.; Han, X.; Shrivastava, A.; Lim, S.-N.; and Jiang, Y.-G. 2022. ObjectFormer for Image Manipulation Detection and Localization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2364–2373. Wen, B.; Zhu, Y.; Subramanian, R.; Ng, T.-T.; Shen, X.; and Winkler, S. 2016. COVERAGE: A novel database for copy-move forgery detection. In IEEE International Conference on Image Processing (ICIP). Wu, H.; Zhou, J.; Tian, J.; and Liu, J. 2022. Robust image forgery detection over online social network shared images. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13440–13449. Wu, Y.; AbdAlmageed, W.; and Natarajan, P. 2019. ManTra-Net: Manipulation Tracing Network for Detection and Localization of Image Forgeries With Anomalous Features. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9543–9552. Yang, C.; Li, H.; Lin, F.; Jiang, B.; and Zhao, H. 2020. Constrained R-CNN: A General Image Manipulation Detection Model. In IEEE International Conference on Multimedia and Expo (ICME), 1–6. IEEE. Yang, J.; Xie, J.; Zhu, G.; Kwong, S.; and Shi, Y.-Q. 2014. An Effective Method for Detecting Double JPEG Compression With the Same Quantization Matrix. IEEE Transactions on Information Forensics and Security, 9(11): 1933–1942. Yang, M.; He, D.; Fan, M.; Shi, B.; Xue, X.; Li, F.; Ding, E.; and Huang, J. 2021. DOLG: Single-Stage Image Retrieval with Deep Orthogonal Fusion of Local and Global Features. In IEEE/CVF International Conference on Computer Vision, 11772–11781.
Yousfi, Y.; and Fridrich, J. 2020. An Intriguing Struggle of CNNs in JPEG Steganalysis and the OneHot Solution. IEEE Signal Processing Letters, 27: 830–834.
TMFormer: Token Merging Transformer for Brain Tumor Segmentation with Missing Modalities Zheyu Zhang1,*, Gang Yang1,*, Yueyi Zhang1,2,†, Huanjing Yue4, Aiping Liu1, Yunwei Ou3,2,†, Jian Gong3,2, Xiaoyan Sun1,2 1University of Science and Technology of China, Hefei 230026, China 2Hefei Comprehensive National Science Center, Institute of Artificial Intelligence, Hefei 230088, China 3Beijing Tiantan Hospital, Capital Medical University, Beijing 100050, China 4Tianjin University, Tianjin 300072, China zhyuey@ustc.edu.cn, ouyunwei@sina.com Abstract Numerous techniques excel in brain tumor segmentation using multi-modal magnetic resonance imaging (MRI) sequences, delivering exceptional results. However, the prevalent absence of modalities in clinical scenarios hampers performance. Current approaches frequently resort to zero maps as substitutes for missing modalities, inadvertently introducing feature bias and redundant computations. To address these issues, we present the Token Merging transFormer (TMFormer) for robust brain tumor segmentation with missing modalities. TMFormer tackles these challenges by extracting and merging accessible modalities into more compact token sequences. The architecture comprises two core components: the Uni-modal Token Merging Block (UMB) and the Multi-modal Token Merging Block (MMB). The UMB enhances individual modality representation by adaptively consolidating spatially redundant tokens within and outside tumor-related regions, thereby refining token sequences for augmented representational capacity. Meanwhile, the MMB mitigates multi-modal feature fusion bias, exclusively leveraging tokens from present modalities and merging them into a unified multi-modal representation to accommodate varying modality combinations. Extensive experimental results on the BraTS 2018 and 2020 datasets demonstrate the superiority and efficacy of TMFormer compared to state-of-the-art methods when dealing with missing modalities.
Introduction Given the emergence of malignant brain tumors as a severe health threat, timely diagnosis is imperative for minimizing their impact. Brain tumor segmentation plays a pivotal role by identifying and delineating tumor boundaries in cerebral medical images (Havaei et al. 2017; Jia et al. 2020; She et al. 2023). Magnetic Resonance Imaging (MRI) sequences, including T1-weighted (T1), contrast-enhanced T1-weighted (T1ce), T2-weighted (T2), and Fluid Attenuation Inversion Recovery (FLAIR) modalities, are extensively employed for brain tumor segmentation. Several multi-modal techniques leverage these four MRI modalities to enhance brain tumor segmentation by integrating complementary information. *Equal contribution. †Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Comparison between catch-all methods and our proposed TMFormer. (a) Catch-all methods adopt zero maps to compensate for the missing modalities during fusion. (b) The t-SNE maps of catch-all methods are relatively sparse. (c) Our proposed TMFormer only utilizes available modalities. (d) The t-SNE map of the proposed TMFormer. Our proposed TMFormer reduces not only redundant information but also feature bias for cases of missing modalities. In (b) and (d), each point denotes a sample from BraTS 2020. Points of the same color between (b) and (d) belong to the same case of missing modalities, while different colors denote different cases. However, in clinical settings, missing modalities are common due to image corruption, varying acquisition protocols, and contrast allergies (Liu et al. 2021a; Tran et al. 2017). Such absence of modalities significantly impairs the segmentation performance of multi-modal methods. Various strategies have emerged to address diverse scenarios of missing modalities.
One approach entails training dedicated networks for every potential combination of available modalities (Wang et al. 2021b; Zhang et al. 2021), yet this leads to extensive training costs and deployment space requirements. Some researchers seek to synthesize absent modalities to create complete multi-modal sets (Wang et al. 2018; Shen et al. 2020), but the segmentation accuracy is bound by the quality of the generated modalities, which can introduce unexpected noise and artifacts.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7414

Predominantly, catch-all methods prevail, utilizing a single model to handle all modality combinations (Havaei et al. 2016; Chen et al. 2019; Zhou et al. 2021). Nonetheless, these approaches use zero maps to compensate for missing modalities, introducing dramatic variations, called ‘feature bias’, in multi-modal feature fusion, as well as superfluous computations in feature extraction and fusion for the absent modalities, as illustrated in Fig. 1. Hence, mitigating the impact of zero maps is crucial for enhancing brain tumor segmentation outcomes. To address these challenges, we introduce the Token Merging transFormer (TMFormer), designed to handle the diverse combinations of modalities encountered in brain tumor segmentation. TMFormer interprets these varied combinations of modalities as token sequences of varying and adaptable lengths. The architecture incorporates two pivotal components: the Uni-modal Token Merging Block (UMB), which focuses on information extraction from individual modalities, and the Multi-modal Token Merging Block (MMB), responsible for the fusion of multiple modalities. The UMB applies an adaptive token merging strategy that compresses spatially redundant tokens guided by tumor-related regions: it retains more representative tokens in tumor-related regions and merges tokens more aggressively in tumor-unrelated regions.
This streamlined token sequence is subsequently refined to enhance global representation. Meanwhile, the MMB exclusively fuses and augments tokens from available modalities. The tokens from modalities that exert a relatively minor impact on segmentation are merged into those with higher contributions to the segmentation process. The resultant multi-modal features are projected into a unified representative space, effectively mitigating feature biases that may arise when dealing with different scenarios of missing modalities. TMFormer thus handles the challenges of brain tumor segmentation efficiently by managing diverse modalities and their interactions. In summary, the contributions of our work are as follows:
• We introduce a Token Merging Transformer aiming to alleviate feature bias in scenarios involving missing modalities and reduce redundant computations.
• We propose the UMB to reduce the spatially redundant information and augment the global representation of available modalities.
• We propose the MMB to merge multi-modal tokens based on the respective contributions of modalities to segmentation and project the fused tokens into a unified representative space.
• We conduct extensive experiments on the BraTS 2018 and 2020 datasets and demonstrate the superiority and effectiveness of our TMFormer for brain tumor segmentation with missing modalities.

Related Work
Multi-modal brain tumor segmentation with missing modalities. Several methods have been developed to address brain tumor segmentation with missing modalities, which can be divided into three categories: 1) ‘dedicated’ methods that train a dedicated segmentation model for each possible combination of available modalities, 2) ‘generative’ methods that synthesize missing modalities and train a segmentation model with complete modalities, and 3) ‘catch-all’ methods that train a single model for various combinations of modalities.
For the dedicated methods, KD-Net (Hu et al. 2020) distills knowledge from the multi-modal network to the dedicated network. Wang et al. (2021b) adopt a co-training strategy between the multi-modal network and the dedicated network, aligning the feature distributions in latent space. Since there are fifteen possible combinations of modalities, these methods suffer from high training costs. For the generative methods, Hu et al. (2020) adopt a local-adaptive convolutional network to fuse the available modalities for generating the absent ones. Shen et al. (2020) disentangle modality sequences into a content code and a style code, and the missing modalities are generated based on the content code. M3AE (Liu et al. 2023) adopts a multi-modal masked auto-encoder and model inversion to build substitutes for the multi-modal sequences and performs segmentation on these substitutes. Nevertheless, as the generated modalities potentially contain noise and artifacts, it is challenging to acquire an accurate segmentation result based on the synthesized modalities. For the catch-all methods, HeMiS (Havaei et al. 2016) fuses the multi-modal features using their mean and variance and obtains the segmentation from the fusion. RFNet (Ding, Yu, and Yang 2021) conducts multi-modal fusion according to the tumor regions, and UNet-MFI (Zhao, Yang, and Sun 2022) builds a graph for fusing multi-modal features. Zhang et al. (2022) propose the hybrid CNN-Transformer architecture termed mmFormer, which utilizes multi-head self-attention to fuse multi-modal features only at the smallest scale. However, due to their convolutional operations defined by fixed-size kernels, these methods must use zero maps to compensate for the missing modalities during subsequent processing. This inadvertently introduces feature bias. Besides, the feature extraction and fusion for zero maps lead to unnecessary computations.
In contrast, we treat varying modality combinations as variable-length token sequences, which avoids the involvement of zero maps.
Efficient vision transformers. Since transformers have quadratic complexity, many works aim to improve their efficiency from different aspects. Some focus on the attention mechanism (Huang et al. 2019; Liu et al. 2021b; Dong et al. 2022). Swin Transformer (Liu et al. 2021b) adopts self-attention in shifted local windows. Huang et al. (2019) and Dong et al. (2022) propose criss-cross and cross-shaped windows for computing attention, respectively. Besides, PVT (Wang et al. 2021a) employs a pyramid structure with downsampled key and value tokens. Reducing the number of tokens processed in the network is an alternative way to improve efficiency. Several works attempt to prune less informative tokens based on predicted importance (Rao et al. 2021) or token similarity (Liang et al. 2022; Fayyaz et al. 2022; Kong et al. 2023). Bolya et al. (2022) propose to merge adjacent similar tokens to accelerate the inference of ViT. Liang et al. (2022) fuse inattentive tokens that contribute less to the class token. The mentioned methods have achieved promising performance for image classification. Recently, Lu, de Geus, and Dubbelman (2023) propose to share the values of tokens belonging to the same class for semantic segmentation. However, to the best of our knowledge, reducing redundant tokens remains unexplored within brain tumor segmentation, particularly in scenarios involving missing modalities. It is non-trivial to propose a token-reducing strategy, as 3D MRI has redundant information in intra-modal and inter-modal spatial dimensions.

Figure 2: Overview of the proposed TMFormer, which is composed of four convolution-tokenization encoders, four UMBs, an MMB, an unmerging block, and a decoder.

Method
In this section, we first briefly explain the motivation.
Then we illustrate the overall architecture of our TMFormer in Fig. 2 and its components. Finally, we describe the corresponding optimization loss.

Motivation
The prevalent catch-all methods utilize zero maps as substitutes for absent modalities, which inevitably introduces feature bias and wastes computational resources. Drawing inspiration from the Transformer's proficiency in handling variable-length token sequences, we propose to employ the Transformer architecture to deal with diverse scenarios of missing modalities. The different combinations of available modalities are regarded as variable-length token sequences, while substitutes are no longer utilized. Given the considerable length and potential redundancy of 3D medical image token sequences, we merge modality tokens along intra-modal and inter-modal spatial dimensions. Furthermore, to alleviate feature bias across distinct scenarios of absent modalities, we model global dependencies within the compact token sequences and project them into a unified multi-modal representative space.

The Overall Architecture
A full-modal complete image set consists of four different modalities, i.e., the FLAIR, T1ce, T1, and T2 modalities. In the task of missing modalities, multi-modal data x is given with the dimension M × D × H × W, where D, H, and W are the depth, height, and width of the image, respectively, and M is the number of available modalities. Its multi-modal code is represented by h = {h1, h2, h3, h4}, where hi ∈ {0, 1} indicates whether the corresponding modality is available. We illustrate the overall architecture in Fig. 2. The TMFormer employs a multi-scale network structure, facilitating the integration of information across distinct hierarchical levels. At each scale, we first utilize encoders to project the data x into token sequences. Then, we adopt the UMB to decrease the spatial redundancy of the token sequences and model the intra-modal global relationships for each available modality. After extracting features, we use the MMB to decrease the redundancy among modalities and model the inter-modal global relationship. Subsequently, the fused multi-modal token sequences are unmerged and re-arranged to their initial shape. Finally, the fused features are sent to convolutional decoders to yield the final segmentation result. Notably, as depicted in Fig. 2, the encoder branches marked as missing modalities by hi = 0 are not involved in computations, which differs from the other methods dedicated to missing modalities (Yang et al. 2022; Zhang et al. 2022).

Figure 3: The token merging process of UMB and MMB. (a) The partition for getting A_uni and B_uni in the UMB. (b) The partition for getting A_mul and B_mul in the MMB. (c) The token merging process after dividing A and B in the UMB and MMB. In (a) and (b), red tokens are sampled into set A and blue tokens into set B. In (c), uni-modal tokens from A_uni are merged into B_uni in the UMB, while multi-modal tokens from A_mul are merged into B_mul in the MMB. The most similar tokens are merged as mixed tokens, while the dissimilar tokens are preserved as red and blue tokens.

Convolution-tokenization Encoder
We adopt individual encoders to extract local features for the available modalities. Similar to the encoder design of U-Net (Ronneberger, Fischer, and Brox 2015), we stack four convolutional blocks to extract multi-scale features, and each convolutional block consists of three convolutional layers. At each scale, the local features of each modality are projected into a token sequence x_tok ∈ R^{(D/2^{l−1} × H/2^{l−1} × W/2^{l−1}) × C} through a patch embedding layer, where l indicates the scale, D/2^{l−1} × H/2^{l−1} × W/2^{l−1} denotes the length of the token sequence, and C is the channel dimension. We set the patch size to 1 × 1 × 1 since pixel-level information is essential for segmentation.
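With a 1 × 1 × 1 patch size, the tokenization step amounts to flattening every voxel of a feature map into one token. The sketch below is a minimal numpy stand-in (the function name `tokenize` is ours, and the learned linear projection of the actual patch embedding layer is omitted):

```python
import numpy as np

def tokenize(feat):
    """Flatten a (C, D, H, W) feature map into a (D*H*W, C) token sequence.
    With a 1x1x1 patch size every voxel becomes one token; the learned
    projection of the real patch embedding layer is omitted here."""
    C = feat.shape[0]
    return feat.reshape(C, -1).T  # (D*H*W, C)

# A toy per-modality feature map at one scale: C=32 channels over a 4x8x8 grid.
feat = np.random.rand(32, 4, 8, 8)
tokens = tokenize(feat)
print(tokens.shape)  # (256, 32)
```

Each available modality is tokenized independently, so a missing modality simply contributes no token sequence at all.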
Uni-modal Merging Block (UMB)
Due to the inherently local nature of convolutional networks, we aim to improve the uni-modal feature representation by capturing long-range dependencies. For each modality, the input to the UMB is x_tok ∈ R^{(D/2^{l−1} × H/2^{l−1} × W/2^{l−1}) × C}. At the first scale, the length of the sequence is D × H × W, which would lead to high computational complexity. We therefore first merge the token sequence with the UMB, which transforms the sequence length from D/2^{l−1} × H/2^{l−1} × W/2^{l−1} to N_{l−1}, i.e.,

x_uni = UTM(x_tok | Seg_coarse),   (1)

where UTM denotes the uni-modal token merging process, and Seg_coarse is the coarse segmentation map predicted at the previous scale. Since segmentation only focuses on the tumor region, we use Seg_coarse to identify tumor-related and tumor-unrelated regions and merge spatial tokens adaptively. Based on Seg_coarse, the uni-modal token merging process is as follows:
1) Partitioning x_tok into parts such that each part corresponds to a cubic window in the 3D volume;
2) Sampling tokens from each part, with more tokens sampled from tumor-related parts and fewer from tumor-unrelated parts;
3) Putting the sampled tokens into B_uni, with the remaining tokens forming A_uni;
4) Calculating the similarity scores between A_uni and B_uni;
5) Selecting the most similar token in B_uni for each token in A_uni, which builds the similar token pairs and records their similarity scores;
6) Merging the top N^{uni}_{l−1} most similar token pairs, sorted by similarity score;
7) Appending the dissimilar tokens of A_uni to B_uni; the number of dissimilar tokens is N_{l−1} − N^{uni}_{l−1}.
Notably, a subset of tumor-unrelated tokens is also allocated to B_uni, since these tokens, estimated from the coarse segmentation map, may include false negatives that should be preserved. The merging process is shown in Fig. 3a and Fig. 3c.
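The steps above can be sketched in a few lines of numpy. This is a toy illustration, not the paper's implementation: the function name and arguments are ours, the sampling bias is reduced to "tumor tokens first" rather than true per-window sampling, and merging by averaging is one simple choice of reduction:

```python
import numpy as np

def uni_modal_token_merge(tokens, tumor_mask, n_sample, n_merge):
    # Steps 1-3 (simplified): bias the sampled set B toward tumor-related
    # tokens; the remaining tokens form set A.
    order = np.argsort(~tumor_mask, kind="stable")  # tumor tokens first
    b_idx, a_idx = order[:n_sample], order[n_sample:]
    A, B = tokens[a_idx], tokens[b_idx].copy()
    # Steps 4-5: cosine similarity; best partner in B for every token in A.
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    Bn = B / np.linalg.norm(B, axis=1, keepdims=True)
    sim = An @ Bn.T
    best, score = sim.argmax(axis=1), sim.max(axis=1)
    # Step 6: merge the n_merge most similar pairs into B (here: by averaging).
    top = np.argsort(-score)[:n_merge]
    for i in top:
        B[best[i]] = (B[best[i]] + A[i]) / 2
    # Step 7: append the remaining (dissimilar) tokens of A.
    keep = np.setdiff1d(np.arange(len(A)), top)
    return np.concatenate([B, A[keep]], axis=0)

tokens = np.random.rand(16, 8)                    # 16 tokens, C=8
mask = np.zeros(16, dtype=bool); mask[:5] = True  # 5 tumor-related tokens
out = uni_modal_token_merge(tokens, mask, n_sample=8, n_merge=4)
print(out.shape)  # (12, 8): 16 tokens reduced by the 4 merged pairs
```

The output length is the input length minus the number of merged pairs, matching the role of N_{l−1} above.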
After this process, we establish global feature relationships for the merged token sequence x_uni ∈ R^{N_{l−1} × C} via multi-head self-attention (MSA) and multi-layer perceptrons (MLP) to fully mine the feature information within the modality. The corresponding outputs are:

x_MSA = MSA(LN(x_uni)) + x_uni,   (2)
x_MLP = MLP(LN(x_MSA)) + x_MSA,   (3)

where LN(·) denotes layer normalization, and x_MLP is the final output of the UMB.

Multi-modal Merging Block (MMB)
After feature extraction by the UMB, we obtain the token sequences X_tok ∈ R^{M × N_{l−1} × C} for the M available modalities. To enhance the multi-modal representation capacity, we propose the MMB to model the long-range dependencies among different modalities. In the MMB, we first merge the inter-modal redundant information, i.e.,

x_mul = MTM(X_tok | h),   (4)

where MTM(·) denotes the multi-modal token merging process and h is the modality code.
Different from the UTM, the partition of the MTM is based on the observation that the FLAIR and T1ce modalities have a more pronounced impact on tumor segmentation (Ding, Yu, and Yang 2021; Yang et al. 2022). Consequently, we construct the multi-modal token sequences X_tok in the order [x^FLAIR_tok, x^T1ce_tok, x^T1_tok, x^T2_tok], maintaining sequential continuity even in the absence of modalities. For example, if the T1ce modality is missing, the multi-modal token sequences X_tok are organized as [x^FLAIR_tok, x^T1_tok, x^T2_tok], without adding zero maps. We partition the multi-modal token sequences X_tok into two parts, set A_mul and set B_mul, as follows:

A_mul = X_tok[0:1],  B_mul = X_tok[2:M),  if M ≥ 3,
A_mul = X_tok[0],    B_mul = X_tok[1],    if M = 2,
A_mul = X_tok,       B_mul = None,        if M = 1.   (5)

To reduce the inter-modal redundancy, we fuse the tokens of A_mul into the highly similar tokens of B_mul, concurrently appending the dissimilar tokens of A_mul to B_mul.
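The case split of Eq. (5) translates directly into code. A minimal sketch (the function name `partition_modalities` is ours, the per-modality sequences are represented as a Python list already ordered as [FLAIR, T1ce, T1, T2] with missing entries dropped, and Eq. (5)'s inclusive index ranges become Python half-open slices):

```python
def partition_modalities(X_tok):
    """Split the ordered list of available per-modality token sequences
    into the sets A_mul and B_mul of Eq. (5)."""
    M = len(X_tok)
    if M >= 3:
        return X_tok[0:2], X_tok[2:M]   # A = first two modalities, B = the rest
    if M == 2:
        return X_tok[0:1], X_tok[1:2]
    return X_tok, None                  # M = 1: nothing to merge

print(partition_modalities(["FLAIR", "T1ce", "T1", "T2"]))
# (['FLAIR', 'T1ce'], ['T1', 'T2'])
```

Note that the split is positional: when a leading modality is missing, the next one in the fixed order moves into set A.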
In summary, the MTM process is as follows:
1) Constructing the multi-modal token sequences X_tok;
2) Partitioning X_tok into A_mul and B_mul based on Eq. 5;
3) Calculating the similarity scores between A_mul and B_mul;
4) Selecting the most similar token in B_mul for each token in A_mul;
5) Merging the top N^{mul}_{l−1} most similar token pairs;
6) Appending the dissimilar tokens of A_mul to B_mul.
The merging process is shown in Fig. 3b and Fig. 3c. If only one modality is available, we send it to the following layers without merging. We then adopt MSA to facilitate information exchange between modalities. Finally, we utilize MLP to project the fused feature into a unified representation space for the subsequent segmentation, which improves the robustness of segmentation across different scenarios of missing modalities. Similarly, the process is presented as follows:

x_MSA = MSA(LN(x_mul)) + x_mul,   (6)
x_MLP = MLP(LN(x_MSA)) + x_MSA.   (7)

Unmerging Block and Decoder
We employ the unmerging block to restore the initial length of the token sequence and re-arrange it into feature maps of size D/2^{l−1} × H/2^{l−1} × W/2^{l−1}. We use one multi-scale decoder to gradually restore the spatial resolution, smoothly transitioning from the high-level latent space to the original mask space. In the unmerging block, the tokens that were merged share the identical value as the merged token, effectively reinstating the length of the enhanced token sequences to their original state before merging. In the decoder, the decoder blocks employ the feature maps to generate coarse-to-fine segmentation maps within our four-level structure. The coarse segmentation map plays a vital role in guiding the UTM process within our UMB.

Optimization Loss
Following previous works (Dou et al.
2017; Ding, Yu, and Yang 2021; Zhao, Yang, and Sun 2022), we employ the weighted cross-entropy loss L_CE and the Dice loss L_Dice to optimize our TMFormer at each scale, which is defined as:

L = L_CE(y, ŷ) + L_Dice(y, ŷ),   (8)

where y and ŷ are the segmentation prediction and the ground truth, respectively.

Figure 4: Visualization of token merging: (a) input, (b) similarity map, (c) merged visualization. (a) depicts the initial input, which has D × H × W tokens. (b) illustrates token similarities, where tokens of the same color indicate their merging. (c) shows the visualization of merged tokens, whose length is near 1/64 of the initial tokens.

Experiments and Results
Datasets & Evaluation Metrics. The proposed method is evaluated on two publicly available multi-modal brain tumor segmentation (BraTS) datasets: BraTS 2018 (Bakas et al. 2018) and BraTS 2020 (Andres et al. 2020). The BraTS 2018 and 2020 datasets include 285 and 369 cases with publicly available ground truth, respectively. For the BraTS 2018 dataset, we split it into 199 : 29 : 57 for training, validation, and testing, respectively. Additionally, we use a three-fold validation with the same split as (Dorent et al. 2019). For the BraTS 2020 dataset, following (Ding, Yu, and Yang 2021), we randomly split it into 219 : 50 : 100. The BraTS datasets consist of four different modalities: FLAIR, T1ce, T1, and T2. Each of them captures different properties of the brain tumor subregions: the GD-enhancing tumor (ET), the peritumoral edema (ED), and the necrotic and non-enhancing tumor core (NCR/NET). Different subregions of brain tumors are combined into three nested subregions: the whole tumor (WT), the tumor core (TC), and the enhancing tumor (ET). Following (Ding, Yu, and Yang 2021; Zhao, Yang, and Sun 2022), we adopt the Dice Similarity Coefficient (DSC) to evaluate the segmentation performance.
Implementation Details.
We implement our TMFormer in the PyTorch framework (1.13) and train all parameters on four NVIDIA GeForce RTX 3090 GPUs for 500 epochs. Our method is optimized by the ADAM optimizer with a batch size of 4. The initial learning rate is 2 × 10^−4 with a weight decay of 1 × 10^−5. In the training stage, we randomly crop each volume to a fixed size of 80 × 80 × 80 voxels, further augmented with random flip, random rotation, and random intensity shift. In the inference stage, we crop the 240 × 240 × 155 volumes into 128 × 128 × 128 patches with an overlap rate of 0.5.

Motivation Verification. As illustrated in the motivation, we first regard the different combinations of available modalities as variable-length token sequences, avoiding unnecessary computations on missing modalities. As shown in Tab. 1, our model presents variable GFLOPs for different combinations of modalities. Conversely, the other methods exhibit fixed GFLOPs and consume redundant computations on zero maps for the missing modalities.

           | HeMiS | U-HVED | RFNet  | UNet-MFI | mmFormer | M3AE  | Ours (1 Mod.) | Ours (2 Mods.) | Ours (3 Mods.) | Ours (4 Mods.)
Param. (M) | 0.57  | 1.25   | 8.98   | 34.12    | 36.65    | 40.42 | 8.93          | 8.93           | 8.93           | 8.93
GFLOPs     | 2.27  | 4.58   | 102.28 | 499.52   | 30.23    | 36.14 | 57.24         | 72.62          | 88.00          | 103.37

Table 1: The parameters and GFLOPs of the compared methods on the BraTS 2020 dataset. ‘1/2/3/4 Mods.’ means that we send 1/2/3/4 available modalities into our model, respectively.

Secondly, we visualize an example of merging uni-modal tokens in Fig. 4. We obtain the similarity map for merging in the feature space and apply it to the input image for better understanding. The merged image demonstrates balanced preservation of edge details without excessive smoothing. Thus, the UMB preserves the essential edges for brain tumor segmentation while decreasing the redundant information. Finally, as shown in Fig. 1, we obtain fused multi-modal features, which further go through high-dimensional embedding pooling followed by dimensionality reduction for 2D visualization via t-SNE on the BraTS 2020 dataset. We use 100 samples to simulate 4 cases of missing modalities, covering 1/2/3/4 available modalities. After processing by our MMB, these samples cluster more densely in Fig. 1d, in contrast to the sparser distribution of multi-modal features fused with zero maps in Fig. 1b. This observation demonstrates the capability of our MMB to unify multi-modal features, resulting in a consistent representation across varying modality combinations.

FLAIR | •     ◦     ◦     ◦     •     •     •     ◦     ◦     ◦     •     •     •     ◦     •     |
T1ce  | ◦     •     ◦     ◦     •     ◦     ◦     •     •     ◦     •     •     ◦     •     •     |
T1    | ◦     ◦     •     ◦     ◦     •     ◦     •     ◦     •     •     ◦     •     •     •     |
T2    | ◦     ◦     ◦     •     ◦     ◦     •     ◦     •     •     ◦     •     •     •     •     | AVG

WT
1     | 71.60 67.71 68.96 68.19 69.17 68.67 69.83 69.01 69.78 69.40 70.21 71.28 70.73 71.58 72.06 | 69.88
2     | 69.85 46.82 46.77 54.03 61.45 58.25 64.50 62.91 65.76 64.29 66.99 69.70 68.38 70.35 71.41 | 62.76
3     | 86.42 77.34 76.46 86.21 89.55 89.30 89.35 81.00 87.45 87.95 90.39 90.20 90.42 88.59 90.77 | 86.76
4     | 82.27 73.18 72.10 82.45 83.64 84.34 84.85 77.30 83.44 83.52 85.45 85.70 85.85 84.20 84.93 | 82.21
5     | 82.40 74.25 74.37 83.07 84.54 84.61 85.82 77.98 84.05 84.00 85.34 86.11 86.22 84.64 86.38 | 82.92
6     | 86.53 73.85 76.71 86.09 89.48 89.38 89.25 78.11 87.37 87.20 89.99 90.18 90.42 88.61 90.56 | 86.25
Ours  | 87.45 78.53 78.94 86.46 89.67 89.64 89.98 81.97 88.01 87.75 90.23 90.53 90.51 88.54 90.83 | 87.27

TC
1     | 53.43 51.41 51.56 51.11 51.70 51.08 51.85 51.88 52.35 51.51 52.95 53.76 52.97 54.38 55.03 | 52.46
2     | 34.62 35.51 27.30 37.67 42.15 38.26 43.41 44.93 47.53 44.97 49.13 51.30 49.40 52.72 54.17 | 43.53
3     | 65.04 82.37 64.31 68.47 84.69 71.45 72.62 83.15 84.06 72.11 84.71 84.70 74.28 84.11 84.74 | 77.39
4     | 63.94 77.63 59.38 68.05 79.92 68.23 70.72 77.61 80.09 70.21 80.03 80.94 71.40 80.75 81.28 | 74.01
5     | 66.19 77.96 61.17 69.18 80.36 69.58 71.55 79.93 80.79 70.90 80.18 81.31 72.02 81.12 81.22 | 74.90
6     | 68.04 81.39 66.00 70.27 82.01 73.82 74.95 82.39 83.01 72.54 82.44 83.06 75.09 84.06 84.40 | 77.56
Ours  | 70.19 82.59 67.12 71.84 84.62 74.65 74.76 83.13 84.24 73.33 84.69 84.64 75.17 84.00 84.88 | 78.66

ET
1     | 43.77 42.41 41.59 41.45 41.83 40.29 41.19 42.08 42.39 41.00 43.67 44.16 42.95 45.27 46.33 | 42.69
2     | 12.88 24.94 7.27  24.26 30.02 21.95 29.40 33.64 36.18 32.12 39.39 40.91 38.09 43.18 45.33 | 30.64
3     | 40.47 74.27 37.51 43.59 76.45 43.81 46.99 75.22 73.94 46.37 77.01 76.38 48.95 76.38 76.64 | 60.93
4     | 39.70 69.42 29.38 46.00 70.13 40.06 48.69 69.25 72.32 45.71 71.28 70.88 46.55 72.00 71.41 | 57.52
5     | 40.47 68.91 33.97 45.61 69.81 43.63 48.09 71.10 70.72 45.92 70.08 71.60 48.38 70.65 71.36 | 58.02
6     | 40.49 72.43 39.93 45.97 74.66 43.20 47.30 75.42 76.81 46.63 75.94 77.08 48.19 77.40 78.00 | 61.30
Ours  | 42.28 76.21 38.21 46.94 76.37 48.20 51.67 78.68 78.25 48.81 78.64 78.45 51.23 78.51 78.98 | 63.43

Table 2: Performance comparison (DSC%) with SOTA methods, including 1 HeMiS (Havaei et al. 2016), 2 U-HVED (Dorent et al. 2019), 3 RFNet (Ding, Yu, and Yang 2021), 4 UNet-MFI (Zhao, Yang, and Sun 2022), 5 mmFormer (Zhang et al. 2022), and 6 M3AE (Liu et al. 2023) on the BraTS 2020 dataset. Available and missing modalities are denoted by • and ◦, respectively.

Comparison with the state-of-the-art (SOTA) methods on missing modalities.
To demonstrate the superiority of our method, we compare our TMFormer with six state-of-the-art methods on different cases of missing MRI modalities. The compared methods include HeMiS (Havaei et al. 2016), U-HVED (Dorent et al. 2019), RFNet (Ding, Yu, and Yang 2021), UNet-MFI (Zhao, Yang, and Sun 2022), mmFormer (Zhang et al. 2022), and M3AE (Liu et al. 2023). For a fair comparison, all methods are trained under their recommended hyper-parameters with the same dataset split. As shown in Tab. 2, our method achieves preferable results for most combinations of missing modalities. We achieve improvements of 0.5%, 1.1%, and 2.5% over the second-ranked method on the average DSC for WT, TC, and ET. In Fig. 5, we provide segmentation visualizations showing that our method yields more accurate segmentation results for different combinations of modalities.

Figure 5: Segmentation results of different methods with various available modalities on the BraTS 2020 dataset.

Ablation Study. We evaluate the proposed components on the BraTS 2020 dataset, employing the average DSC to measure the performance on WT, TC, and ET. Ablative experiments are partitioned into three parts, i.e., the UMB, the MMB, and the stages, as depicted in Tab. 3.

              | UMB                                       | MMB               | #Stage
Configuration | Pooling Sconv  SA     SWC    SRW  SRW+SAR | SA    DO    CO    | 1     2     3     4
DSC WT        | 82.92   82.21  85.11  86.00  87.23  87.27 | 87.04 86.98 87.27 | 86.81 87.12 87.21 87.27
    TC        | 74.90   74.01  72.15  74.38  78.97  78.66 | 77.91 78.23 78.66 | 77.81 78.04 78.04 78.66
    ET        | 53.68   53.73  48.00  49.43  62.20  63.43 | 62.89 62.45 63.43 | 60.06 62.53 62.67 63.43

Table 3: Ablation results of the proposed components on the BraTS 2020 dataset.

We assess each part while keeping the others constant. To evaluate the UMB, we use ‘Pooling’ or ‘Sconv’ for merging uni-modal tokens within cubic windows. ‘Pooling’ computes the mean within each window, while ‘Sconv’ employs a convolution with a stride equal to the kernel size.
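The ‘Pooling’ baseline reduces tokens purely locally: every cubic window is collapsed to its mean. A minimal numpy sketch of this ablation variant (the function name `window_pool_tokens` is ours; the learned ‘Sconv’ variant would replace the mean with a strided convolution):

```python
import numpy as np

def window_pool_tokens(feat, w=2):
    """'Pooling' ablation baseline: average the tokens inside each w x w x w
    cubic window, shrinking a (C, D, H, W) map to (D*H*W)/w^3 tokens."""
    C, D, H, W = feat.shape
    f = feat.reshape(C, D // w, w, H // w, w, W // w, w)
    pooled = f.mean(axis=(2, 4, 6))   # (C, D/w, H/w, W/w)
    return pooled.reshape(C, -1).T    # token sequence of length (D*H*W)/w^3

feat = np.random.rand(16, 4, 4, 4)    # 64 voxels per channel
tokens = window_pool_tokens(feat, w=2)
print(tokens.shape)  # (8, 16): 2x2x2 windows reduce 64 tokens to 8
```

Unlike the global, similarity-driven merging of ‘SRW+SAR’, this purely local reduction cannot merge distant redundant tokens, which is consistent with the gap observed in Tab. 3.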
In contrast, ‘SA’, ‘SWC’, ‘SRW’, and ‘SRW+SAR’ perform global token merging, albeit with token sampling within cubic windows. ‘SA’ means alternate token sampling, ‘SWC’ stands for central token sampling, and ‘SRW’ signifies random token sampling. Both ‘SA’ and ‘SWC’ involve fixed sampled tokens. ‘SAR’ denotes adaptive token sampling, and ‘SRW+SAR’ is our chosen approach in the UMB. From Tab. 3, we find that global merging outperforms local merging. Random sampling with the guidance of a coarse segmentation map also boosts the performance from 62.20 to 63.43 on ET. To assess our MMB, we replace ‘CO’ with both ‘SA’ and ‘DO’. In the MMB, ‘SA’ involves alternately sampling tokens from the multi-modal sequences, without regard to the modality of the tokens. ‘DO’ constructs the multi-modal sequences X_tok in a different order, i.e., [x^T1_tok, x^T2_tok, x^FLAIR_tok, x^T1ce_tok]. ‘CO’ represents constructing tokens in our proposed order, i.e., [x^FLAIR_tok, x^T1ce_tok, x^T1_tok, x^T2_tok]. The results show that our MMB improves the multi-modal feature fusion. We verify the efficacy of TMFormer's multi-scale design by progressively integrating our UMB and MMB into each scale. The outcomes demonstrate that incorporating our proposed blocks within a multi-scale framework yields improvements in performance. More results are in the Appendix.

Conclusion
In this paper, we introduce a novel Token Merging transFormer (TMFormer) to tackle the challenge of missing modalities in brain tumor segmentation. It addresses the issues caused by using zero maps as substitutes, which lead to feature bias and redundant computations. TMFormer treats the diverse combinations of modalities as variable-length token sequences, considering only the available modalities. Our TMFormer comprises two pivotal modules: the UMB for uni-modal feature extraction and the MMB for multi-modal feature fusion.
The UMB initially reduces spatially redundant tokens guided by a coarse segmentation map and models global dependencies for each available modality. The MMB merges uni-modal tokens based on the modalities' contribution order to segmentation. The fused multi-modal token sequence is then projected into a unified representation to alleviate feature bias across different combinations of modalities. These components collectively mitigate feature bias and avoid unnecessary computations for missing modalities. Extensive experiments show the proficiency of our method.

Acknowledgments
This work was in part supported by the National Natural Science Foundation of China under grants 62032006 and 62021001.

References
Andres, E. A.; Fidon, L.; Vakalopoulou, M.; Lerousseau, M.; Carré, A.; Sun, R.; Klausner, G.; Ammari, S.; Benzazon, N.; Reuzé, S.; et al. 2020. Dosimetry-driven quality measure of brain pseudo computed tomography generated from deep learning for MRI-only radiation therapy treatment planning. International Journal of Radiation Oncology* Biology* Physics, 108(3): 813–823.
Bakas, S.; Reyes, M.; Jakab, A.; Bauer, S.; Rempfler, M.; Crimi, A.; Shinohara, R. T.; Berger, C.; Ha, S. M.; Rozycki, M.; et al. 2018. Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge. arXiv preprint arXiv:1811.02629.
Bolya, D.; Fu, C.-Y.; Dai, X.; Zhang, P.; Feichtenhofer, C.; and Hoffman, J. 2022. Token merging: Your ViT but faster. arXiv preprint arXiv:2210.09461.
Chen, C.; Dou, Q.; Jin, Y.; Chen, H.; Qin, J.; and Heng, P.-A. 2019. Robust multimodal brain tumor segmentation via feature disentanglement and gated fusion.
In Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part III 22, 447–456. Springer.
Ding, Y.; Yu, X.; and Yang, Y. 2021. RFNet: Region-aware fusion network for incomplete multi-modal brain tumor segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3975–3984.
Dong, X.; Bao, J.; Chen, D.; Zhang, W.; Yu, N.; Yuan, L.; Chen, D.; and Guo, B. 2022. CSWin Transformer: A general vision transformer backbone with cross-shaped windows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12124–12134.
Dorent, R.; Joutard, S.; Modat, M.; Ourselin, S.; and Vercauteren, T. 2019. Hetero-modal variational encoder-decoder for joint modality completion and segmentation. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part II 22, 74–82. Springer.
Dou, Q.; Yu, L.; Chen, H.; Jin, Y.; Yang, X.; Qin, J.; and Heng, P.-A. 2017. 3D deeply supervised network for automated segmentation of volumetric medical images. Medical Image Analysis, 41: 40–54.
Fayyaz, M.; Koohpayegani, S. A.; Jafari, F. R.; Sengupta, S.; Joze, H. R. V.; Sommerlade, E.; Pirsiavash, H.; and Gall, J. 2022. Adaptive token sampling for efficient vision transformers. In European Conference on Computer Vision, 396–414. Springer.
Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.-M.; and Larochelle, H. 2017. Brain tumor segmentation with deep neural networks. Medical Image Analysis, 35: 18–31.
Havaei, M.; Guizard, N.; Chapados, N.; and Bengio, Y. 2016. HeMIS: Hetero-modal image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, October 17–21, 2016, Proceedings, Part II 19, 469–477. Springer.
Hu, M.; Maillard, M.; Zhang, Y.; Ciceri, T.; La Barbera, G.; Bloch, I.; and Gori, P. 2020. Knowledge distillation from multi-modal to mono-modal segmentation networks. In Medical Image Computing and Computer Assisted Intervention–MICCAI, 772–781. Springer. Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; and Liu, W. 2019. Ccnet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, 603–612. Jia, H.; Xia, Y.; Cai, W.; and Huang, H. 2020. Learning highresolution and efficient non-local features for brain glioma segmentation in MR images. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part IV 23, 480–490. Springer. Kong, Z.; Ma, H.; Yuan, G.; Sun, M.; Xie, Y.; Dong, P.; Meng, X.; Shen, X.; Tang, H.; Qin, M.; et al. 2023. Peeling the onion: Hierarchical reduction of data redundancy for efficient vision transformer training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 8360–8368. Liang, Y.; Ge, C.; Tong, Z.; Song, Y.; Wang, J.; and Xie, P. 2022. Not all patches are what you need: Expediting vision transformers via token reorganizations. arXiv preprint arXiv:2202.07800. Liu, H.; Wei, D.; Lu, D.; Sun, J.; Wang, L.; and Zheng, Y. 2023. M3AE: Multimodal Representation Learning for Brain Tumor Segmentation with Missing Modalities. In Proceedings of the AAAI Conference on Artificial Intelligence, 1657–1665. Liu, Y.; Fan, L.; Zhang, C.; Zhou, T.; Xiao, Z.; Geng, L.; and Shen, D. 2021a. Incomplete multi-modal representation learning for Alzheimer’s disease diagnosis. Medical Image Analysis, 69: 101953. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021b. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, 10012–10022. 
Lu, C.; de Geus, D.; and Dubbelman, G. 2023. Contentaware Token Sharing for Efficient Semantic Segmentation with Vision Transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 23631–23640. Rao, Y.; Zhao, W.; Liu, B.; Lu, J.; Zhou, J.; and Hsieh, C.-J. 2021. Dynamicvit: Efficient vision transformers with dynamic token sparsification. Advances in neural information processing systems, 34: 13937–13949. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7421 Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, 234–241. Springer. She, D.; Zhang, Y.; Zhang, Z.; Li, H.; Yan, Z.; and Sun, X. 2023. EoFormer: Edge-Oriented Transformer for Brain Tumor Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 333–343. Springer. Shen, L.; Zhu, W.; Wang, X.; Xing, L.; Pauly, J. M.; Turkbey, B.; Harmon, S. A.; Sanford, T. H.; Mehralivand, S.; Choyke, P. L.; et al. 2020. Multi-domain image completion for random missing input data. IEEE transactions on medical imaging, 40(4): 1113–1122. Tran, L.; Liu, X.; Zhou, J.; and Jin, R. 2017. Missing modalities imputation via cascaded residual autoencoder. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1405–1414. Wang, W.; Xie, E.; Li, X.; Fan, D.-P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; and Shao, L. 2021a. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF international conference on computer vision, 568–578. Wang, Y.; Zhang, Y.; Liu, Y.; Lin, Z.; Tian, J.; Zhong, C.; Shi, Z.; Fan, J.; and He, Z. 2021b. Acn: Adversarial co-training network for brain tumor segmentation with missing modalities. 
In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part VII 24, 410–420. Springer. Wang, Y.; Zhou, L.; Yu, B.; Wang, L.; Zu, C.; Lalush, D. S.; Lin, W.; Wu, X.; Zhou, J.; and Shen, D. 2018. 3D autocontext-based locality adaptive multi-modality GANs for PET synthesis. IEEE Transactions on Medical Imaging, 38(6): 1328–1339. Yang, Q.; Guo, X.; Chen, Z.; Woo, P. Y.; and Yuan, Y. 2022. D 2-Net: Dual disentanglement network for brain tumor segmentation with missing modalities. IEEE Transactions on Medical Imaging, 41(10): 2953–2964. Zhang, Y.; He, N.; Yang, J.; Li, Y.; Wei, D.; Huang, Y.; Zhang, Y.; He, Z.; and Zheng, Y. 2022. mmformer: Multimodal medical transformer for incomplete multimodal learning of brain tumor segmentation. In International Conference on Medical Image Computing and ComputerAssisted Intervention, 107–117. Springer. Zhang, Y.; Yang, J.; Tian, J.; Shi, Z.; Zhong, C.; Zhang, Y.; and He, Z. 2021. Modality-aware mutual learning for multimodal medical image segmentation. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part I 24, 589–599. Springer. Zhao, Z.; Yang, H.; and Sun, J. 2022. Modality-adaptive feature interaction for brain tumor segmentation with missing modalities. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 183–192. Springer. Zhou, T.; Canu, S.; Vera, P.; and Ruan, S. 2021. Latent correlation representation learning for brain tumor segmentation with missing MRI modalities. IEEE Transactions on Image Processing, 30: 4263–4274. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7422
FaceRSA: RSA-Aware Facial Identity Cryptography Framework

Zhongyi Zhang, Tianyi Wei*, Wenbo Zhou*, Hanqing Zhao, Weiming Zhang, Nenghai Yu
University of Science and Technology of China
{ericzhang@mail., bestwty@mail., welbeckz@, zhq2015@mail., zhangwm@, ynh@}ustc.edu.cn

Abstract
With the flourishing of the Internet, sharing one's photos or automatically processing faces using computer vision technology has become an everyday occurrence. While enjoying the convenience, concern for identity privacy is also emerging. Therefore, some efforts have introduced the concept of a "password" from traditional cryptography, such as RSA, into the face anonymization and deanonymization task to protect facial identity without compromising the usability of the face image. However, these methods either suffer from poor visual quality of the synthesized results or do not possess the full cryptographic properties, resulting in compromised security. In this paper, we present the first facial identity cryptography framework with full properties analogous to RSA. Our framework leverages the powerful generative capabilities of StyleGAN to achieve megapixel-level facial identity anonymization and deanonymization. Thanks to the strong semantic decoupling of StyleGAN's latent space, identity encryption and decryption are performed in latent space by a well-designed password mapper that edits the latent code. Meanwhile, the password-related information is imperceptibly hidden in the edited latent code owing to the redundant nature of the latent space. To make our cryptographic framework possess all the properties analogous to RSA, we propose three types of loss functions: single anonymization loss, sequential anonymization loss, and associated anonymization loss.
Extensive experiments and ablation analyses demonstrate the superiority of our method in terms of the quality of the synthesized results, preservation of identity-irrelevant attributes, deanonymization accuracy, and completeness of properties analogous to RSA.

Introduction
In today's world, privacy has become a crucial concern, especially with regard to facial identity information. However, many computer vision tasks require uploading photos or videos, which may compromise user privacy. For instance, family cameras are used to monitor the behavior of infants and young children, but an attacker should not gain access to facial identity information. At the same time, trusted users, such as family members, may require access to the original images. This presents a challenge, since we need to design a special cryptographic system without affecting the use of facial images for other computer vision tasks.

Some traditional anonymization methods simply apply pixel-level operations such as downsampling, blurring, and masking. These methods greatly impair the quality and usability of the image, or can be easily reverted with advances in deep learning techniques. Recently, some approaches (Maximov, Elezi, and Leal-Taixé 2020; Hukkelås, Mester, and Lindseth 2019) have utilized generative networks to produce photo-realistic anonymized images, but these methods pay no attention to recovering the original images. Inspired by RSA (Rivest, Shamir, and Adleman 1978), a popular public-private key encryption method that offers provable security and computational privacy protection through mathematical complexity, Gu et al. (Gu et al. 2020) introduced the concept of a password and a reverse password into face anonymization and deanonymization.

*Corresponding authors: Wenbo Zhou, Tianyi Wei. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
However, their approach has several limitations, such as low-quality synthesis results and incomplete RSA properties. RiDDLE (Li et al. 2023) also proposes a face anonymization framework. However, their framework relies on the same password for both anonymization and deanonymization, thus posing a risk of password leakage. Additionally, their method struggles to preserve identity-irrelevant information in the original image. To address the aforementioned limitations, we introduce a novel method, FaceRSA, which is the first facial identity cryptography framework with full properties analogous to RSA. It is important to note that our framework only provides RSA-aware properties in the domain of human faces and does not offer any security guarantees based on mathematical complexity like traditional cryptographic systems. The security of our system is based on ambiguity, i.e., an attacker cannot tell whether an image has been anonymized, rather than on computational complexity as in traditional cryptography. We also need to clarify that the passwords we use for face anonymization and deanonymization are not RSA keys. Different from directly using a conditional GAN (Chen et al. 2016) for anonymization at the image level, we use the powerful representation ability of the pre-trained StyleGAN (Karras, Laine, and Aila 2019) by inverting the real image into the StyleGAN latent space through GAN inversion methods to realize face anonymization and deanonymization. This design is based on the powerful representational ability of StyleGAN and the redundant nature of its latent space. We further refine the scheme of using a password to control the anonymization and deanonymization process to meet the properties of RSA, specifically:

1) Locating the Identity-relevant Layers. The different latent space layers of StyleGAN correspond to different levels of semantics, from coarse to fine.
We locate the latent layers that are most relevant to identity by examining the corresponding semantics of different layers, thus ensuring that identity-irrelevant attributes are preserved through anonymization and deanonymization.

2) Password Converter. To better control the anonymization process, we use a password converter to convert the discrete password into a 512-dimensional password vector, aligning its dimension with the StyleGAN latent space.

3) Modulation Model. We use a modulation model so that the converted password vector can explicitly control the change of the latent code, which enhances the controllability of our framework.

To ensure that our framework possesses all the properties analogous to RSA, we design three types of loss functions: 1) the single anonymization loss controls the anonymization process under a single pair of encryption and decryption passwords; 2) the sequential anonymization loss implements extended anonymization and deanonymization requirements when multiple pairs of encryption and decryption passwords are utilized; 3) the associated anonymization loss ensures the existence of an equivalent password for both the encryption and decryption processes.

Our framework has been evaluated through qualitative and quantitative experiments, demonstrating its superiority in terms of the quality of the synthesized images, preservation of identity-irrelevant information, deanonymization accuracy, and properties analogous to RSA. We also conduct extensive ablation experiments in the supplementary material to verify the effectiveness of our framework and loss function design. Overall, the contributions of our method consist of the following:

• Our proposed framework is the first facial identity cryptography framework with full properties analogous to RSA, supporting megapixel-level facial identity anonymization and deanonymization.
• We choose to build this facial identity cryptography system in StyleGAN's latent space by proposing a mechanism to locate identity-related layers, designing the password mapper, and customizing three types of training losses.

• Extensive experiments and ablation studies are conducted to show the superiority of our method and the necessity of each new design.

Related Works

Face Anonymization
Simple face anonymization methods that apply blurring, noise, masking, etc. to the face region can greatly destroy the usability of the image, so some work is devoted to anonymizing a face image without compromising image quality. DeepPrivacy (Hukkelås, Mester, and Lindseth 2019) proposed an inpainting-based method for face de-identification; Li et al. (Li et al. 2021) found identity-aware face regions to remove the original identity while keeping other attributes; IdentityDP (Wen et al. 2022) introduced the concept of differential privacy into de-identification to achieve measurable anonymization. However, these studies only focused on removing the identity of the original image and did not take into account the possibility of recovering it. Recent work has addressed this deficiency and proposed methods that can recover the original image. For instance, FIT (Gu et al. 2020) introduced a discrete password-based method that controls the generated anonymized image and ensures that an attacker using a wrong password can only obtain a wrong but photo-realistic image. Cao et al. (Cao et al. 2021) separated identity and attribute information, and realized anonymization and deanonymization through controllable rotation of the identity vector. Concurrent with our work, RiDDLE (Li et al. 2023) used a transformer structure to anonymize the image with a randomly sampled latent code. Although these works consider recovering the original image, they may fail in complex anonymization scenarios, e.g.
deanonymizing an image that has been anonymized multiple times, which could limit their practical application and make the anonymized image distinguishable, thus compromising its security. In contrast, our proposed FaceRSA builds a cryptosystem with full properties analogous to RSA, thereby overcoming these limitations. For a comparison of our work with prior work, see Table 1.

Image Manipulation Based on StyleGAN
StyleGAN (Karras, Laine, and Aila 2019; Karras et al. 2020) is a powerful generative network that can generate high-resolution images on various data domains. Surprisingly, its latent space exhibits promising disentanglement properties (Collins et al. 2020; Jahanian, Chai, and Isola 2019; Abdal et al. 2021). As a result, many works (Patashnik et al. 2021; Wei et al. 2022a; Jiang et al. 2021; Sun et al. 2022; Wei et al. 2023) have utilized StyleGAN for various image manipulation tasks. StyleCLIP (Patashnik et al. 2021) realized text-guided image manipulation with the help of CLIP's powerful image-text representation capability (Radford et al. 2021). HairCLIP (Wei et al. 2022a) introduced a modulation module to achieve direct control of the hair condition input over the latent code. In this paper, we implement password-based identity manipulation with the powerful generative ability of StyleGAN and the strong semantic decoupling of its latent space, which shares the same philosophy as the previously mentioned methods.

Preliminaries
RSA is a widely used encryption and decryption algorithm with many excellent properties. As our approach is an RSA-aware system, the following properties should be satisfied.

| Functionality | DeepPrivacy | IdentityDP | Li et al. | FIT | Cao et al. | RiDDLE | Ours |
|---|---|---|---|---|---|---|---|
| Face Anonymization | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Face Deanonymization | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ |
| Sequential Anonymization and Deanonymization | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Password Interchangeability | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Password Associativity | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |

Table 1: Comparisons between our approach and mainstream face anonymization methods in terms of functionality. Only our method supports all kinds of scenarios.

Here $I_*$ denotes applying the cryptography algorithm to image $I$; when multiple passwords appear in $*$, the algorithms are applied with them in order, i.e., $I_{e,d}$ means first applying the anonymization algorithm with password $e$ on $I$ and then the deanonymization algorithm with password $d$ on $I_e$. $(e_i, d_i)$ represents the $i$-th pair of encryption and decryption passwords.

(a) Photo-realism. The cryptography algorithm should still generate a realistic-looking face, so that an attacker cannot distinguish whether a face is anonymized, and anonymization does not affect the usage of the image for computer vision tasks. Let $\Phi$ be the manifold of human faces; this property can be formalized as: $\forall I \in \Phi, \forall e: I_e \in \Phi$.

(b) Anonymization with a password. Let $f$ be the function that maps a facial image to its identity; anonymization via encryption password $e$ can be formalized as: $f(I_e) \neq f(I)$.

(c) Deanonymization with the correct password. The original identity of $I$ can be recovered with the correct decryption password $d$: $f(I_{e,d}) = f(I)$.

(d) Wrong deanonymization with a wrong password. When a wrong password $d'$ is used to deanonymize the image, the system generates a new identity that differs from both the original image and the anonymized image: $f(I_{e,d'}) \neq f(I_e)$ and $f(I_{e,d'}) \neq f(I)$, where $d' \neq d$ and $I_{e,d'} \in \Phi$.

(e) Diversity. Different identities should be generated when different encryption passwords are applied to a single image: $f(I_{e_1}) \neq f(I_{e_2})$ where $e_1 \neq e_2$.

(f) Cyclic anonymization and deanonymization with paired passwords.
When the anonymization operation is applied to the same image multiple times, the identity of the original image is obtained by deanonymizing in the reverse order of anonymization. At the same time, the identities of each pair of intermediate images should also remain the same. For properties (f), (g), and (h), we use two pairs of encryption and decryption passwords to illustrate the corresponding effect: $f(I_{e_1,e_2,d_2}) = f(I_{e_1})$ and $f(I_{e_1,e_2,d_2,d_1}) = f(I)$.

(g) Password interchangeability in deanonymization. When deanonymizing an image that has been anonymized multiple times, the identity of the original image is recovered regardless of the deanonymization order. Meanwhile, paired anonymization and deanonymization operations cancel out as if they had never been performed: $f(I_{e_1,e_2,d_1}) = f(I_{e_2})$ and $f(I_{e_1,e_2,d_1,d_2}) = f(I)$.

(h) Password associativity. In multi-step anonymization and deanonymization, operations performed with multiple passwords can be equivalent to one operation performed with an associated password, where '+' denotes the summation of the passwords: $f(I_{e_1,e_2}) = f(I_{e_1+e_2})$.

Method Overview
Our purpose is to design an RSA-aware, password-based cryptography system without compromising image quality. As mentioned in E2Style (Wei et al. 2022b), the negligible effect of rounding the floating-point latent code suggests that the StyleGAN latent space contains a high degree of redundancy. Based on this property, instead of using a conditional GAN at the image level, we propose using the StyleGAN latent space to embed password information, realizing image anonymization and deanonymization through latent code manipulation. Next, we design several modules and mechanisms to better control the editing of the latent code. We first locate the identity-relevant layers in StyleGAN to minimize the impact on non-identity attributes. Then we use a password converter to convert the password into a vector, which is used in a modulation model to modulate the latent code.
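To build intuition for why properties (c) and (f)-(h) can hold simultaneously, consider a deliberately simplified toy model: anonymization adds a password-dependent offset to a latent code, and deanonymization subtracts it. This is not the paper's learned password mapper (the linear offset and all names below are assumptions for illustration only); it merely shows that the target properties follow from commutative vector arithmetic.

```python
import numpy as np

DIM = 512
_rng = np.random.default_rng(0)
BASIS = _rng.standard_normal(DIM)  # fixed random direction (toy assumption)

def password_offset(e: int) -> np.ndarray:
    """Toy password-to-offset map, linear in the password so that
    offset(e1) + offset(e2) == offset(e1 + e2), mirroring property (h)."""
    return e * BASIS

def anonymize(w: np.ndarray, e: int) -> np.ndarray:
    """Anonymization as latent-code editing: w -> w + delta_w(e)."""
    return w + password_offset(e)

def deanonymize(w: np.ndarray, d: int) -> np.ndarray:
    """Deanonymization subtracts the offset of the decryption password."""
    return w - password_offset(d)
```

Because the offsets commute under addition, the deanonymization order does not matter (property (g)) and two encryptions equal one encryption with the summed password (property (h)). The actual framework instead trains a password mapper with the sequential and associated losses so that the same behavior emerges in StyleGAN's W+ space.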
To enhance the security of the anonymized image and align with the full properties of RSA, it is essential to impose constraints not only on a single pair but also on scenarios involving multiple pairs of encryption and decryption passwords, so we introduce three types of loss functions: single anonymization loss, sequential anonymization loss, and associated anonymization loss.

Before introducing the specific framework and loss functions, we briefly introduce the latent space of StyleGAN.

[Figure 1: (a) The overall pipeline of our method; models with gray color are frozen. (b) Properties demonstration of our framework in the case of two pairs of encryption and decryption passwords. Properties (a)-(c) can be directly observed from the figure, and we distinguish properties (d)-(h) with different colored lines and boxes. Our model satisfies the RSA properties mentioned in the Preliminaries section. For the sake of brevity, we only show the complete input and output with our FaceRSA framework twice, and the rest of the FaceRSA framework is omitted.]

The image synthesis process of StyleGAN involves its multiple latent spaces. It first randomly samples a vector z ∈
2020; Goetschalckx et al. 2019; Jahanian, Chai, and Isola 2019) have extended the W space to W+ space, which consists of different w vectors corresponding to different layers of the StyleGAN structure. These different layers of W+ Space controls various semantic information from coarse to fine. In the case of the 18layer StyleGAN network, the vector in the W+ space can be expressed as: w = [w1, w2, · · · , w18]. FaceRSA Thanks to the powerful synthesis and semantic decoupling capabilities of StyleGAN, we choose to accomplish the identity transformations in the StyleGAN latent space. Specifically, to anonymize a real image, we first obtain its latent code w in the W+ space using e4e (Tov et al. 2021), which is an encoder-based GAN inversion method with better editing capability. Then, we utilize the well-designed mapping network to predict the bias ∆w in latent space based on the user-given password, a N-bit binary code. The modified latent code, w′ = w + ∆w, is subsequently fed back into the pre-trained StyleGAN to obtain the output result. The overall pipeline is illustrated in Figure 1. Locating the Identity-relevant Layers. As observed by many researches (Xia et al. 2021; Yang, Shen, and Zhou 2021; Patashnik et al. 2021), various layers in the latent space of StyleGAN correspond to different semantic features. In order to preserve identity-irrelevant attributes of an image, such as pose and expression, we localized the identity-relevant layers by modifying the original latent code step by step. Finally, we opted to change only specific layers - detailly, layers 6-9 in the W+ latent space - while keeping all other layers unchanged. Specific implementation details and ablation experiments for this setting are provided in the supplementary material. By adopting this approach, we can change identity information while minimizing the effect on other image attributes. Password Converter. 
To better utilize the password for controlling the mapping of latent codes, we convert the N-bit discrete password into a 512-dimensional real-valued password vector using a password converter, a simple 2-layer MLP. This aligns the password with the dimensions of the StyleGAN latent space and facilitates the use of the modulation module in subsequent steps.

Modulation Model. To implement identity transformations that are controlled through passwords, we propose the usage of a modulation module. This module enables the password vector to explicitly control the changes of the latent code. We borrow the structure of the modulation model used in HairCLIP (Wei et al. 2022a), which has the following form:

$$x' = \gamma_p \left( \frac{x - \mu_x}{\sigma_x} \right) + \beta_p$$

where $\mu_x$ and $\sigma_x$ denote the mean and standard deviation of $x$, and $\gamma_p$ and $\beta_p$ are calculated from the password vector $p$ with simple fully connected networks.

Loss Functions
Our objective is to enable adaptation to changes in image identity information across various scenarios while preserving other, identity-irrelevant information. All the original images mentioned below refer to the inverted images. Our total loss function for the different cryptography scenarios comprises three parts: the single anonymization loss, the sequential anonymization loss, and the associated anonymization loss.

Single Anonymization Loss $\mathcal{L}_{single}$. This loss constrains the behavior of the cryptography system when only a single encryption and decryption operation is used. We consider the simplest case of this scenario: two different encryption passwords $e_1$ and $e_2$, one correct decryption password $d_1$, and one wrong decryption password $d'_1$.
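The password path described above (converter, modulation, and layer-restricted editing) can be sketched as follows. This is an illustrative, randomly initialized stand-in, not the released implementation; all weight names and shapes below are assumptions:

```python
import numpy as np

# Sketch: an N-bit password -> 2-layer MLP -> 512-d password vector p;
# two linear heads map p to (gamma_p, beta_p), which modulate only the
# identity-relevant layers (6-9) of an 18x512 W+ latent code.

N_BITS, DIM = 16, 512
ID_LAYERS = slice(5, 9)  # layers 6-9, zero-indexed

rng = np.random.default_rng(0)
W1 = rng.standard_normal((N_BITS, DIM)) * 0.1  # converter layer 1 (untrained)
W2 = rng.standard_normal((DIM, DIM)) * 0.1     # converter layer 2 (untrained)
Wg = rng.standard_normal((DIM, DIM)) * 0.1     # head producing gamma_p
Wb = rng.standard_normal((DIM, DIM)) * 0.1     # head producing beta_p

def password_converter(bits: np.ndarray) -> np.ndarray:
    """2-layer MLP: N-bit binary password -> 512-d password vector."""
    h = np.maximum(bits @ W1, 0.0)  # ReLU
    return h @ W2

def modulate(x: np.ndarray, p: np.ndarray) -> np.ndarray:
    """x' = gamma_p * (x - mu_x) / sigma_x + beta_p."""
    gamma, beta = p @ Wg, p @ Wb
    return gamma * (x - x.mean()) / x.std() + beta

def edit_latent(w: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Edit only the identity-relevant layers of the W+ code."""
    p = password_converter(bits)
    w_edit = w.copy()
    w_edit[ID_LAYERS] = modulate(w[ID_LAYERS], p)
    return w_edit
```

Restricting the edit to a slice of the W+ code is what keeps identity-irrelevant layers (and hence attributes such as pose and expression) untouched by construction.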
According to RSA properties (a)-(e) in the Preliminaries section, to realize the anonymization and deanonymization function we introduce a facial identity difference loss on the image pairs $(I, I_{e_1})$, $(I, I_{e_2})$, $(I, I_{e_1,d'_1})$, and $(I_{e_1}, I_{e_2})$ with the formulation $\mathcal{L}_{change} = \cos(F(I_1), F(I_2))$, where $\cos(\cdot)$ denotes cosine similarity, $F$ is a pre-trained ArcFace (Deng et al. 2019) network that extracts facial identity embeddings, and $I_1$ and $I_2$ represent the two images of a pair. Also, to ensure the original image is recovered correctly, a pixel L2 loss $\mathcal{L}_{pix} = \|I - I_{e_1,d_1}\|_2$ and a cosine identity similarity loss $\mathcal{L}_{recon} = 1 - \cos(F(I), F(I_{e_1,d_1}))$ are applied to the image pair $(I, I_{e_1,d_1})$. During training, the generated latent codes must be constrained to remain on the well-defined manifold of StyleGAN to prevent artifacts. Therefore, for each generated latent code $w^*$, we introduce a regularization loss $\mathcal{L}_{reg} = \|w^* - w\|_2$, where $w$ denotes the inverted latent code of the original image $I$. Additionally, to minimize the impact on downstream tasks and prevent changes in attributes such as expression while changing the identity, we also use the face parsing loss $\mathcal{L}_{parsing}$ from E2Style (Wei et al. 2022b) and a facial landmark loss $\mathcal{L}_{lmk}$ for all generated images $I_*$. An LPIPS loss (Zhang et al. 2018) is also applied to all generated images $I_*$ to preserve image quality and improve feature-level similarity.

Sequential Anonymization Loss $\mathcal{L}_{seq}$. For sequential anonymization and deanonymization, we define an image sequence $\{I_k\}_{k=0}^{2m}$ with $m$ pairs of encryption and decryption passwords. Image $I_0$ denotes the original image, and each image $I_n$ in the sequence is generated by applying the cryptography algorithm to image $I_{n-1}$ with a single password from the sequence $e_1, \cdots, e_m, d_m, \cdots, d_1$, in order. It is crucial to ensure that identity information is maintained between the corresponding intermediate image pairs $\{(I_k, I_{2m-k})\}_{k=0}^{m-1}$.
We also add loss functions to ensure the image quality and identity diversity of each anonymized image $\{I_k\}_{k=1}^{m}$.

Associated Anonymization Loss $\mathcal{L}_{asso}$. In this context, we require that multiple anonymization operations can be regarded as a single anonymization operation with an equivalent password. We randomly select images $I_{i-1}$ and $I_j$ in the sequence $\{I_k\}_{k=0}^{m}$, where $j > i$, and express the equivalent password as the sum of the intermediate consecutive encryption passwords, that is, $\tilde{e} = \sum_{k=i}^{j} e_k$. An associated identity loss $\mathcal{L}_{asso\text{-}id}$ and a pixel L2 loss $\mathcal{L}_{asso\text{-}pix}$ are computed between the image $(I_{i-1})_{\tilde{e}}$ and $I_j$ to satisfy the associated anonymization property. Note that although this loss is imposed only on the anonymization process, the experiment section shows that our framework generalizes the associated password to the deanonymization process. The detailed designs and hyperparameter settings of $\mathcal{L}_{single}$, $\mathcal{L}_{seq}$, and $\mathcal{L}_{asso}$ are presented in the supplementary material. Finally, the total loss of our training process is $\mathcal{L}_{total} = \mathcal{L}_{single} + \mathcal{L}_{seq} + \mathcal{L}_{asso}$.

[Figure 2: Qualitative results for a single encryption and decryption password pair of our framework. $I$ refers to the original image, $(e_1, d_1)$ is a pair of encryption and decryption passwords with $d'_1 \neq d_1$ and $e_1 \neq e_2$; each column shares the same password. Our framework satisfies the basic anonymization and deanonymization requirements.]

[Figure 3: Qualitative comparison of anonymization ability (Original, CIAGAN, DeepPrivacy, FIT, RiDDLE, Ours). Our method shows the best quality and preserves the most identity-irrelevant attributes, fully complying with the requirements of anonymization.]

Experiment
Implementation details of our approach are provided in the supplementary material. For all compared methods, we use the official pre-trained models. We first show in Figure 2 the effect of our entire system when only one pair of encryption and decryption passwords is used.
As we can see, using different encryption passwords on a single image results in different anonymized images, and using the same encryption password on different images also generates different anonymized images. Additionally, the original image can be correctly recovered with the correct decryption password, while using an incorrect decryption password produces a new anonymized image that differs from the original identity. Figure 2 thus shows that our framework satisfies properties (a)-(e) from the Preliminaries section, which relate to the scenario of a single pair of encryption and decryption passwords.

We compare our method with two password-aware anonymization methods, FIT (Gu et al. 2020) and RiDDLE (Li et al. 2023), one identity-vector-controlled conditional GAN based method, CIAGAN (Maximov, Elezi, and Leal-Taixé 2020), and one inpainting-based method, DeepPrivacy (Hukkelås, Mester, and Lindseth 2019). Regarding implementation, RiDDLE and our method are based on GAN inversion, while the others are based on conditional GANs. In our experiments, we ignore the minor distortion caused by GAN inversion, and the test images are the inverted ones.

| Method | ID↓ | Detect↑ | Lmk↓ | Pose↑ | Exp↑ |
|---|---|---|---|---|---|
| CIAGAN | 0.150 | 0.950 | 1.208 | 0.930 | 0.283 |
| DeepPrivacy | 0.118 | 0.998 | 0.023 | 0.966 | 0.289 |
| FIT | 0.296 | 0.992 | 0.011 | 0.971 | 0.341 |
| RiDDLE | 0.115 | 0.999 | 0.013 | 0.956 | 0.314 |
| Ours | 0.247 | 1.000 | 0.009 | 0.996 | 0.546 |

Table 2: Quantitative comparison of face anonymization ability. Our method outperforms the others on most metrics except identity similarity, since our method preserves more identity-irrelevant attributes, which undesirably increases the identity similarity measured by ArcFace.

[Figure 4: Qualitative comparison of recovery ability (Original, FIT, RiDDLE, Ours). Our method shows the best recovery quality compared with the existing methods, and thus has better application potential.]
We separately compare anonymization ability with each method, and deanonymization ability with FIT and RiDDLE, since only FIT and RiDDLE enable recovery of the original image. Here we use ei to represent the i-th encryption password and di to represent the i-th decryption password, although the relationships between the encryption and decryption passwords used by each method actually differ.

Anonymization Ability. Qualitative results of anonymization are shown in Figure 3. We observe that CIAGAN suffers from severe image distortion, which affects the usability of the image. FIT always generates an anonymized image with some white dots and changes the lighting of the overall image. DeepPrivacy successfully generates photorealistic images but sometimes modifies the expression of the original image. RiDDLE also utilizes StyleGAN2 as a generator but retains less identity-irrelevant information, such as pose and hairstyle, since it uses all of the latent code to transform identity. Our method, in contrast, retains more identity-irrelevant information thanks to changing only the specific layers related to identity.

        LPIPS↓  MSE↓    PSNR↑  SSIM↑   ID↑
FIT     0.1780  0.0064  61.21  0.9928  0.6767
RiDDLE  0.0324  0.0124  67.61  0.9987  0.8315
Ours    0.0099  0.0019  75.51  0.9996  0.9206

Table 3: Face recovery ability. Our method outperforms the others on all metrics, demonstrating the best recovery of the original image.

Quantitative evaluation is conducted on several aspects of the anonymized images: 1) identity similarity with the original image, using a pre-trained ArcFace (Deng et al. 2019) network; 2) face detection rate, using dlib (Kazemi and Sullivan 2014), to ensure that the results are still faces; and 3) performance on different computer vision tasks, such as landmark and pose detection.
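The identity, pose, and expression metrics above are cosine similarities between embedding vectors (e.g., ArcFace embeddings for identity). A minimal sketch, with embeddings represented as plain lists:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors:
    dot(u, v) / (||u|| * ||v||), in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Identical embeddings give similarity 1.0; orthogonal ones give 0.0.
assert abs(cosine_similarity([1.0, 0.0], [1.0, 0.0]) - 1.0) < 1e-9
assert abs(cosine_similarity([1.0, 0.0], [0.0, 1.0])) < 1e-9
```

For the ID↓ column, a lower similarity between the anonymized and original embeddings means stronger anonymization; for Pose↑ and Exp↑, a higher similarity means better attribute preservation.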
We evaluate the normalized L2 landmark distance using face-alignment (Bulat and Tzimiropoulos 2017), cosine pose similarity using 6DRepNet (Hempel, Abdelrahman, and Al-Hamadi 2022), and cosine expression similarity using DECA (Feng et al. 2021). Quantitative results are shown in Table 2; our method performs the best on most of the metrics except identity similarity. Although our method does not reach the lowest identity similarity, we can observe from Figure 3 that the anonymization effect is sufficiently strong. Meanwhile, our method retains more identity-irrelevant attributes of the original image, while ArcFace uses some identity-irrelevant information in its identity embedding, which undesirably increases the measured identity similarity.

Recovery Ability. Qualitative results for face recovery are shown in Figure 4; our method achieves the most faithful recovery of the original image, while FIT exhibits artifacts and RiDDLE produces inconsistent expressions. Moreover, we use the following metrics to measure the performance of image recovery: LPIPS, MSE, PSNR, SSIM, and ID similarity; quantitative results are listed in Table 3. Our method shows the best performance on all metrics, achieving the best recovery of the original image.

To the best of our knowledge, our work is the first to consider an anonymization algorithm involving multiple passwords. We separately demonstrate the performance of our framework in multiple scenarios below.

Sequential Anonymization and Deanonymization. Figure 5 demonstrates the results of sequential anonymization and deanonymization compared with the recovery-aware methods FIT and RiDDLE. Although these methods take the recovery of images into account, when multiple anonymization operations are applied they cannot sequentially recover the corresponding images in the anonymization process.
This brings a security risk: an attacker could easily tell whether an image has been anonymized by applying the anonymization and deanonymization method to it, and could further mount a denial-of-service (DoS) attack on an anonymized image. FIT fails to accurately recover the corresponding encrypted image and the original image, while RiDDLE demonstrates some recovery capability but shows inconsistencies in facial details, such as the opening and closing of the lips. Our approach, however, can precisely recover both the original image and the encrypted image, which satisfies property (f) of the Preliminaries Section.

Figure 5: Sequential anonymization and deanonymization comparison. Our method correctly recovers the corresponding images, while the others either fail or exhibit artifacts.

Password Interchangeability. We then investigate the scenario where the decryption passwords are not used in the same order as the encryption passwords during the deanonymization process. As in RSA, when an encryption and decryption password pair is used, the encryption effect is eliminated. We present the results of using two pairs of encryption and decryption passwords in Figure 6; the left two columns show that only our method can recover the original image even when the decryption passwords are not used in the original order. The right two columns show that although (e1, d1) are not applied consecutively, the effect of the anonymization is still eliminated, which satisfies property (g).

Password Associativity. We show qualitative results for associated passwords in Figure 7. The left two columns demonstrate that only our method achieves almost the same result via an equivalent password as sequentially applying the anonymization algorithm to an image with two encryption passwords.
Although this case is not covered during the training phase, the right two columns show that an equivalent password also exists during the decryption process, which satisfies property (h). In addition, detailed ablation experiments verifying the effectiveness of our framework and loss function design are provided in the supplementary material.

Figure 6: Commutative decryption results. The left two columns show the effect of deanonymization in a different order; the right two columns show the elimination effect of a pairwise-used encryption and decryption password.

Figure 7: Results of encryption and decryption with associated passwords. The left two columns show the effect of an equivalent password during anonymization, and the right two columns show the effect during deanonymization.

Conclusion

Our paper introduces FaceRSA, an RSA-aware facial identity cryptography framework. The framework is built upon the latent space of StyleGAN, whose good editability and redundancy fit our task well. With our well-designed password mapper, the generation of facial identity is controlled by user-given discrete passwords, which makes our framework a cryptosystem for anonymization and deanonymization. The mechanism for locating identity-related layers helps us minimize the impact on unrelated attributes while editing the identity. In addition, the three customized types of losses endow our cryptography framework with all the properties analogous to RSA. Extensive qualitative and quantitative comparisons demonstrate that our framework outperforms existing methods in terms of the quality of the synthesized images, preservation of identity-irrelevant information, deanonymization accuracy, and properties analogous to RSA.
In the future, we consider introducing other cryptographic concepts into image anonymization and deanonymization.

Acknowledgments

This work was supported in part by the Natural Science Foundation of China under Grants 62372423, U2336206, 62121002, U20B2047, 62072421, and 62002334, and by the Key Research and Development Program of Anhui Province under Grant 2022k07020008.

References

Abdal, R.; Zhu, P.; Mitra, N. J.; and Wonka, P. 2021. StyleFlow: Attribute-conditioned exploration of StyleGAN-generated images using conditional continuous normalizing flows. ACM Transactions on Graphics (ToG), 40(3): 1–21.

Bulat, A.; and Tzimiropoulos, G. 2017. How far are we from solving the 2D & 3D face alignment problem? (and a dataset of 230,000 3D facial landmarks). In International Conference on Computer Vision.

Cao, J.; Liu, B.; Wen, Y.; Xie, R.; and Song, L. 2021. Personalized and invertible face de-identification by disentangled identity information manipulation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3334–3342.

Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; and Abbeel, P. 2016. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. Advances in Neural Information Processing Systems, 29.

Collins, E.; Bala, R.; Price, B.; and Susstrunk, S. 2020. Editing in style: Uncovering the local semantics of GANs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5771–5780.

Deng, J.; Guo, J.; Xue, N.; and Zafeiriou, S. 2019. ArcFace: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4690–4699.

Feng, Y.; Feng, H.; Black, M. J.; and Bolkart, T. 2021. Learning an animatable detailed 3D face model from in-the-wild images. ACM Transactions on Graphics, 40.

Goetschalckx, L.; Andonian, A.; Oliva, A.; and Isola, P. 2019.
GANalyze: Toward visual definitions of cognitive image properties. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5744–5753.

Gu, X.; Luo, W.; Ryoo, M. S.; and Lee, Y. J. 2020. Password-conditioned anonymization and deanonymization with face identity transformers. In European Conference on Computer Vision.

Hempel, T.; Abdelrahman, A. A.; and Al-Hamadi, A. 2022. 6D rotation representation for unconstrained head pose estimation. In 2022 IEEE International Conference on Image Processing (ICIP), 2496–2500. IEEE.

Hukkelås, H.; Mester, R.; and Lindseth, F. 2019. DeepPrivacy: A generative adversarial network for face anonymization. In Advances in Visual Computing: 14th International Symposium on Visual Computing, ISVC 2019, Lake Tahoe, NV, USA, October 7–9, 2019, Proceedings, Part I, 565–578. Springer.

Jahanian, A.; Chai, L.; and Isola, P. 2019. On the "steerability" of generative adversarial networks. arXiv preprint arXiv:1907.07171.

Jiang, Y.; Huang, Z.; Pan, X.; Loy, C. C.; and Liu, Z. 2021. Talk-to-edit: Fine-grained facial editing via dialog. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 13799–13808.

Karras, T.; Laine, S.; and Aila, T. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4401–4410.

Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; and Aila, T. 2020. Analyzing and improving the image quality of StyleGAN. In Proc. CVPR.

Kazemi, V.; and Sullivan, J. 2014. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1867–1874.

Li, D.; Wang, W.; Zhao, K.; Dong, J.; and Tan, T. 2023. RiDDLE: Reversible and diversified de-identification with latent encryptor. arXiv preprint arXiv:2303.05171.

Li, J.; Han, L.; Chen, R.; Zhang, H.; Han, B.; Wang, L.; and Cao, X. 2021.
Identity-preserving face anonymization via adaptively facial attributes obfuscation. In Proceedings of the 29th ACM International Conference on Multimedia, 3891–3899.

Maximov, M.; Elezi, I.; and Leal-Taixé, L. 2020. CIAGAN: Conditional identity anonymization generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5447–5456.

Patashnik, O.; Wu, Z.; Shechtman, E.; Cohen-Or, D.; and Lischinski, D. 2021. StyleCLIP: Text-driven manipulation of StyleGAN imagery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2085–2094.

Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR.

Rivest, R. L.; Shamir, A.; and Adleman, L. 1978. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2): 120–126.

Sun, J.; Deng, Q.; Li, Q.; Sun, M.; Ren, M.; and Sun, Z. 2022. AnyFace: Free-style text-to-face synthesis and manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18687–18696.

Tov, O.; Alaluf, Y.; Nitzan, Y.; Patashnik, O.; and Cohen-Or, D. 2021. Designing an encoder for StyleGAN image manipulation. arXiv preprint arXiv:2102.02766.

Wei, T.; Chen, D.; Zhou, W.; Liao, J.; Tan, Z.; Yuan, L.; Zhang, W.; and Yu, N. 2022a. HairCLIP: Design your hair by text and reference image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Wei, T.; Chen, D.; Zhou, W.; Liao, J.; Zhang, W.; Hua, G.; and Yu, N. 2023. HairCLIPv2: Unifying hair editing via proxy feature blending. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 23589–23599.
Wei, T.; Chen, D.; Zhou, W.; Liao, J.; Zhang, W.; Yuan, L.; Hua, G.; and Yu, N. 2022b. E2Style: Improve the efficiency and effectiveness of StyleGAN inversion. IEEE Transactions on Image Processing, 31: 3267–3280.

Wen, Y.; Liu, B.; Ding, M.; Xie, R.; and Song, L. 2022. IdentityDP: Differential private identification protection for face images. Neurocomputing, 501: 197–211.

Xia, W.; Yang, Y.; Xue, J.-H.; and Wu, B. 2021. TediGAN: Text-guided diverse face image generation and manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2256–2265.

Yang, C.; Shen, Y.; and Zhou, B. 2021. Semantic hierarchy emerges in deep generative representations for scene synthesis. International Journal of Computer Vision, 129: 1451–1466.

Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR.
Spatial-Contextual Discrepancy Information Compensation for GAN Inversion

Ziqiang Zhang1, Yan Yan1*, Jing-Hao Xue2, Hanzi Wang1
1Xiamen University, China  2University College London, UK
zhangzq@stu.xmu.edu.cn, yanyan@xmu.edu.cn, jinghao.xue@ucl.ac.uk, hanzi.wang@xmu.edu.cn

Abstract

Most existing GAN inversion methods either achieve accurate reconstruction but lack editability or offer strong editability at the cost of fidelity. Hence, how to balance the distortion-editability trade-off is a significant challenge for GAN inversion. To address this challenge, we introduce a novel spatial-contextual discrepancy information compensation-based GAN-inversion method (SDIC), which consists of a discrepancy information prediction network (DIPN) and a discrepancy information compensation network (DICN). SDIC follows a "compensate-and-edit" paradigm and successfully bridges the gap in image details between the original image and the reconstructed/edited image. On the one hand, DIPN encodes the multi-level spatial-contextual information of the original and initial reconstructed images and then predicts a spatial-contextual guided discrepancy map with two hourglass modules. In this way, a reliable discrepancy map that models the contextual relationship and captures fine-grained image details is learned. On the other hand, DICN incorporates the predicted discrepancy information into both the latent code and the GAN generator with different transformations, generating high-quality reconstructed/edited images. This effectively compensates for the loss of image details during GAN inversion. Both quantitative and qualitative experiments demonstrate that our proposed method achieves an excellent distortion-editability trade-off at a fast inference speed for both image inversion and editing tasks. Our code is available at https://github.com/ZzqLKED/SDIC.

Introduction

Over the past few years, a variety of powerful GAN models, such as PGGAN (Karras et al.
2017) and StyleGAN (Karras, Laine, and Aila 2019; Karras et al. 2020), have been developed to generate high-quality images based on a latent code in the latent space. Based on these models, we can manipulate some attributes of generated images by modifying the latent code. However, such manipulation is only applicable to images produced by the GAN generator, due to the lack of inference capability in GANs (Xia et al. 2022). Recently, GAN inversion methods (Xia et al. 2022) have been proposed to manipulate real images. These methods usually follow an "invert first, edit later" procedure, which first inverts the real image back into a latent code of a pre-trained GAN model; a new image can then be reconstructed or edited from the inverted latent code.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Some GAN inversion methods (Richardson et al. 2021; Tov et al. 2021) invert a real image into the native latent space of StyleGAN (i.e., the W space) and achieve good editability. However, this inevitably leads to the loss of image details during inversion. As a result, the reconstructed images are often less faithful than the original images. Although some methods (Abdal, Qin, and Wonka 2019; Kang, Kim, and Cho 2021) extend the W space to the W+/W* space or perform per-image optimization to enhance the reconstruction fidelity, their editability is greatly affected. This phenomenon is also known as the distortion-editability trade-off, indicating the conflict between image reconstruction fidelity and image editing quality. To alleviate the distortion-editability trade-off, some recent methods (such as HFGI (Wang et al. 2022) and CLCAE (Liu, Song, and Chen 2023)) recover the missing information by enriching the latent code and the feature representations of a particular layer in the GAN generator.
In this way, they can achieve better fidelity than traditional GAN inversion methods. However, HFGI considers the distortion information only at the pixel level, which easily introduces significant artifacts; CLCAE generates images based on contrastive learning, and it may still lose some image details and degrade editability. Therefore, existing GAN inversion methods still suffer from a gap in image details between the original image and the reconstructed/edited image.

To address the above problems, we propose a novel spatial-contextual discrepancy information compensation-based GAN inversion method (SDIC), which consists of a discrepancy information prediction network (DIPN) and a discrepancy information compensation network (DICN). SDIC adopts a "compensate-and-edit" paradigm, which first compensates both the latent code and the GAN generator with the spatial-contextual guided discrepancy map, and then performs image inversion or editing.

Specifically, DIPN, which consists of a two-branch spatial-contextual hourglass module and a discrepancy map learning hourglass module, is designed to encode the multi-level spatial-contextual information of the original and initial reconstructed images, and to predict a spatial-contextual guided discrepancy map. In DIPN, a spatial attention mechanism is leveraged to enable the network to adaptively select important parts for feature fusion. As a result, DIPN can accurately learn a reliable discrepancy map, which effectively captures the contextual relationship and fine-grained image details. Then, DICN is introduced to incorporate the discrepancy information into both the latent code and the GAN generator, generating high-quality reconstructed/edited images.
In summary, our main contributions are as follows:

• We propose a novel GAN inversion method, which successfully exploits the multi-level spatial-contextual information of the original image to compensate for the information missing during inversion. Based on the "compensate-and-edit" paradigm, our method can generate high-quality and natural reconstructed/edited images containing image details without introducing artifacts.

• We design DIPN to accurately predict the spatial-contextual guided discrepancy map with two hourglass modules, and DICN to leverage this map to effectively compensate for the information loss in both the latent code and the GAN generator with different transformations. Therefore, our method can achieve an excellent distortion-editability trade-off.

• We perform qualitative and quantitative experiments to validate the superiority of our method in fidelity and editability against state-of-the-art GAN inversion methods.

The full version of this paper, including the appendix, can be found at https://arxiv.org/abs/2312.07079.

Related Work

GAN Inversion. Existing GAN inversion methods can be divided into three categories: optimization-based, encoder-based, and hybrid methods. Optimization-based methods (Abdal, Qin, and Wonka 2020; Bau et al. 2020; Gu, Shen, and Zhou 2020; Zhu et al. 2020b) directly optimize the latent code or the parameters of the GAN to minimize the reconstruction error for each given image. Although these methods can reconstruct high-fidelity images, they usually suffer from high computational cost and poor editability. Encoder-based methods (Alaluf, Patashnik, and Cohen-Or 2021; Hu et al. 2022; Kang, Kim, and Cho 2021; Tov et al. 2021) train an encoder to learn the mapping from a given image to a latent code and perform operations on the latent code. Compared with optimization-based methods, encoder-based methods show better editability at a faster inference speed, but their reconstruction quality is much lower.
Hybrid methods (Bau et al. 2019; Zhu et al. 2020a) leverage an encoder to learn a latent code and then optimize the obtained latent code. Recently, some methods (Alaluf et al. 2022; Dinh et al. 2022) introduce a hypernetwork to iteratively optimize the parameters of the GAN generator, obtaining better reconstruction results.

Our method belongs to the encoder-based methods. Conventional encoder-based methods (such as pSp (Richardson et al. 2021) and e4e (Tov et al. 2021)) extend the W space to the W+/W* space to improve reconstruction fidelity. However, the low-dimensional latent code still limits the reconstruction performance. Moreover, the editing flexibility of the W+/W* space is reduced. In contrast, we propose to compensate the latent code and the GAN generator by exploiting multi-level spatial-contextual information. In this way, the expressiveness of the latent code and generator is largely improved, alleviating the information loss due to inversion. Meanwhile, we explicitly control the proximity of the latent code to the W space, providing better editability.

Latent Space Editing. The latent spaces of pre-trained StyleGAN generators show good semantic interpretability, which enables diverse and flexible image editing. A number of methods have been developed to identify meaningful semantic directions for the latent code. Supervised methods (Abdal et al. 2021; Goetschalckx et al. 2019; Hutchinson et al. 2019; Shen et al. 2020) require pre-trained attribute classifiers or data with attribute annotations. InterFaceGAN (Shen et al. 2020) employs a support vector machine to identify the hyperplane that separates two binary attributes and takes the normal of the hyperplane as the manipulation direction. Unsupervised methods (Härkönen et al. 2020; Shen and Zhou 2021; Voynov and Babenko 2020; Wang and Ponce 2021) can discover unknown manipulation directions but require manual labeling of the discovered directions. GANspace (Härkönen et al.
2020) discovers multiple manipulation directions using principal component analysis. In this paper, we employ InterFaceGAN and GANspace for latent space editing due to their good editing performance.

Spatial-Contextual Information. A convolutional neural network (CNN) encodes both low-level features at the early stages and high-level features at the later stages. The low-level features are rich in spatial details, while the high-level features capture contextual information, which encodes the visual knowledge from an object/patch and its surrounding backgrounds/patches. Spatial-contextual information plays an important role in many computer vision tasks, such as object detection and semantic segmentation. Some methods (Choi et al. 2010; Li et al. 2016) exploit the contextual information between an object and its surrounding background to improve object detection performance. A few works (Chang and Chen 2018) improve the quality of the detailed parts of the disparity map by exploiting multi-scale spatial-contextual information. Unfortunately, spatial-contextual information is not well exploited in existing GAN inversion methods. In this paper, we introduce the spatial-contextual information (obtained from the original image) into the prediction of the discrepancy map between the original image and the initial reconstructed image. In this way, our method can preserve more appearance details and generate clearer edges, greatly reducing the artifacts of the reconstructed/edited images.

Proposed Method

Overview

Motivation. Conventional GAN inversion methods (Alaluf, Patashnik, and Cohen-Or 2021; Collins et al. 2020; Kang, Kim, and Cho 2021; Pidhorskyi, Adjeroh, and Doretto 2020; Tov et al. 2021) invert a real image into the latent space of a pretrained GAN model and attain good editability. However, they usually suffer from low fidelity of the generated images due to severe information loss during the inversion process.
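The "invert first, edit later" pipeline used by these methods, with an InterFaceGAN-style direction step, can be sketched as follows; the encoder, generator, and attribute direction here are toy linear stand-ins for illustration only, not any actual pre-trained model:

```python
def encoder(image):
    # Toy stand-in: pretend the latent code equals the image values.
    return list(image)

def generator(latent):
    # Toy stand-in for a pre-trained GAN generator.
    return list(latent)

def edit(latent, direction, alpha):
    """InterFaceGAN-style editing: move the latent code along a
    semantic direction n by a step size alpha, i.e. w' = w + alpha * n."""
    return [w + alpha * n for w, n in zip(latent, direction)]

image = [0.2, 0.4, 0.6]
smile_direction = [1.0, 0.0, 0.0]  # hypothetical attribute direction

w = encoder(image)                        # invert first
w_edited = edit(w, smile_direction, 0.5)  # move along the direction
edited_image = generator(w_edited)        # edit later
assert edited_image == [0.7, 0.4, 0.6]
```

The information loss discussed in the Motivation paragraph arises because a real encoder/generator pair is not an exact inverse, unlike the toy identity mappings above.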
Figure 1: The architecture of SDIC, which consists of DIPN and DICN. DIPN contains a two-branch spatial-contextual hourglass module and a discrepancy map learning hourglass module. First, the original image I and the initial reconstructed image Ro (obtained by a pre-trained e4e model) are fed into DIPN to predict the discrepancy map. Then, the discrepancy map is fed into DICN for feature compensation in both the latent code and the GAN generator.

To deal with this, some recent methods (such as HFGI) follow an "edit-and-compensate" paradigm, which computes the distortion map (between the original image and the initial edited image) and then encodes the distortion map to obtain a latent map. In this way, the latent map can be combined with the latent code to compensate for the loss of image details. However, the images generated by these methods easily suffer from artifacts. The problem of artifacts can be ascribed to the fact that the latent map only encodes pixel-level spatial information (note that the distortion map is computed by subtracting the initial edited image from the original image). As a result, the latent map ignores high-level contextual information (i.e., the relationship between individual pixels and their surrounding pixels) and involves some attribute-specific disturbance caused by the adoption of the initial edited image.

Design.
To address the above problems, we exploit both the spatial and the contextual information of the original image. This enables the network to learn the contextual relationships between pixels as well as spatial details, significantly reducing artifacts. To this end, we propose the novel SDIC method. Instead of the "edit-and-compensate" paradigm of previous methods, SDIC adopts a "compensate-and-edit" paradigm, which enables high-quality and flexible attribute editing. Notably, SDIC effectively exploits the spatial-contextual guided discrepancy information between the original image and the initial reconstructed image and leverages this information to compensate both the latent code and the GAN generator. In this way, our method achieves a good distortion-editability trade-off.

The network architecture of SDIC is given in Fig. 1. SDIC consists of a discrepancy information prediction network (DIPN) and a discrepancy information compensation network (DICN). Given the original image and the initial reconstructed image, DIPN (which contains a two-branch spatial-contextual hourglass module and a discrepancy map learning hourglass module) predicts a discrepancy map. In particular, we incorporate the spatial and contextual information of the original image into the different layers of the discrepancy map learning hourglass module. As a result, the discrepancy map not only encodes fine-grained image details but also models the contextual relationship. The discrepancy map is subsequently fed into DICN to compensate for the information loss in both the latent code and the GAN generator, yielding the enhanced latent code and the enhanced latent map. Attribute editing follows a similar process to inversion, except that the enhanced latent code and enhanced latent map are modified by attribute editing operations in DICN.

Discrepancy Information Prediction Network

Two-Branch Spatial-Contextual Hourglass Module.
The two-branch spatial-contextual hourglass module takes both the original image I ∈ R^{3×H×W} and the initial reconstructed image Ro ∈ R^{3×H×W} generated by the pre-trained model (we use the popular e4e model (Tov et al. 2021)) as inputs and extracts their spatial and contextual information, where H and W denote the height and width of the image, respectively. This module consists of two parallel hourglass branches with identical structures. Each branch consists of a convolutional block and a U-Net style upsampling block (Ronneberger, Fischer, and Brox 2015).

Specifically, a given input image (I or Ro) is first fed into a convolutional block consisting of five 3×3 convolutional layers. In the convolutional block, the second, fourth, and fifth layers perform convolution with stride 2, reducing the feature map size to 1/2, 1/4, and 1/8 of the original size, respectively. This gradually expands the receptive fields and captures coarse-to-fine information at different scales. Next, the reduced feature map is fed into a U-Net style upsampling block with skip connections at different scales. Technically, the upsampling block contains three 4×4 convolutional layers with stride 1, each preceded by an upsampling layer that uses nearest-neighbor interpolation to upsample the feature map. By doing so, we repeatedly upsample the feature map until its size reaches 3×H×W; along the way, the feature map sizes are gradually restored to 1/4 and 1/2 of the input image size. Meanwhile, a 3×3 convolutional layer is applied to merge the skip connection and the final upsampled feature. The receptive fields are gradually reduced during upsampling; thus, more fine-grained information can be obtained.
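The stated reductions to 1/2, 1/4, and 1/8 of the input size follow from the standard convolution output-size formula. A small sketch, assuming 3×3 kernels with padding 1 (the paper does not state the padding, so this is an illustrative assumption):

```python
def conv_out(size, kernel=3, stride=1, padding=1):
    """Output spatial size of a convolution:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# The five layers of the convolutional block: layers 2, 4, 5 use stride 2.
h = 256
for stride in (1, 2, 1, 2, 2):
    h = conv_out(h, stride=stride)
print(h)  # prints 32, i.e. 256 -> 128 -> 64 -> 32 = 1/8 of the input size
```

With padding 1, stride-1 layers preserve the spatial size and each stride-2 layer halves it, matching the 1/2, 1/4, 1/8 progression described above.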
Note that during upsampling, the three feature maps C1, C2, and C3, whose sizes correspond to 1/8, 1/4, and 1/2 of the original image size, respectively, serve as multi-resolution feature maps for the subsequent discrepancy map learning hourglass module. These feature maps model multi-level spatial-contextual information at different resolutions. In general, the higher-resolution feature maps capture more spatial details, while the lower-resolution feature maps encode more contextual information. Finally, the module outputs two feature maps of size C×H×W (C = 48), corresponding to the original image and the initial reconstructed image, respectively.

Discrepancy Map Learning Hourglass Module. The two feature maps from the two-branch spatial-contextual hourglass module are concatenated to obtain the feature volume V ∈ R^{2C×H×W}, which is taken as the input of the discrepancy map learning hourglass module. This module consists of a downsampling block and a fusion block. We first apply the downsampling block to the feature volume V to obtain the initial discrepancy information. Then, the fusion block combines the initial discrepancy information with the spatial-contextual information of the original image from the two-branch spatial-contextual hourglass module to predict the discrepancy map.

Specifically, the feature volume is first fed into the downsampling block, which uses three downsampling layers to increase the receptive fields. Each downsampling layer consists of a 3×3×3 3D convolution with stride 2 and a 3×3×3 3D convolution with stride 1. As a result, we obtain the discrepancy features G1 ∈ R^{C×48×H/8×W/8} after downsampling. G1 is then fed into a fusion layer (consisting of a spatial attention fusion module and an upsampling layer) to obtain the spatial-contextual guided discrepancy feature G2 at a larger resolution.

Figure 2: The architecture of the spatial attention fusion module.
Each upsampling layer consists of a 4×4×4 3D transposed convolution with stride 2 and two 3×3×3 3D convolutions with stride 1. Similar operations are repeated to obtain higher resolution features G3 and G4. Instead of adding or concatenating the discrepancy feature and the spatial-contextual feature, we incorporate a spatial attention fusion module (Woo et al. 2018). This module enables us to adaptively select important regions of features for fusion. The architecture of the spatial attention fusion module is given in Fig. 2. Technically, Ci (i=1, 2, 3) is first upsampled by a 3D convolutional layer to obtain bCi which has the same dimensions as Gi. Then, we add bCi and Gi to obtain the enhanced feature, which is fed into a 5×5×5 3D convolutional layer to get spatial attention weights Wi, i.e., Wi = σ(f 5×5×5(Gi + bCi)), (1) where σ denotes the Sigmoid function and f 5×5×5 denotes the 5×5×5 3D convolution operation. The attention weights reflect the importance of each spatial location that needs to be emphasized. Hence, we fuse the discrepancy feature and the spatial-contextual feature as G ′ i+1 = Wi ⊙bCi + Gi, (2) where ‘⊙’ denotes the Hadamard product. Next, G ′ i+1 is fed into a 5×5×5 3D deconvolution layer to obtain Gi+1. Finally, we get G4 ∈R1×3×H×W and then perform a 3×3 convolutional operation on G4 to generate the final discrepancy map D ∈R3×H×W . Discrepancy Information Compensation Network As we previously mentioned, due to the information loss during inversion, the information involved in the latent code w (with the size of 18×512) is inadequate. Therefore, many existing methods extract additional information to compensate w. However, such a way cannot guarantee the preservation of image details because of the low dimensionality of w. In this paper, we introduce to compensate for the information loss in both w and the early layer of the GAN generator. The architecture of DICN is illustrated in Fig. 1. 
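The spatial attention fusion used in the discrepancy map learning module (Eqs. (1)–(2)) can be sketched minimally in numpy. The learned 5×5×5 3D convolution f is passed in as a callable; the identity default below is an illustrative assumption, not the actual learned operator:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_fuse(G_i, C_hat_i, conv=lambda x: x):
    """Eq. (1): W_i = sigma(f(G_i + C_hat_i));
    Eq. (2): G'_{i+1} = W_i * C_hat_i + G_i (elementwise/Hadamard product)."""
    W_i = sigmoid(conv(G_i + C_hat_i))
    return W_i * C_hat_i + G_i

# Toy tensors: G_i all zeros, upsampled context C_hat_i all ones.
G = np.zeros((4, 4))
C_hat = np.ones((4, 4))
out = attention_fuse(G, C_hat)  # every entry equals sigmoid(1)
```

The attention weights gate the context feature Ĉ_i before it is added back to the discrepancy feature G_i, so uninformative regions are suppressed rather than blindly summed.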
On the one hand, to compensate the latent code w with the discrepancy map D, we leverage a conventional linear affine transformation. As shown in Fig. 1, we apply two 3×3 convolutional layers to the discrepancy map obtained from DIPN, predicting the scaling parameter γ ∈ R^{18×512} and the displacement parameter θ ∈ R^{18×512}, i.e.,

γ = f_g(D), θ = f_t(D),  (3)

where f_g and f_t denote the convolutional layers. Based on the above, we apply a channel scaling operation to w using the scaling parameter γ, followed by a channel displacement operation using the displacement parameter θ. This process effectively filters out uninformative features while compensating for insufficient detailed features. The affine transformation expands the representation space of the generator and facilitates the extraction of high-fidelity features in StyleGAN. The above process is expressed as

w_enhanced(i) = γ_i ⊙ w_i + θ_i,  (4)

where w_enhanced(i) is the i-th row of the enhanced latent code w_enhanced; w_i is the i-th row of w; and γ_i and θ_i are the i-th rows of γ and θ, respectively. On the other hand, to compensate for the information loss in the generator, instead of using an affine transformation, we adopt a fusion strategy similar to that in DIPN. We apply the discrepancy map to compensate the output of an early layer of the generator (we choose layer 7 in this paper, as suggested by HFGI). We denote the output of this layer as the latent map F (of size 512×64×64). We adopt the spatial attention fusion module to adaptively select the important parts and suppress the unimportant parts of the features. Then, we add the attention map to F, that is,

F_enhanced = σ(f_c1(F + f_c2(D))) ⊙ f_c2(D) + F,  (5)

where f_c1 and f_c2 denote convolutional blocks (f_c1 contains two 3×3 convolutional layers and f_c2 contains four 3×3 convolutional layers).
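The two compensation paths of DICN (Eqs. (4) and (5)) can be written compactly in numpy. The convolutional blocks f_g, f_t, f_c1, f_c2 are learned in the real model; the identity defaults here are illustrative assumptions only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def enhance_latent_code(w, gamma, theta):
    """Eq. (4): row-wise scaling and displacement of the 18x512 latent code."""
    return gamma * w + theta

def enhance_latent_map(F, D_feat, f_c1=lambda x: x, f_c2=lambda x: x):
    """Eq. (5): attention-weighted compensation of the layer-7 latent map F."""
    comp = f_c2(D_feat)
    return sigmoid(f_c1(F + comp)) * comp + F

# Toy check of the code path: gamma=2, theta=-1 maps an all-ones code to ones.
w = np.ones((18, 512))
gamma = np.full((18, 512), 2.0)
theta = np.full((18, 512), -1.0)
w_enh = enhance_latent_code(w, gamma, theta)
```

Note the latent-code path is a linear (affine) transform, while the latent-map path is nonlinear through the sigmoid gate, which is the distinction the paper draws against HFGI's linear compensation.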
F_enhanced denotes the enhanced latent map, which is further fed to the next layer of the generator. Note that some methods (such as FS (Yao et al. 2022), CLCAE, and HFGI) also operate on both the latent code and the generator. However, the differences between these methods and ours are significant. FS and CLCAE follow the "compensate-and-edit" paradigm, where they train an additional encoder to generate a new latent code while obtaining a new latent map to replace one layer of the generator. Although this improves fidelity, the editability is affected (since the new latent space differs greatly from the W space, which is proven to have excellent editability). HFGI follows the "edit-and-compensate" paradigm and leverages a pixel-level distortion map (from the original image and the initial edited image) to compensate the latent map by a linear transformation, limiting the editing performance. Moreover, the above methods ignore spatial-contextual information, resulting in the loss of some image details or the introduction of artifacts. In contrast, we generate a spatial-contextual guided discrepancy map (from the original image and the initial reconstructed image) to compensate the latent map by a nonlinear transformation, adaptively fusing features for better compensation and expanding the representation space.

Attribute Editing Operations

We perform operations on both the enhanced latent code and the enhanced latent map for attribute editing. To be specific, the enhanced latent code w_enhanced is first modified by a mainstream latent space editing method (Härkönen et al. 2020; Shen et al. 2020) to obtain the edited latent code (used as the input of the GAN generator). Then, the enhanced latent map is modified as follows. We first obtain the initial reconstructed image R_o and the initial edited image E_o. Then the latent map F_R of the reconstructed image and the latent map F_E of the edited image at layer 7 of the generator are extracted.
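Latent space editing methods such as InterfaceGAN and GANSpace move the code along a semantic direction. A hedged sketch of this operation (the direction vector and step size alpha below are illustrative placeholders, not values from the paper):

```python
import numpy as np

def edit_latent_code(w_enhanced, direction, alpha):
    """Shift the enhanced latent code along a semantic direction
    (e.g., age or pose) by step size alpha, InterfaceGAN-style."""
    return w_enhanced + alpha * direction

# Toy example: a zero code moved half a step along an all-ones "direction".
w_enh = np.zeros((18, 512))
direction = np.ones((18, 512))  # illustrative semantic direction
w_edit = edit_latent_code(w_enh, direction, alpha=0.5)
```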
Assume that the enhanced latent maps of F_R and F_E are represented as F_R_enhanced and F_E_enhanced, respectively. During attribute editing, we expect the difference between F_R_enhanced and F_E_enhanced to be close to that between F_R and F_E in the latent space, so as to ensure editability. Hence, instead of generating F_E_enhanced via Eq. (5), we add the difference between F_E and F_R to F_R_enhanced to predict F_E_enhanced, that is,

F_E_enhanced = F_R_enhanced + F_E − F_R,  (6)

where F_R_enhanced is obtained via Eq. (5).

Joint Loss

During the training stage, the parameters of the generator are frozen so that we can focus on optimizing the encoding process. We design a joint loss to achieve high reconstruction quality and good editability. For the reconstruction quality, we define the reconstruction loss between the original image I and the reconstructed image R_f as

L_rec = L_2(I, R_f) + λ_LPIPS L_LPIPS(I, R_f) + λ_ID L_ID(I, R_f),  (7)

where L_2(I, R_f) denotes the Euclidean distance between I and R_f to evaluate the structural similarity; L_LPIPS(I, R_f) denotes the LPIPS loss (Zhang et al. 2018) to evaluate the perceptual similarity; and L_ID = 1 − ⟨F(I), F(R_f)⟩ explicitly encourages the encoder to maximize the cosine similarity between I and R_f, which measures identity consistency. Here, F(·) represents the feature extractor (we use the pre-trained ArcFace model (Deng et al. 2019) for the face domain and the pre-trained ResNet-50 model (Tov et al. 2021) for other domains). λ_LPIPS and λ_ID indicate the balancing weights. To ensure editability, we also incorporate the editing loss, defined as

L_edit = L_1(w, w_enhanced) + L_1(F, F_enhanced),  (8)

where L_1(·, ·) denotes the L1 norm. This loss constrains the distances between w and w_enhanced as well as those between F and F_enhanced. In this way, we keep the latent codes close to the W space, which is beneficial for maintaining editability, as suggested in (Tov et al. 2021).
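The editing-time map update (Eq. (6)) and the loss terms (Eqs. (7)–(8)) can be sketched as follows. The feature extractor behind L_ID is replaced here by plain unit-normalized vectors, and all shapes are toy-sized, so this is a numerical illustration rather than the trained pipeline:

```python
import numpy as np

def edited_latent_map(F_R_enh, F_E, F_R):
    """Eq. (6): carry the editing offset over to the enhanced map."""
    return F_R_enh + F_E - F_R

def l2_loss(a, b):
    return float(np.sum((a - b) ** 2))

def l1_loss(a, b):
    return float(np.sum(np.abs(a - b)))

def id_loss(feat_a, feat_b):
    """1 - cosine similarity between identity features (Eq. (7)'s L_ID)."""
    a = feat_a / np.linalg.norm(feat_a)
    b = feat_b / np.linalg.norm(feat_b)
    return float(1.0 - a @ b)

def edit_loss(w, w_enh, F, F_enh):
    """Eq. (8): keep the enhanced code and map close to the originals."""
    return l1_loss(w, w_enh) + l1_loss(F, F_enh)
```

Minimizing id_loss drives the cosine similarity of the two identity features toward 1, i.e., it maximizes identity consistency between the input and its reconstruction.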
Finally, the joint loss is expressed as

L_joint = L_rec + λ_edit L_edit,  (9)

where λ_edit indicates the balancing weight.

Figure 3: Qualitative comparison between SDIC and several state-of-the-art methods on image inversion and editing tasks. More results are shown in Supplement B.

Experiments

Experimental Settings

Datasets. We evaluate our method on two domains: human faces and cars. For the face domain, we adopt the widely-used FFHQ dataset (Karras, Laine, and Aila 2019) for training and the CelebA-HQ dataset (Karras et al. 2017; Liu et al. 2015) for testing. For the car domain, we use the Stanford Cars dataset (Krause et al. 2013) for training and testing.

Comparison Methods. We compare our SDIC method with various GAN inversion methods, including the optimization-based method PTI (Roich et al. 2022) and the encoder-based methods e4e (Tov et al. 2021), HyperStyle (Alaluf et al. 2022), HFGI (Wang et al. 2022), FS (Yao et al. 2022), and CLCAE (Liu, Song, and Chen 2023). All the results of these comparison methods are obtained using the trained models officially provided by the corresponding authors.

Implementation Details. We adopt InterfaceGAN (Shen et al. 2020) for face image editing and GANSpace (Härkönen et al. 2020) for car image editing. We use the pre-trained StyleGAN generator and the e4e encoder in our method. The sizes of the input and output of the network are both 1024×1024. λ_LPIPS, λ_ID, and λ_edit are empirically set to 0.8, 0.2, and 0.5, respectively. We use the Ranger optimizer (Yong et al. 2020) with a learning rate of 0.001 and a batch size of 2. Our model is trained for 100,000 steps on an NVIDIA GeForce RTX 3080 GPU.

Reconstruction Results

Quantitative Evaluation. We quantitatively compare our method with state-of-the-art GAN inversion methods. The results are shown in Table 1.
We use SSIM, PSNR, and L2 to measure the reconstruction error, and LPIPS (Zhang et al. 2018) for the perceptual quality. We also use CurricularFace (Huang et al. 2020) to extract the features of two images and calculate their cosine similarity as the ID distance, which measures the identity similarity between each reconstructed image and the original input image. These metrics are evaluated on the first 1,000 images of CelebA-HQ. In addition, we report the inference time of each method. As we can see, our method significantly outperforms both the encoder-based methods (HFGI, FS, CLCAE, and HyperStyle) and the optimization-based method (PTI) in terms of reconstruction quality (ID, SSIM, PSNR, LPIPS, and L2). Notably, our method achieves a much faster inference speed than the optimization-based method. In short, SDIC obtains the highest fidelity at a fast inference speed among all the competing methods.

Qualitative Evaluation. Fig. 3 gives the qualitative comparison between SDIC and several state-of-the-art methods. For the face domain, SDIC effectively preserves background and foreground details. In the first row of Fig. 3, only SDIC preserves the indentations on the cheeks. In the third row of Fig. 3, SDIC and PTI successfully reconstruct the earrings and bangs. For the car domain, SDIC shows superior preservation of image details such as the car lights, the front, and reflective parts compared with the other encoder-based methods. The above results show the effectiveness of SDIC.

Editing Results

Quantitative Evaluation. There is no intuitive measure to evaluate editing performance. Therefore, we calculate the ID distance (Huang et al. 2020) between the original image and the manipulated one. Meanwhile, we conduct a user study to evaluate the editing results, as done in HFGI and CLCAE.
Specifically, we collect 56 edited images of faces and cars for all the competing methods and ask 30 participants to choose the images with high fidelity and appropriate manipulation. The results are given in Table 1. Our proposed method achieves the same ID as PTI and greatly outperforms the other competing methods in terms of the user study.

Qualitative Evaluation. The image editing results are given in Fig. 3. Compared with PTI, e4e, and HyperStyle, SDIC retains more detailed information in the editing results (e.g., the hat in the second row of Fig. 3, and the shadow and gap on the ground in the eighth row of Fig. 3) while maintaining high image quality. In contrast, HFGI exhibits numerous artifacts (e.g., the neck in the second row and the right-side hair in the fourth row of Fig. 3). FS loses facial details and cannot edit car images well. CLCAE shows poor editing results (e.g., distortion in the neck in the fourth row of Fig. 3) and small attribute changes (e.g., minimal age reduction in the second row and almost no color change in the sixth row of Fig. 3). In general, our method effectively balances fidelity and editability by incorporating spatial-contextual information.

Ablation Studies

Influence of Spatial-Contextual Information. We compare the reconstruction results obtained by our method with and without the two-branch spatial-contextual hourglass module. The results are given in Fig. 4(a).
For our method without the two-branch spatial-contextual hourglass module, we concatenate the two input images and directly feed them into the discrepancy map learning hourglass module. We select two images under different conditions (i.e., a side face and a deformed mouth). Without the two-branch spatial-contextual hourglass module, our method shows substantial artifacts on the edge of the portrait and low fidelity for the mouth. In contrast, with the module, our method is more robust under different conditions, effectively removing artifacts and preserving more spatial details.

Table 1: Quantitative comparison for inversion/editing quality on the face domain. The value of User Study is the percentage of users that chose this method. The best results are in bold.

Method      | Inversion: ID↑  SSIM↑  PSNR↑   LPIPS↓  L2(MSE)↓  Time(s)↓ | Editing: ID↑  User Study↑
PTI         | 0.832  0.703  24.355  0.110   0.012   201.357          | 0.726  22.500%
e4e         | 0.495  0.537  19.390  0.206   0.050   0.033            | 0.452  18.750%
HyperStyle  | 0.737  0.624  22.513  0.104   0.025   0.710            | 0.663  21.668%
HFGI        | 0.606  0.641  22.372  0.136   0.026   0.108            | 0.610  22.918%
FS          | 0.815  0.648  23.797  0.934   0.015   0.568            | 0.492  17.083%
CLCAE       | 0.708  0.725  25.665  0.083   0.012   0.125            | 0.518  22.918%
SDIC        | 0.871  0.815  27.672  0.057   0.007   0.321            | 0.726  65.832%

Figure 4: Ablation study results on the face domain. In the figure, "w/" and "w/o" denote "with" and "without", respectively. "SC" denotes "Spatial-Contextual". "EC" and "CE" denote the "Edit-and-Compensate" paradigm and the "Compensate-and-Edit" paradigm, respectively.

Influence of Spatial Attention Fusion in DICN. To compensate the latent map with the discrepancy information, we leverage the spatial attention fusion module. We evaluate the performance of our method with the spatial attention fusion module and with the conventional affine transformation. The results are given in Fig. 4(b).
Compared with our method using the spatial attention fusion module, the variant using the conventional affine transformation tends to give worse results: some details of the generated image are overemphasized. For example, the wrinkles at the corners of the eyes look unnatural, and many reticulated lines appear in the background. This is because the affine transformation is only a linear transformation, leading to limited compensation. In contrast, our model with the spatial attention fusion module adaptively selects informative parts and suppresses uninformative parts, generating high-quality images with a natural and smooth appearance.

Influence of the "Compensate-and-Edit" Paradigm. We compare the "compensate-and-edit" and "edit-and-compensate" paradigms. Specifically, we design a variant of our method based on the "edit-and-compensate" paradigm. That is, this variant predicts the discrepancy information between the initial edited image and the original image with DIPN and reconstructs the image with DICN. A comparison between this variant and our method is shown in Fig. 4(c). The variant gives worse editing results than our method: it treats the attribute changes as low-fidelity regions that do not match the original image, so the network tends to correct them, which reduces editability. In contrast, our method retains more detail after editing by following the paradigm of "first compensating and then editing". These experiments show the effectiveness of the "compensate-and-edit" paradigm. More ablation studies and applications of our method can be found in Supplements A and C.

Conclusion

In this paper, we introduce a novel SDIC method, consisting of DIPN and DICN.
Following the "compensate-and-edit" paradigm, SDIC first generates the spatial-contextual guided discrepancy information between the original image and the initial reconstructed image with DIPN, and then compensates both the latent code and the GAN generator with this discrepancy information via DICN. Experimental results show that our method strikes a good distortion-editability trade-off at a fast inference speed, demonstrating its effectiveness and efficiency. One limitation of our method is the difficulty in handling large manipulations (see Supplement D for failure cases). We intend to explicitly align the edited image and the original image in future work.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 62372388, 62071404, and U21A20514, and by the Open Research Projects of Zhejiang Lab under Grant 2021KG0AB02.

References

Abdal, R.; Qin, Y.; and Wonka, P. 2019. Image2StyleGAN: How to embed images into the StyleGAN latent space? In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 4432–4441.
Abdal, R.; Qin, Y.; and Wonka, P. 2020. Image2StyleGAN++: How to edit the embedded images? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8296–8305.
Abdal, R.; Zhu, P.; Mitra, N. J.; and Wonka, P. 2021. StyleFlow: Attribute-conditioned exploration of StyleGAN-generated images using conditional continuous normalizing flows. ACM Transactions on Graphics (TOG), 40(3): 1–21.
Alaluf, Y.; Patashnik, O.; and Cohen-Or, D. 2021. ReStyle: A residual-based StyleGAN encoder via iterative refinement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 6711–6720.
Alaluf, Y.; Tov, O.; Mokady, R.; Gal, R.; and Bermano, A. 2022. HyperStyle: StyleGAN inversion with hypernetworks for real image editing.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 18511–18521.
Bau, D.; Strobelt, H.; Peebles, W.; Wulff, J.; Zhou, B.; Zhu, J.-Y.; and Torralba, A. 2020. Semantic photo manipulation with a generative image prior. arXiv preprint arXiv:2005.07727.
Bau, D.; Zhu, J.-Y.; Wulff, J.; Peebles, W.; Strobelt, H.; Zhou, B.; and Torralba, A. 2019. Seeing what a GAN cannot generate. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 4502–4511.
Chang, J.-R.; and Chen, Y.-S. 2018. Pyramid stereo matching network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5410–5418.
Choi, M. J.; Lim, J. J.; Torralba, A.; and Willsky, A. S. 2010. Exploiting hierarchical context on a large database of object categories. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 129–136.
Collins, E.; Bala, R.; Price, B.; and Susstrunk, S. 2020. Editing in style: Uncovering the local semantics of GANs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5771–5780.
Deng, J.; Guo, J.; Xue, N.; and Zafeiriou, S. 2019. ArcFace: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4690–4699.
Dinh, T. M.; Tran, A. T.; Nguyen, R.; and Hua, B.-S. 2022. HyperInverter: Improving StyleGAN inversion via hypernetwork. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11389–11398.
Goetschalckx, L.; Andonian, A.; Oliva, A.; and Isola, P. 2019. GANalyze: Toward visual definitions of cognitive image properties. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 5744–5753.
Gu, J.; Shen, Y.; and Zhou, B. 2020. Image processing using multi-code GAN prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3012–3021.
Härkönen, E.; Hertzmann, A.; Lehtinen, J.; and Paris, S. 2020. GANSpace: Discovering interpretable GAN controls. Conference on Neural Information Processing Systems (NeurIPS), 33: 9841–9850.
Hu, X.; Huang, Q.; Shi, Z.; Li, S.; Gao, C.; Sun, L.; and Li, Q. 2022. Style transformer for image inversion and editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11337–11346.
Huang, Y.; Wang, Y.; Tai, Y.; Liu, X.; Shen, P.; Li, S.; Li, J.; and Huang, F. 2020. CurricularFace: Adaptive curriculum learning loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5901–5910.
Hutchinson, B.; Denton, E.; Mitchell, M.; and Gebru, T. 2019. Detecting bias with generative counterfactual face attribute augmentation. arXiv preprint arXiv:1906.06439.
Kang, K.; Kim, S.; and Cho, S. 2021. GAN inversion for out-of-range images with geometric transformations. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 13941–13949.
Karras, T.; Aila, T.; Laine, S.; and Lehtinen, J. 2017. Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196.
Karras, T.; Laine, S.; and Aila, T. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4401–4410.
Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; and Aila, T. 2020. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8110–8119.
Krause, J.; Stark, M.; Deng, J.; and Fei-Fei, L. 2013. 3D object representations for fine-grained categorization.
In Proceedings of the International IEEE Workshop on 3D Representation and Recognition, 554–561.
Li, J.; Wei, Y.; Liang, X.; Dong, J.; Xu, T.; Feng, J.; and Yan, S. 2016. Attentive contexts for object detection. IEEE Transactions on Multimedia (TMM), 19(5): 944–954.
Liu, H.; Song, Y.; and Chen, Q. 2023. Delving StyleGAN inversion for image editing: A foundation latent space viewpoint. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10072–10082.
Liu, Z.; Luo, P.; Wang, X.; and Tang, X. 2015. Deep learning face attributes in the wild. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 3730–3738.
Pidhorskyi, S.; Adjeroh, D. A.; and Doretto, G. 2020. Adversarial latent autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14104–14113.
Richardson, E.; Alaluf, Y.; Patashnik, O.; Nitzan, Y.; Azar, Y.; Shapiro, S.; and Cohen-Or, D. 2021. Encoding in style: A StyleGAN encoder for image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2287–2296.
Roich, D.; Mokady, R.; Bermano, A. H.; and Cohen-Or, D. 2022. Pivotal tuning for latent-based editing of real images. ACM Transactions on Graphics (TOG), 42(1): 1–13.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 234–241.
Shen, Y.; Gu, J.; Tang, X.; and Zhou, B. 2020. Interpreting the latent space of GANs for semantic face editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9243–9252.
Shen, Y.; and Zhou, B. 2021. Closed-form factorization of latent semantics in GANs.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1532–1540.
Tov, O.; Alaluf, Y.; Nitzan, Y.; Patashnik, O.; and Cohen-Or, D. 2021. Designing an encoder for StyleGAN image manipulation. ACM Transactions on Graphics (TOG), 40(4): 1–14.
Voynov, A.; and Babenko, A. 2020. Unsupervised discovery of interpretable directions in the GAN latent space. arXiv preprint arXiv:2002.03754.
Wang, B.; and Ponce, C. R. 2021. The geometry of deep generative image models and its applications. arXiv preprint arXiv:2101.06006.
Wang, T.; Zhang, Y.; Fan, Y.; Wang, J.; and Chen, Q. 2022. High-fidelity GAN inversion for image attribute editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11379–11388.
Woo, S.; Park, J.; Lee, J.-Y.; and Kweon, I. S. 2018. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), 3–19.
Xia, W.; Zhang, Y.; Yang, Y.; Xue, J.-H.; Zhou, B.; and Yang, M.-H. 2022. GAN inversion: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Yao, X.; Newson, A.; Gousseau, Y.; and Hellier, P. 2022. A style-based GAN encoder for high fidelity reconstruction of images and videos. European Conference on Computer Vision (ECCV).
Yong, H.; Huang, J.; Hua, X.; and Zhang, L. 2020. Gradient centralization: A new optimization technique for deep neural networks. In European Conference on Computer Vision (ECCV), 635–652. Springer.
Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 586–595.
Zhu, J.; Shen, Y.; Zhao, D.; and Zhou, B. 2020a. In-domain GAN inversion for real image editing. In European Conference on Computer Vision (ECCV), 592–608.
Zhu, P.; Abdal, R.; Qin, Y.; Femiani, J.; and Wonka, P. 2020b.
Improved StyleGAN embedding: Where are the good latents? arXiv preprint arXiv:2012.09036.
Self-Distillation Regularized Connectionist Temporal Classification Loss for Text Recognition: A Simple Yet Effective Approach

Ziyin Zhang*, Ning Lu*, Minghui Liao, Yongshuai Huang, Cheng Li, Min Wang, Wei Peng
Huawei Technologies Co., Ltd., Shenzhen, China
{zhangziyin1, luning12, liaominghui1, huangyongshuai1, licheng81, wangmin5, peng.wei1}@huawei.com

Abstract

Text recognition methods are developing rapidly. Advanced techniques, e.g., powerful modules, language models, and un- and semi-supervised learning schemes, consecutively push the performance on public benchmarks forward. However, the problem of how to better optimize a text recognition model from the perspective of loss functions is largely overlooked. CTC-based methods, widely used in practice due to their good balance between performance and inference speed, still grapple with accuracy degradation. This is because CTC loss emphasizes the optimization of the entire sequence target while neglecting to learn individual characters. We propose a self-distillation scheme for CTC-based models to address this issue. It incorporates a frame-wise regularization term in CTC loss to emphasize individual supervision, and leverages the maximizing-a-posteriori of latent alignment to solve the inconsistency problem that arises in distillation between CTC-based models. We refer to the regularized CTC loss as Distillation Connectionist Temporal Classification (DCTC) loss. DCTC loss is module-free, requiring no extra parameters, no longer inference lag, and no additional training data or phases. Extensive experiments on public benchmarks demonstrate that DCTC can boost text recognition model accuracy by up to 2.6% without any of these drawbacks.

Introduction

Text Recognition (TR) is an indispensable technology that facilitates intelligent auto-driving (Zhu et al. 2018), reveals precise semantic information (Chen et al. 2021b) for sensitive information auditing, saves labor in financial processes, etc.
Methods for scene text recognition (STR) have been blooming at a breathless pace in recent years. For example, (Du et al. 2022; Da, Wang, and Yao 2022; Lu et al. 2021) focus on designing sophisticated architectures with powerful modules; (Wang et al. 2022a,b) integrate a language model into a text recognition model to enable explicit language modeling; and (Patel, Allebach, and Qiu 2023; Yang et al. 2022) learn better sequential features with un-supervised or semi-supervised learning schemes by leveraging large amounts of unlabeled or partially labeled data. However, the problem of how to better optimize a text recognition model from the perspective of loss functions has been left out in the cold. It also deserves substantial effort, since a dedicated loss function can be free of extra parameters, extra inference latency, extra training data, and extra training phases.

*These authors contributed equally.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: An illustration of optimization and distillation on CTC- and attention-based models; it also shows the alignment inconsistency problem.

Recent text recognition methods are often supervised by two loss functions, the Connectionist Temporal Classification (CTC) loss and the Cross-Entropy (CE) loss, which correspond to CTC-based and attention-based models, respectively. As illustrated in Fig. 1, CTC loss and CE loss optimize models in a Prediction-Target non-Aligned (PTnA) and a Prediction-Target Aligned (PTA) mode, respectively. Although much recent research empirically shows that CE-based models, which run in the PTA mode, can outperform CTC-based models (Cong et al. 2019; Shi et al. 2016; Baek et al. 2019), CTC-based models have three non-negligible advantages: 1) The CTC decoder is more robust to varying input sequence lengths than the attention-based decoder (Cong et al.
2019; Chen et al. 2021a); 2) Compared to attention-based models, CTC-based models are free of auto-regression and decode every time-step simultaneously, which achieves better inference efficiency (Long, He, and Yao 2021; Chen et al. 2021a); 3) Due to its concise model design (Li et al. 2022; Kuang et al. 2021), CRNN (Shi, Bai, and Yao 2017), the classical CTC-based TR model, is still a mainstream industrial model. These practical advantages draw our research focus back to the CTC loss function and motivate this paper. CTC loss models the negative log total probability of all feasible paths that can be collapsed into the label sequence. However, some of these paths are more plausible than others; they correspond to particular alignments of all positions along the sequence. Once such alignments are discovered and additionally trained on, the model should benefit from them. The process of discovering and training with these more plausible alignments is known as Knowledge Distillation (KD) (Hinton, Vinyals, and Dean 2015), and the more plausible alignments are a form of "dark knowledge" (Hinton, Vinyals, and Dean 2015) in the context of KD. Nevertheless, a common issue when applying KD to CTC-based models is "alignment inconsistency" (Ding, Chen, and Huo 2020). This issue occurs when the features or outputs of the teacher model are inaccurate or inconsistent. It can arise because the teacher alignment cannot be guaranteed to be fully correct or consistent during training or across multiple teacher models. As a result, it can negatively impact the performance of the distillation process. The key to successful distillation on CTC models is to find the proper alignments, i.e., the latent alignments. Previous works (Ding, Chen, and Huo 2020; Huang et al. 2018) are module-dependent: they estimate the latent alignment directly from other teacher models' outputs or intermediate features.
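The idea of additionally supervising individual frames with an estimated latent alignment can be sketched as a per-frame negative log-likelihood added to the sequence-level CTC term. The combination weight `lam` and the alignment passed in below are illustrative assumptions for this sketch, not the paper's actual MAP estimate:

```python
import numpy as np

def framewise_nll(log_probs: np.ndarray, alignment: np.ndarray) -> float:
    """Per-frame supervision: negative log-likelihood of an estimated
    latent alignment (one class index per time step).
    log_probs: (T, C) log-softmax outputs; alignment: (T,) class indices."""
    T = log_probs.shape[0]
    return float(-np.mean(log_probs[np.arange(T), alignment]))

def distillation_regularized_loss(ctc_loss_value: float,
                                  log_probs: np.ndarray,
                                  alignment: np.ndarray,
                                  lam: float = 1.0) -> float:
    # `lam` is an illustrative weight; the sequence-level CTC term is
    # assumed to come from a standard CTC implementation elsewhere.
    return ctc_loss_value + lam * framewise_nll(log_probs, alignment)
```

The frame-wise term gives each time step an individual target, which is exactly the kind of supervision plain CTC loss lacks; the difficulty the paper addresses is how to obtain a trustworthy alignment for this term.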
To obtain more accurate latent alignments, these methods often require complex and well-trained teacher models. To further stabilize the estimate, some (Kim and Rush 2016; Ding, Chen, and Huo 2019) use specifically designed heuristic mechanisms to adjust the original estimate or take an ensemble of a group of raw estimates. Some methods (Kurata and Audhkhasi 2018; Ding, Chen, and Huo 2019) use an ensemble of teachers to improve guidance accuracy. However, 1) they use extra complex teacher models, which increases the demand for computing resources; 2) they can hardly relieve the intrinsic inaccuracy that results from directly taking the outputs of the teacher models as the latent alignment; and 3) they incur distillation instability when using ensembles of teachers because of inconsistent peak positions (Kurata and Audhkhasi 2018), which causes unstable collapsed latent alignments.

We propose the Distillation CTC loss (DCTC), a frame-wise self-distillation scheme for CTC-based models. By modeling the latent alignment distribution as maximizing the posterior probability given the ground truth and the model outputs, we derive a simple, effective, and module-free method to generate a high-quality estimated latent alignment at each training iteration. The method is closed-form and requires no additional module. In summary, our contributions are as follows:

1. We propose a self-distillation scheme, DCTC, that conducts frame-wise regularization for CTC-based models. It can be applied directly to existing CTC-based text recognition models without introducing extra teacher models, training phases, or training data.

2. To our knowledge, this is the first work that uses MAP to perform latent alignment estimation. Our method addresses the alignment inconsistency problem well by generating high-quality estimated latent alignments for most of the training time, which is supported by our quantitative analysis.

3.
Exhaustive experiments over models and CTC loss variants demonstrate that our proposed DCTC loss effectively boosts the performance of various text recognition models on both English and Chinese text recognition benchmarks.

Related Works

Text Recognition

Text recognition is vital in the Optical Character Recognition (OCR) area. In the deep-learning era, designing powerful modules has attracted a lot of interest. Shi et al. (Shi, Bai, and Yao 2017) proposed a segmentation-free method, CRNN, which models sequential relationships between frames and employs the CTC loss (Graves et al. 2006) to adaptively align features to targets while training a neural network. This method gained huge success and opened a new era for STR. (Du et al. 2022) used ViT (Dosovitskiy et al. 2021) to develop a single powerful visual model for recognition; it also employs CTC loss to align targets. (Lu et al. 2021; Fang et al. 2021; Yu et al. 2020; Bhunia et al. 2021b) formulated the text recognition problem as a translation task that translates a cropped image into a string, using an encoder-decoder framework along with an attention mechanism (Baek et al. 2019). Recently, thanks to self-attention (Vaswani et al. 2017), (Lu et al. 2021; Li et al. 2021) proposed transformer-based STR models to solve the attention drift problem (Cheng et al. 2017). Besides, Liao et al. (Liao et al. 2019) proposed to segment and recognize text from a two-dimensional perspective. Another active direction places its hope in language models. (Qiao et al. 2020) claimed that the encoder-decoder framework only focuses on local visual features while ignoring global semantic information, so they used a pre-trained language model to guide the decoding process and improve the model's performance. (Fang et al. 2021) integrated a language model into a vision-based recognition model to enhance its representational ability, iteratively refining the model's prediction. A more advanced work by Bautista et al.
(Bautista and Atienza 2022) used permutation language modeling to refine recognition results. To leverage large unlabelled data, (Aberdam et al. 2021) proposed a contrastive pre-training scheme to boost performance. Recently, (Guan et al. 2022a,b; Yang et al. 2022) used self-supervised frameworks to refine visual and language features at a fine granularity to improve recognition accuracy.

CTC-Related Text Recognition Methods

Many endeavors have been devoted to improving CTC-based text recognition models. (Feng, Yao, and Zhang 2019) proposed FocalCTC to target the imbalance problem of Chinese words, introducing the focal loss (Lin et al. 2020) into CTC loss to modulate the importance of hard and easy word examples. Naturally, CTC loss is not designed for 2D spatial prediction. Xie et al. (Xie et al. 2019) proposed an easy-to-apply aggregated cross-entropy (ACE) loss to better solve 2D prediction problems with a fast and lightweight implementation. (Wan et al. 2019) extended vanilla CTC to 2D-CTC to adapt to 2D text images by modeling a spatial decoding strategy. To encourage cohesive features, center loss (Wen et al. 2016) was introduced into CTC loss as the Center-CTC loss (Du et al. 2021). (Gao, Zhang, and Liu 2021) provided an expectation-maximization view of CTC loss and a novel voting algorithm to improve decoding performance. Based on maximum entropy regularization (Jaynes 1957), (Liu, Jin, and Zhang 2018) proposed EnCTC to address the peaky distribution problem (Graves et al. 2006); VarCTC (Chao, Chen, and Chu 2020) was also proposed to relieve this problem. Tanaka et al. (Tanaka, Ono, and Furuhata 2019) used the framework of virtual adversarial training (Miyato et al. 2017) to develop FDS, a fast regularization algorithm on CTC loss that smooths posterior distributions around training data points.
Knowledge Distillation on Text Recognition or CTC-Based Models

Many works have attempted to apply KD to text recognition or CTC-based models. (Bhunia et al. 2021a) creatively employed a knowledge distillation loss to train a unified model for scene and handwritten text recognition tasks; however, this method needs two additional teacher models, leading to a complicated training procedure. (Takashima, Li, and Kawai 2018) investigated frame- and sequence-level KD on CTC-based acoustic models. (Kim and Rush 2016) proposed word-level and sequence-level distillation methods and applied them to the neural machine translation task; they used beam search to generate hypotheses from output probabilities and kept a K-best list to approximate the teacher distribution. (Ding, Chen, and Huo 2019) used N-best hypotheses imitation to perform frame- and segment-wise distillation from a complex teacher model. (Kurata and Audhkhasi 2018) proposed an alignment-consistent ensemble technique to relieve the unstable ensemble alignment problem. (Moriya et al. 2020) used self-distillation KD on a CTC-based ASR system, employing a Transformer (Vaswani et al. 2017) module to generate the latent alignment. Recently, CCD (Guan et al. 2022b) used a self-distillation module to perform character-level distillation, and SIGA (Guan et al. 2022a) used a self-supervised implicit glyph attention module to relieve the alignment-drift issue, which can also be seen as character-level self-distillation. The aforementioned methods are all module-dependent and need extra teacher models to provide an accurate estimated latent alignment. Also, they can hardly give a closed form for the estimated latent alignments.

Methods

A key problem in CTC distillation is alignment inconsistency (Kurata and Audhkhasi 2018). The problem can be described as finding a proper latent alignment z ∈ V'^T, from which the student model can distill and whose length equals that of the logits sequence U ∈ R^{(K+1)×T}.
V is the character vocabulary, V' = V ∪ {blank} is the augmented vocabulary in the CTC setting, and |V| = K, |V'| = K + 1. L is the length of the label sequence, and T is the number of time steps, i.e., the length of the logits sequence. The CTC setting requires T > L, which causes non-unique alignments and is the source of alignment inconsistency. We need a way to estimate proper alignments for U to perform frame-wise KD, which motivates our work.

The Distillation Loss Term in the CTC Scenario

Given the logits sequence U ∈ R^{(K+1)×T}, the output probability sequence P = Softmax_{V'}(U), the ground-truth label sequence y ∈ V^L, and the latent alignment z ∈ V'^T, the true label sequence y can be regarded as an oracle teacher from which we want the student logits sequence to distill. Directly doing so is impossible because the lengths of the true label sequence and the logits sequence differ (L < T). However, bridged by z as an agent, the distillation loss term can be formulated as:

L_distill(P, z) = L_CE(P, z) = −Σ_{t=1}^{T} log P(z_t, t)    (1)

The Distillation CTC loss is defined as follows:

L_DCTC(U, P, y, z) = L_CTC(U, y) + λ L_distill(P, z)    (2)

where λ is the coefficient controlling the amplitude of distillation. The question is how to obtain a proper latent alignment z. In a self-distillation scheme, it can be generated by a layer or an additional MLP head of the student model. Nevertheless, such module-dependent methods often yield poor estimates, as our experiments will show. To solve this problem, instead of using a module-dependent method, we deduce a closed-form estimate of z via Maximum A Posteriori (MAP) estimation.

Estimation of the Latent Alignment

For every t from 1 to T, we want to find the value of z_t that is most likely to be decoded into the given true label sequence y.
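Once some alignment z is fixed, the distillation term of Eq. (1) reduces to an ordinary frame-wise cross-entropy added to the CTC term, as in Eq. (2). A minimal NumPy sketch (toy shapes; the function names and λ default are ours, with λ = 0.025 taken from the English-task setting reported later):

```python
import numpy as np

def distill_loss(P, z):
    """Eq. (1): L_distill(P, z) = -sum_t log P(z_t, t).
    P: (K+1, T) per-frame probabilities over the augmented vocabulary V'.
    z: length-T latent alignment, one class index per time step."""
    T = P.shape[1]
    return -np.sum(np.log(P[z, np.arange(T)]))

def dctc_loss(l_ctc, P, z, lam=0.025):
    """Eq. (2): L_DCTC = L_CTC + lambda * L_distill."""
    return l_ctc + lam * distill_loss(P, z)

# toy example: 3 classes (blank + 2 characters), T = 4, uniform probabilities
P = np.full((3, 4), 1.0 / 3.0)
z = np.array([1, 0, 2, 0])
ld = distill_loss(P, z)  # equals 4 * log(3) for uniform P
```

The distillation term only reads the probability of the aligned class at each frame, which is why a per-frame alignment z of length T is required in the first place.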
Denote the best estimate of z as z*; then z* is given by

z* = argmin_{V'} (G / P)    (3)

where G is the gradient tensor of the CTC loss with respect to the logits sequence, that is:

G = ∂L_CTC(U, y) / ∂U    (4)

Figure 2: The Architecture of DCTC in the Self-distillation Scheme.

Derivation of Our Generation Method

Given an input (image) X and its corresponding true label sequence y, at time t, the z_t that is most likely to be decoded into y can be formulated as a MAP estimate:

z*_t = argmax_{z_t ∈ V'} p(y | z_t, X)
     = argmax_{z_t ∈ V'} [ p(z_t | y, X) p(y | X) / p(z_t | X) ]
     = argmax_{z_t ∈ V'} [ p(z_t | y, X) / p(z_t | X) ]    (5)

In Eq. (5), p(z_t | X) is the probability directly output by the model, and p(z_t | y, X) is the probability that character z_t ∈ V' appears at time step t when y and X are given. We now model p(z_t | y, X) in the CTC setting. Using the notation of (Graves et al. 2006), let α(·, ·) and β(·, ·) be the forward and backward tables, respectively, with α(·, ·), β(·, ·) ∈ R^{l'×T}, where l' = 2L + 1 is the length of the augmented true label sequence y' ∈ V'^{l'} used in computing α and β. The forward table element α(i, t) is the probability that the cumulative paths go through y'_i ∈ V' at time step t, counted from the start of y'. The backward table element β(i, t) is the probability that the cumulative paths go through y'_i at time step t, counted from the end of y'. As such, α(i, t)β(i, t)/P(y'_i, t) is the probability that the total paths go through y'_i at time step t over the whole time sequence. Denote S(i, t) = α(i, t)β(i, t)/P(y'_i, t) for simplicity. Then, for a specific class c, in the CTC setting we have:

p(z_t = c | X, y) ∝ Σ_{i: y'_i = c} S(i, t)    (6)

We need a way to connect Eq. (6) to a value that we can easily compute. Observe that the gradient of the CTC loss with respect to U is given by (Graves et al.
2006):

G(c, t) = ∂L_CTC / ∂U(c, t) = P(c, t) − [ Σ_{i: y'_i = c} S(i, t) ] / p(y | X)    (7)

So we have:

p(z_t = c | X, y) ∝ (P(c, t) − G(c, t)) p(y | X) ∝ P(c, t) − G(c, t)    (8)

Note that P(c, t) = p(z_t = c | X). Now, comparing Eq. (5) and Eq. (8), Eq. (5) becomes:

z*_t = argmax_{c ∈ V'} [ (P(c, t) − G(c, t)) / P(c, t) ]
     = argmax_{c ∈ V'} [ 1 − G(c, t) / P(c, t) ]
     = argmin_{c ∈ V'} [ G(c, t) / P(c, t) ]    (9)

The ultimate form, Eq. (3), is simply the vectorized version of Eq. (9), which is easy to implement; we use Eq. (3) to generate the latent alignment in practice. One might worry that Eq. (3) has singularities when P has zeros, which would impede the calculation of z*. This is not the case: there is no singularity at all. We can prove that G(c, t)/P(c, t) is bounded within [0, 1] as P(c, t) ranges from 0 to 1. The proof, however, is cumbersome; interested readers can refer to the supplementary materials. Our proposed estimation method generates remarkably high-quality latent alignments, as we show empirically in the experiments.

Summary of the DCTC Loss

The DCTC loss works in a self-distillation scheme; as such, z* is estimated directly from the CTC loss that supervises the student model. No other teacher model is involved. Substituting Eq. (3) into Eq. (2), we get:

L_DCTC(U, P, y, z*) = L_CTC(U, y) + λ L_distill(P, z*)    (10)

We show pseudo code for the DCTC loss in Algorithm 1 for a clear understanding; the architecture of our method is shown in Fig. 2.

Algorithm 1: Calculation of the DCTC loss in the self-distillation scheme
Input: the input logits U, the ground-truth label sequence y, the weighting factor λ
1: Calculate probabilities P = Softmax_{V'}(U).
2: Calculate the CTC loss L1 = L_CTC(U, y).
3: Without tracing gradients, copy U as U'.
4: Calculate the CTC loss L2 = L_CTC(U', y).
5: Calculate gradients G = ∂L2/∂U'.
6: Take the argmin over the vocabulary: z* = argmin_{V'} (G / P).
7: Compute L_DCTC = L1 + λ L_distill(P, z*).
Output: L_DCTC

Experiments

Datasets

All datasets used in our experiments are publicly available, and our experiments cover English and Chinese scenarios. For the English text recognition task, we train all models on two commonly used synthetic scene text recognition datasets, ST (Gupta, Vedaldi, and Zisserman 2016) and MJ (Jaderberg et al. 2014), and evaluate all models on six English benchmark datasets: IC13 (Karatzas et al. 2013), IC15 (Karatzas et al. 2015), SVT (Wang, Babenko, and Belongie 2011), SVTP (Phan et al. 2013), IIIT (Mishra, Karteek, and Jawahar 2012) and CT (Risnumawan et al. 2014), which contain 857, 1811, 647, 645, 3000 and 288 test samples, respectively. For the Chinese text recognition task, we use the Chinese benchmark datasets (Chen et al. 2021a), which contain four subsets: Scene, Web, Document (Doc) and Handwritten (Hand), with 63646, 14059, 50000 and 18651 test samples, respectively. We train all models on their own training sets and evaluate them on their own test sets. The license for academic use of the Handwritten subset, aka SCUT-HCCDoc (Zhang, Liang, and Jin 2020), has been issued by its owner per our request.

Implementation Details

Base Models. We choose six base models as the student models: CRNN (Shi, Bai, and Yao 2017), TRBA (Baek, Matsui, and Aizawa 2021), and SVTR-T, SVTR-S, SVTR-B and SVTR-L (Du et al. 2022). We use these models trained with CTC loss as the baselines and compare them to the same models trained with DCTC loss in a self-distillation scheme (directly replacing CTC loss with DCTC loss), meaning the teacher is the student itself. All models are implemented with PaddleOCR (https://github.com/PaddlePaddle/PaddleOCR). We implemented DCTC as a CUDA-CPP extension for computational efficiency.
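As a concrete illustration of Algorithm 1's alignment step and Eq. (3), the following toy NumPy sketch estimates z* for a tiny input. This is our own sketch, not the paper's CUDA-CPP implementation: the CTC gradient G is obtained here by finite differences on a naive forward algorithm, whereas a real implementation would reuse the framework's analytic CTC gradient. Logits are stored as (T, K+1) rather than the paper's (K+1, T).

```python
import numpy as np

BLANK = 0  # index of the CTC blank in the augmented vocabulary V'

def ctc_loss(logits, y):
    """-log p(y|X) via the CTC forward algorithm. logits: (T, K+1); y: labels without blanks."""
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    yp = [BLANK]
    for c in y:
        yp += [c, BLANK]                      # augmented label y', length 2L+1
    T, S = P.shape[0], len(yp)
    alpha = np.zeros((T, S))
    alpha[0, 0] = P[0, yp[0]]
    alpha[0, 1] = P[0, yp[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s >= 1:
                a += alpha[t - 1, s - 1]
            if s >= 2 and yp[s] != BLANK and yp[s] != yp[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * P[t, yp[s]]
    return -np.log(alpha[T - 1, S - 1] + alpha[T - 1, S - 2])

def estimate_alignment(logits, y, eps=1e-4):
    """z* = argmin_{V'} G/P (Eq. 3), with G = dL_CTC/dU approximated by central differences."""
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    G = np.zeros_like(logits)
    for t in range(logits.shape[0]):
        for c in range(logits.shape[1]):
            up, dn = logits.copy(), logits.copy()
            up[t, c] += eps
            dn[t, c] -= eps
            G[t, c] = (ctc_loss(up, y) - ctc_loss(dn, y)) / (2 * eps)
    return np.argmin(G / P, axis=1), G

def collapse(z):
    """CTC-greedy decoding: merge repeats, then drop blanks."""
    out, prev = [], None
    for c in z:
        if c != prev and c != BLANK:
            out.append(int(c))
        prev = c
    return out

# toy check: vocabulary {0: blank, 1: 'a', 2: 'b'}, target "ab" = [1, 2]
logits = np.zeros((5, 3))
for t, c in enumerate([1, 0, 2, 2, 0]):   # logits peaked on one valid path for "ab"
    logits[t, c] = 4.0
z_star, G = estimate_alignment(logits, [1, 2])
```

On this toy input the estimated alignment collapses to the target label, and the per-frame entries of G sum to (approximately) zero across the vocabulary, consistent with Eq. (7), where both P(·, t) and the path posterior sum to one.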
Hyperparameters. The number of training epochs, batch size, data augmentation strategy, optimizer, learning rate, and decay policy differ per base model and follow each base model's original settings as described in its source. The image size is (h, w) = (32, 100) for the English task and (32, 256) for the Chinese task. The distillation coefficient λ in L_DCTC is set to 0.025 for English tasks and 0.01 for Chinese tasks. All experiments are conducted on Nvidia Tesla V100 GPUs.

Metrics and Evaluation Protocols

We use accuracy to evaluate all models' performance. Accuracy (ACC) is the ratio of the number of fully correct predictions to the number of test samples. Certain protocols are applied during evaluation. For English tasks, only digits and letters (case-insensitive) are evaluated. For Chinese tasks, we follow the conventions of (Chen et al. 2021a): 1) convert full-width characters to half-width characters; 2) convert traditional Chinese characters to simplified Chinese characters; 3) convert all letters to lowercase; and 4) discard all spaces. In addition, we propose a new metric, "Alignment Accuracy (AACC)", to measure the quality of the latent alignment estimate. It is defined as the ACC of the decoded latent alignments against the ground-truth labels. Unlike when evaluating model performance, we do not apply any protocol when evaluating AACC. We decode latent alignments in the CTC-greedy way, i.e., collapsing repeated characters and then removing all blanks.

A Model-Wise Comparison

We compare DCTC loss with CTC loss on the six models mentioned above and collect all results in Tab. 1. Each model is compared to its baseline, i.e., the same model trained with CTC loss. All models achieve accuracy improvements on almost all benchmark datasets, which verifies the effectiveness of our method at the model level.
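The CTC-greedy decoding used for AACC and the exact-match accuracy can be sketched as follows (our own minimal illustration, operating on integer-indexed transcripts with 0 as the blank):

```python
BLANK = 0  # blank index in the augmented vocabulary

def ctc_greedy_decode(alignment):
    """CTC-greedy decoding: collapse repeated characters, then remove all blanks."""
    out, prev = [], None
    for c in alignment:
        if c != prev and c != BLANK:
            out.append(c)
        prev = c
    return out

def exact_match_accuracy(preds, labels):
    """ACC/AACC: the fraction of sequences predicted exactly right."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

For example, the alignment [1, 1, 0, 2, 2, 0, 2] decodes to [1, 2, 2]: the repeated 1s and 2s merge first, so the blank between the last two 2s is what keeps them distinct.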
CRNN, the most classical, representative, and widely-used industrial CTC-based text recognition model, obtains a 2.6% average accuracy increment on English benchmarks and 2.1% on Chinese benchmarks. The SVTR series, an advanced CTC-based single-visual-model text recognition method, also gains accuracy from our method: up to 0.9% and 1.1% average accuracy improvements are observed on English and Chinese benchmarks, respectively, when trained with DCTC loss. Besides, as our method does not change the structure of the models, the inference speed remains the same.

A Loss-Wise Comparison

Our method can be regarded as a variant of CTC loss when working in a self-distillation scheme. Many variants of CTC have been proposed but have yet to be tested with advanced models or on Chinese benchmarks. In this part, we compare our method with other variants of CTC loss. The chosen variants are FocalCTC (Feng, Yao, and Zhang 2019; code at https://github.com/PaddlePaddle/PaddleOCR) and EnCTC (Liu, Jin, and Zhang 2018; code at https://github.com/liuhu-bigeye/enctc.crnn). We choose them because they 1) have been peer-reviewed and 2) have public code bases (linked above). We align the hyperparameters with their original settings to make the comparison fair.
For FocalCTC loss, α = 1 and γ = 2; for EnCTC loss, the regularization coefficient β = 0.2. The experimental results are collected in Tab. 2. We use CRNN and SVTR-T as base models for efficiency. Our method consistently achieves improvements on all benchmarks, further proving its effectiveness.

                          English Benchmarks                            Chinese Benchmarks
Base Model  Loss    IC13   SVT    IIIT   IC15   SVTP   CT     Avg     Scene  Web    Doc    Hand   Avg
CRNN        CTC     90.3   78.9   84.3   65.9   64.8   61.3   77.3    54.9   56.2   97.5   48.0   68.7
CRNN        DCTC    90.7   82.4   88.9   66.1   65.4   68.1   79.9    58.6   57.0   98.0   49.7   70.8
TRBA        CTC     94.0*  88.9*  93.6*  76.5*  79.8*  84.0*  87.3    59.6*  57.8*  98.2*  48.9*  71.3
TRBA        DCTC    94.2   90.4   93.9   78.1   81.3   85.8   88.2    61.1   58.6   99.2   49.5   72.4
SVTR-T      CTC     96.3   91.6   94.4   84.1   85.4   88.2   90.8    67.9   61.8*  99.1*  47.2*  75.3
SVTR-T      DCTC    96.4   92.3   95.4   85.3   86.1   89.9   91.7    68.3   63.9   99.2   48.1   75.9
SVTR-S      CTC     95.7   93.0   95.0   84.7   87.9   92.0   91.6    69.0   63.9*  99.2*  49.5*  76.3
SVTR-S      DCTC    96.4   92.5   96.2   86.2   88.1   92.4   92.5    70.3   65.8   99.4   50.3   77.3
SVTR-B      CTC     97.1   91.5   96.0   85.2   89.9   91.7   92.3    71.4   64.1*  99.3*  50.0*  77.5
SVTR-B      DCTC    97.1   92.9   96.3   87.2   89.6   92.1   93.1    72.2   67.0   99.4   50.4   78.2
SVTR-L      CTC     97.2   91.7   96.3   86.6   88.4   95.1   92.8    72.1   66.3*  99.3*  50.3*  78.1
SVTR-L      DCTC    97.4   93.7   96.9   87.3   88.5   92.3   93.3    73.9   68.5   99.4   51.0   79.2

Table 1: Results of the model-wise comparison. Bold ACCs are the model-wise better results. ACCs marked by * are not reported in the original sources and thus reproduced by us. Results on English benchmarks of the baseline models of CRNN and TRBA are reported by (Baek, Matsui, and Aizawa 2021). Results on Chinese benchmarks of the baseline model of CRNN are reported by (Chen et al. 2021a). Results of the baseline models of the SVTR series are reported by (Du et al. 2022).

                              English Benchmarks                            Chinese Benchmarks
Base Model  Variant     IC13   SVT    IIIT   IC15   SVTP   CT     Avg     Scene  Web    Doc    Hand   Avg
CRNN        CTC         90.3   78.9   84.3   65.9   64.8   61.3   77.3    54.9   56.2   97.5   48.0   68.7
CRNN        FocalCTC    89.6   80.1   81.2   65.2   63.0   60.2   75.6    54.8   56.0   97.5   48.3   68.7
CRNN        EnCTC       90.1   81.5   85.6   64.7   62.9   59.0   77.1    49.0   50.7   97.5   36.6   64.2
CRNN        DCTC        90.7   82.4   88.9   66.1   65.4   68.1   79.9    58.6   57.0   98.0   49.7   70.8
SVTR-T      CTC         96.3   91.6   94.4   84.1   85.4   88.2   90.8    67.9   61.8*  99.1*  47.2*  75.3
SVTR-T      FocalCTC    96.0   91.0   94.3   84.1   85.1   87.9   90.6    67.1   60.2   99.2   46.5   74.8
SVTR-T      EnCTC       94.9   90.8   94.5   84.3   85.4   88.2   90.6    65.9   63.7   97.9   47.1   74.2
SVTR-T      DCTC        96.4   92.3   95.4   85.3   86.1   89.9   91.7    68.3   63.9   99.2   48.1   75.9

Table 2: Results of the loss-wise comparison. ACCs marked by * are not reported and thus reproduced by us. Results of CTC and DCTC are the same as in Tab. 1; results of FocalCTC and EnCTC are all reproduced by us.

Comparison of Latent Alignment Estimates

Much previous research on distillation for CTC-based models has worked on finding reasonable estimates of the latent alignment, using various means to utilize p(z|X) directly. The most naive way is to take the hard prediction of p(z|X), i.e., argmax_{V'} P. In this section, we compare our estimation method with two other sources of estimates: one takes argmax_{V'} P directly from the model itself, denoted "Self"; the other takes argmax_{V'} P from a three-layer Transformer encoder branch additionally added to the model, denoted "Teacher". This branch is trained with a CTC loss during the training process. We use CRNN and SVTR-T as the experiment models.
We record AACC during training on the English, Chinese Scene, and Chinese Hand tasks under the different estimate methods. AACCs are computed as the average over ten consecutive batches at certain progress points in training. The AACC results are shown in Fig. 3. We also record the model accuracy under the different estimate methods and collect the results in Tab. 3.

Figure 3: Curves of AACC of the Estimated Latent Alignment

                                 Extra    English Benchmarks                            Chinese Benchmarks
Base Model  Estimate Method  Module   IC13   SVT    IIIT   IC15   SVTP   CT     Avg     Scene  Web    Doc    Hand   Avg
CRNN        (CTC baseline)   -        90.3   78.9   84.3   65.9   64.8   61.3   77.3    54.9   56.2   97.5   48.0   68.7
CRNN        Self             N        88.6   75.1   82.4   63.8   60.5   59.4   75.0    46.1   50.7   92.4   40.2   61.6
CRNN        Teacher          Y        90.2   81.0   88.8   64.5   63.7   65.3   79.0    48.9   52.3   94.9   42.1   64.1
CRNN        DCTC             N        90.7   82.4   88.9   66.1   65.4   68.1   79.9    58.6   57.0   98.0   49.7   70.8
SVTR-T      (CTC baseline)   -        96.3   91.6   94.4   84.1   85.4   88.2   90.8    67.9   61.8*  99.1*  47.2*  75.3
SVTR-T      Self             N        95.0   90.3   93.2   84.8   85.1   86.5   90.1    67.3   60.0   99.1   46.6   74.8
SVTR-T      Teacher          Y        95.9   91.1   94.0   83.4   85.7   86.4   90.3    66.4   61.1   99.0   46.5   74.5
SVTR-T      DCTC             N        96.4   92.3   95.4   85.3   86.1   89.9   91.7    68.3   63.9   99.2   48.1   75.9

Table 3: Results of the latent alignment estimate method comparison. ACCs marked by * are not reported and thus reproduced by us.

We can see from Fig. 3 that our method (DCTC) yields high-quality latent alignments (high AACC) even at the beginning of training. "Teacher" takes second place, while "Self" only generates moderate estimates after a period of training. Besides, our estimate method quickly saturates to nearly 100% AACC. In contrast, "Teacher" and "Self" linger for a long time at a low-to-middle AACC and can hardly get close to 100%, not to mention that "Teacher" uses an extra Transformer encoder. Tab. 3 shows that both "Teacher" and "Self" cause accuracy degradation on almost all benchmarks, suggesting that simply taking the output of the distilled module as the latent alignment is harmful in the CTC setting, whether or not a teacher is used. The only exception is that "Teacher" boosts CRNN on English benchmarks; we attribute this to the added Transformer branch fundamentally increasing the capacity of CRNN, overcoming the harm from the low-quality estimate of the "Teacher" method. In conclusion, our method can draw high-quality distillation dark knowledge during the whole training time. This phenomenon explains why DCTC loss can still benefit the student model even under a self-distillation scheme, where no extra teacher model participates.

Visual Show of the Effectiveness of Distillation Supervision

Eq. (1) suggests that DCTC adds frame-wise, character-level supervision to the original CTC supervision. Unlike CTC, which emphasizes sequence-level supervision, this distillation supervision makes the character features more discriminative, which contributes to the overall performance improvement. We select several hard example clusters from the test sets and fetch their features from an SVTR-T model trained with DCTC loss. A hard example cluster is a group of characters prone to being wrongly recognized as each other. We conduct a feature visualization study with t-SNE (van der Maaten and Hinton 2008). Fig. 4 illustrates the feature projections of two hard example clusters.
Different characters are marked with different colors. Our method drives the model to extract more discriminative features, which are more cohesive than those extracted by the baselines. For more clusters, please refer to the supplementary materials.

Figure 4: Feature visualization. Each row represents a hard sample cluster.

Conclusion

In this paper, we build on a self-distillation framework with a MAP estimate to formulate DCTC, a variant of the CTC loss. The way we estimate the latent alignments distills high-quality dark knowledge from the student model itself and addresses the alignment inconsistency problem well, which is supported by our quantitative analysis. Our proposed DCTC loss is concise yet quite effective: it boosts various text recognition models' performance on both English and Chinese benchmarks. Furthermore, visual analysis shows that DCTC loss yields more cohesive features, which explains the performance improvement. Besides, our method barely incurs additional computational complexity, training data, or training phases.

References

Aberdam, A.; Litman, R.; Tsiper, S.; Anschel, O.; Slossberg, R.; Mazor, S.; Manmatha, R.; and Perona, P. 2021. Sequence-to-Sequence Contrastive Learning for Text Recognition. In Proc. CVPR, 15302–15312.
Baek, J.; Kim, G.; Lee, J.; Park, S.; Han, D.; Yun, S.; Oh, S. J.; and Lee, H. 2019. What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis. In Proc. ICCV, 4714–4722.
Baek, J.; Matsui, Y.; and Aizawa, K. 2021. What If We Only Use Real Datasets for Scene Text Recognition? Toward Scene Text Recognition With Fewer Labels. CoRR, abs/2103.04400.
Bautista, D.; and Atienza, R. 2022. Scene Text Recognition with Permuted Autoregressive Sequence Models. In Proc. ECCV, volume 13688, 178–196.
Bhunia, A. K.; Sain, A.; Chowdhury, P. N.; and Song, Y. 2021a.
Text is Text, No Matter What: Unifying Text Recognition using Knowledge Distillation. In Proc. ICCV, 963–972.
Bhunia, A. K.; Sain, A.; Kumar, A.; Ghose, S.; Chowdhury, P. N.; and Song, Y. 2021b. Joint Visual Semantic Reasoning: Multi-Stage Decoder for Text Recognition. In Proc. ICCV, 14920–14929.
Chao, L.; Chen, J.; and Chu, W. 2020. Variational Connectionist Temporal Classification. In Proc. ECCV, volume 12373, 460–476.
Chen, J.; Yu, H.; Ma, J.; Guan, M.; Xu, X.; Wang, X.; Qu, S.; Li, B.; and Xue, X. 2021a. Benchmarking Chinese Text Recognition: Datasets, Baselines, and an Empirical Study. CoRR, abs/2112.15093.
Chen, X.; Jin, L.; Zhu, Y.; Luo, C.; and Wang, T. 2021b. Text Recognition in the Wild: A Survey. ACM Comput. Surv., 54(2).
Cheng, Z.; Bai, F.; Xu, Y.; Zheng, G.; Pu, S.; and Zhou, S. 2017. Focusing Attention: Towards Accurate Text Recognition in Natural Images. In Proc. ICCV.
Cong, F.; Hu, W.; Huo, Q.; and Guo, L. 2019. A Comparative Study of Attention-Based Encoder-Decoder Approaches to Natural Scene Text Recognition. In Proc. ICDAR, 916–921.
Da, C.; Wang, P.; and Yao, C. 2022. Levenshtein OCR. In Avidan, S.; Brostow, G. J.; Cissé, M.; Farinella, G. M.; and Hassner, T., eds., Proc. ECCV, volume 13688, 322–338.
Ding, H.; Chen, K.; and Huo, Q. 2019. Compression of CTC-Trained Acoustic Models by Dynamic Frame-Wise Distillation or Segment-Wise N-Best Hypotheses Imitation. In Interspeech, 3218–3222.
Ding, H.; Chen, K.; and Huo, Q. 2020. Improving Knowledge Distillation of CTC-Trained Acoustic Models With Alignment-Consistent Ensemble and Target Delay. IEEE/ACM Transactions on Audio, Speech, and Language Processing.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR.
Du, Y.; Chen, Z.; Jia, C.; Yin, X.; Zheng, T.; Li, C.; Du, Y.; and Jiang, Y.-G. 2022.
SVTR: Scene Text Recognition with a Single Visual Model. In Proc. IJCAI.
Du, Y.; Li, C.; Guo, R.; Cui, C.; Liu, W.; Zhou, J.; Lu, B.; Yang, Y.; Liu, Q.; Hu, X.; Yu, D.; and Ma, Y. 2021. PP-OCRv2: Bag of Tricks for Ultra Lightweight OCR System. CoRR, abs/2109.03144.
Fang, S.; Xie, H.; Wang, Y.; Mao, Z.; and Zhang, Y. 2021. Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition. In Proc. CVPR, 7094–7103.
Feng, X.; Yao, H.; and Zhang, S. 2019. Focal CTC Loss for Chinese Optical Character Recognition on Unbalanced Datasets. Complexity, 2019: 1–11.
Gao, L.; Zhang, H.; and Liu, C. 2021. Regularizing CTC in Expectation-Maximization Framework with Application to Handwritten Text Recognition. In IJCNN, 1–7.
Graves, A.; Fernández, S.; Gomez, F.; and Schmidhuber, J. 2006. Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks. In ICML, 369–376. New York, NY, USA. ISBN 1595933832.
Guan, T.; Gu, C.; Tu, J.; Yang, X.; Feng, Q.; Zhao, Y.; and Shen, W. 2022a. Self-supervised Implicit Glyph Attention for Text Recognition.
Guan, T.; Shen, W.; Yang, X.; Feng, Q.; and Jiang, Z. 2022b. Self-supervised Character-to-Character Distillation for Text Recognition.
Gupta, A.; Vedaldi, A.; and Zisserman, A. 2016. Synthetic Data for Text Localisation in Natural Images. In Proc. CVPR, 2315–2324.
Hinton, G. E.; Vinyals, O.; and Dean, J. 2015. Distilling the Knowledge in a Neural Network. CoRR.
Huang, M.; You, Y.; Chen, Z.; Qian, Y.; and Yu, K. 2018. Knowledge Distillation for Sequence Model. In Interspeech, 3703–3707.
Jaderberg, M.; Simonyan, K.; Vedaldi, A.; and Zisserman, A. 2014. Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition. CoRR, abs/1406.2227.
Jaynes, E. T. 1957. Information Theory and Statistical Mechanics. II. Physical Review.
Karatzas, D.; i Bigorda, L. G.; Nicolaou, A.; Ghosh, S. K.; Bagdanov, A. D.; Iwamura, M.; Matas, J.; Neumann, L.; Chandrasekhar, V.
R.; Lu, S.; Shafait, F.; Uchida, S.; and Valveny, E. 2015. ICDAR 2015 Competition on Robust Reading. In Proc. ICDAR, 1156–1160.
Karatzas, D.; Shafait, F.; Uchida, S.; Iwamura, M.; i Bigorda, L. G.; Mestre, S. R.; Romeu, J. M.; Mota, D. F.; Almazán, J.; and de las Heras, L.-P. 2013. ICDAR 2013 Robust Reading Competition. In Proc. ICDAR, 1484–1493.
Kim, Y.; and Rush, A. M. 2016. Sequence-Level Knowledge Distillation. In EMNLP, 1317–1327.
Kuang, Z.; Sun, H.; Li, Z.; Yue, X.; Lin, T. H.; Chen, J.; Wei, H.; Zhu, Y.; Gao, T.; Zhang, W.; Chen, K.; Zhang, W.; and Lin, D. 2021. MMOCR: A Comprehensive Toolbox for Text Detection, Recognition and Understanding. In ACMMM, 3791–3794.
Kurata, G.; and Audhkhasi, K. 2018. Improved Knowledge Distillation from Bi-Directional to Uni-Directional LSTM CTC for End-to-End Speech Recognition. In SLT, 411–417.
Li, C.; Liu, W.; Guo, R.; Yin, X.; Jiang, K.; Du, Y.; Du, Y.; Zhu, L.; Lai, B.; Hu, X.; Yu, D.; and Ma, Y. 2022. PP-OCRv3: More Attempts for the Improvement of Ultra Lightweight OCR System.
Li, M.; Lv, T.; Cui, L.; Lu, Y.; Florencio, D.; Zhang, C.; Li, Z.; and Wei, F. 2021. TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models. CoRR.
Liao, M.; Zhang, J.; Wan, Z.; Xie, F.; Liang, J.; Lyu, P.; Yao, C.; and Bai, X. 2019. Scene Text Recognition from Two-Dimensional Perspective. In Proc. AAAI, 8714–8721.
Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2020. Focal Loss for Dense Object Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42: 318–327.
Liu, H.; Jin, S.; and Zhang, C. 2018. Connectionist Temporal Classification with Maximum Entropy Regularization. In NeurIPS, 839–849.
Long, S.; He, X.; and Yao, C. 2021. Scene Text Detection and Recognition: The Deep Learning Era. Int. J. Comput. Vis., 129(1): 161–184.
Lu, N.; Yu, W.; Qi, X.; Chen, Y.; Gong, P.; Xiao, R.; and Bai, X. 2021. MASTER: Multi-aspect non-local network for scene text recognition. Pattern Recognition, 117: 107980. Mishra, A.; Karteek, A.; and Jawahar, C. V. 2012. Top-down and bottom-up cues for scene text recognition. Proc. CVPR, 2687–2694. Miyato, T.; ichi Maeda, S.; Koyama, M.; and Ishii, S. 2017. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning. CoRR. Moriya, T.; Ochiai, T.; Karita, S.; Sato, H.; Tanaka, T.; Ashihara, T.; Masumura, R.; Shinohara, Y.; and Delcroix, M. 2020. Self-Distillation for Improving CTC-TransformerBased ASR Systems. In ISCA, 546–550. Patel, G.; Allebach, J. P.; and Qiu, Q. 2023. Seq-UPS: Sequential Uncertainty-aware Pseudo-label Selection for Semi-Supervised Text Recognition. In WACV, 6169–6179. Phan, T. Q.; Shivakumara, P.; Tian, S.; and Tan, C. L. 2013. Recognizing Text with Perspective Distortion in Natural Scenes. Proc. ICCV, 569–576. Qiao, Z.; Zhou, Y.; Yang, D.; Zhou, Y.; and Wang, W. 2020. SEED: Semantics Enhanced Encoder-Decoder Framework for Scene Text Recognition. In Proc. CVPR, 13525–13534. Risnumawan, A.; Shivakumara, P.; Chan, C. S.; and Tan, C. L. 2014. A robust arbitrary text detection system for natural scene images. Expert Syst. Appl., 41: 8027–8048. Shi, B.; Bai, X.; and Yao, C. 2017. An End-to-End Trainable Neural Network for Image-Based Sequence Recognition and Its Application to Scene Text Recognition. TPAMI, 39: 2298–2304. Shi, B.; Wang, X.; Lyu, P.; Yao, C.; and Bai, X. 2016. Robust Scene Text Recognition with Automatic Rectification. In Proc. CVPR, 4168–4176. Takashima, R.; Li, S.; and Kawai, H. 2018. An Investigation of a Knowledge Distillation Method for CTC Acoustic Models. In ICASSP, 5809–5813. Tanaka, R.; Ono, S.; and Furuhata, A. 2019. Fast Distributional Smoothing for Regularization in CTC Applied to Text Recognition. In Proc. ICDAR, 302–308. van der Maaten, L.; and Hinton, G. E. 2008. 
Visualizing Data using t-SNE. Journal of Machine Learning Research, 9: 2579–2605. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In NeurIPS. Wan, Z.; Xie, F.; Liu, Y.; Bai, X.; and Yao, C. 2019. 2D-CTC for Scene Text Recognition. CoRR, abs/1907.09705. Wang, H.; Liao, J.; Cheng, T.; Gao, Z.; Liu, H.; Ren, B.; Bai, X.; and Liu, W. 2022a. Knowledge Mining with Scene Text for Fine-Grained Recognition. In Proc. CVPR, 4614–4623. Wang, K.; Babenko, B.; and Belongie, S. J. 2011. End-toend scene text recognition. Proc. ICCV, 1457–1464. Wang, Y.; Xie, H.; Fang, S.; Xing, M.; Wang, J.; Zhu, S.; and Zhang, Y. 2022b. PETR: Rethinking the Capability of Transformer-Based Language Model in Scene Text Recognition. IEEE Trans. Image Processing. Wen, Y.; Zhang, K.; Li, Z.; and Qiao, Y. 2016. A Discriminative Feature Learning Approach for Deep Face Recognition. In Proc. ECCV. Xie, Z.; Huang, Y.; Zhu, Y.; Jin, L.; Liu, Y.; and Xie, L. 2019. Aggregation Cross-Entropy for Sequence Recognition. In Proc. CVPR, 6538–6547. Yang, M.; Liao, M.; Lu, P.; Wang, J.; Zhu, S.; Luo, H.; Tian, Q.; and Bai, X. 2022. Reading and Writing: Discriminative and Generative Modeling for Self-Supervised Text Recognition. 4214–4223. Yu, D.; Li, X.; Zhang, C.; Han, J.; Liu, J.; and Ding, E. 2020. Towards Accurate Scene Text Recognition with Semantic Reasoning Networks. CoRR:2003.12294. Zhang, H.; Liang, L.; and Jin, L. 2020. SCUT-HCCDoc: A new benchmark dataset of handwritten Chinese text in unconstrained camera-captured documents. Pattern Recognit., 108: 107559. Zhu, Y.; Liao, M.; Yang, M.; and Liu, W. 2018. Cascaded Segmentation-Detection Networks for Text-Based Traffic Sign Detection. IEEE Trans. Intell. Transp. Syst., 19(1): 209–219. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7449
PNeRFLoc: Visual Localization with Point-Based Neural Radiance Fields

Boming Zhao1, Luwei Yang2, Mao Mao1, Hujun Bao1, Zhaopeng Cui1*
1State Key Lab of CAD & CG, Zhejiang University  2Simon Fraser University
bmzhao@zju.edu.cn, mluweiyang@outlook.com, maomao6006@gmail.com, zhpcui@gmail.com, bao@cad.zju.edu.cn

Abstract

Due to the ability to synthesize high-quality novel views, Neural Radiance Fields (NeRF) have recently been exploited to improve visual localization in a known environment. However, the existing methods mostly utilize NeRF for data augmentation to improve regression model training, and their performance on novel viewpoints and appearances is still limited by the lack of geometric constraints. In this paper, we propose a novel visual localization framework, i.e., PNeRFLoc, based on a unified point-based representation. On one hand, PNeRFLoc supports initial pose estimation by matching 2D and 3D feature points, as in traditional structure-based methods; on the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization. Specifically, we propose a novel feature adaptation module to close the gaps between the features for visual localization and neural rendering. To improve the efficacy and efficiency of neural rendering-based optimization, we also develop an efficient rendering-based framework with a warping loss function. Extensive experiments demonstrate that PNeRFLoc performs best on the synthetic dataset when the 3D NeRF model can be well learned, and significantly outperforms all the NeRF-boosted localization methods with on-par SOTA performance on the real-world benchmark localization datasets. The code and supplementary material are available on the project webpage: https://zju3dv.github.io/PNeRFLoc/.
Introduction

Visual localization is a fundamental task in computer vision that aims to determine the precise position and orientation of a camera in a known scene based on visual input; it has widespread applications in areas such as robot navigation, augmented reality, and virtual reality. Traditional structure-based localization, the mainstream solution for visual localization, has advantages such as scene agnosticism, robustness, and high precision. These methods (Brachmann and Rother 2021; Sarlin et al. 2019) compute and store a global map consisting of 3D point locations, find correspondences between 2D feature points extracted from the query image and 3D points in the reconstructed scene, and use a Perspective-n-Point (PnP) solver (Haralick et al. 1994; Bujnak, Kukelova, and Pajdla 2008) in a RANSAC loop (Fischler and Bolles 1981; Chum and Matas 2008) to compute the camera poses. Besides handcrafted features (Bay et al. 2008; Lowe 2004), deep features (DeTone, Malisiewicz, and Rabinovich 2018; Dusmanu et al. 2019; Germain, Bourmaud, and Lepetit 2020; Sarlin et al. 2020) have been extensively utilized recently to improve feature matching for better localization. Very recently, some state-of-the-art (SOTA) feature matching methods have been proposed to train the deep features and align features through pose refinement in an end-to-end manner (Lindenberger et al. 2021; Sarlin et al. 2021). However, these structure-based methods rely on 2D-3D or 2D-2D point matching, and thus their accuracy is limited when feature extraction and matching are sparse or noisy due to large view changes between images and textureless structures.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
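As a concrete illustration of the structure-based pipeline above, a candidate pose in the PnP + RANSAC loop is typically scored by counting correspondences with small reprojection error. Below is a minimal sketch under a pinhole camera model; the intrinsics `K`, function names, and threshold are illustrative, not from the paper's code:

```python
import numpy as np

def project(X, R, t, K):
    """Project world points X (N, 3) into the image of a camera with
    world-to-camera pose (R, t) and intrinsics K."""
    Xc = X @ R.T + t             # world -> camera frame
    x = Xc @ K.T
    return x[:, :2] / x[:, 2:3]  # perspective division

def count_inliers(p2d, X3d, R, t, K, thresh=3.0):
    """Score a RANSAC pose hypothesis: 2D-3D correspondences whose
    reprojection error is below `thresh` pixels count as inliers."""
    err = np.linalg.norm(project(X3d, R, t, K) - p2d, axis=1)
    return int((err < thresh).sum())
```

The hypothesis with the most inliers wins, and the final pose is refined on its inlier set; this is the standard robustification against the noisy matches mentioned above.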
Regression-based localization trains a neural network and takes the network parameters as a global map representation, directly regressing the 6-DOF camera pose (Kendall, Grimes, and Cipolla 2015; Balntas, Li, and Prisacariu 2018; Kendall and Cipolla 2017; Moreau et al. 2022a; Shavit, Ferens, and Keller 2021) or the 3D scene coordinate of each pixel (Cavallari et al. 2017; Li et al. 2020; Yang et al. 2019) from the query image as network input. Owing to their simplicity and end-to-end training, these methods have attracted considerable attention. However, they are usually scene-specific, and their accuracy relies heavily on the distribution of the training images, with poor generalization to new viewpoints (Sarlin et al. 2021). Thus Neural Radiance Fields (NeRF) (Mildenhall et al. 2020) has been introduced recently to render realistic novel-viewpoint images for data augmentation (Chen et al. 2022; Chen, Wang, and Prisacariu 2021; Moreau et al. 2022b) that can boost the training of the regression network. However, these methods are inherently regression-based, which constrains localization accuracy, as it is not feasible to indefinitely expand the training data to cover the whole 6D pose space.

To address the limitations of existing methods, we propose a novel framework for visual localization, i.e., PNeRFLoc, that integrates the structure-based framework and rendering-based optimization with a NeRF representation. Specifically, on one hand, our framework supports initial pose estimation by matching 2D and 3D feature points; on the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization, i.e., minimizing the photometric error between the rendered image and the query image. In this way, compared to the existing NeRF-boosted methods (Chen et al. 2022; Chen, Wang, and Prisacariu 2021; Moreau et al.
2022b), our approach transcends the limitations of regression-based techniques, achieving significant accuracy improvements in both indoor and outdoor scenes. Moreover, compared to the SOTA feature matching methods, which may be limited by sparse matches between reference and query images due to large view changes and thus get stuck in local optima, our method can achieve better accuracy by minimizing the photometric loss with the capability to render novel-view images.

However, it is non-trivial to design such a framework. First, there is no unified scene representation that supports both the 2D-3D feature matching used in structure-based localization and neural rendering for rendering-based optimization. In this paper, we adapt a recent point-based neural radiance field representation (i.e., PointNeRF (Xu et al. 2022)) and design a feature adaptation module to bridge the gap between the scene-agnostic features for localization and the scene-specific features for neural rendering. We find that although these two types of features aim at different tasks, they can be easily transferred via a feature adaptation module. In this way, we can utilize any existing scene-agnostic features for initial localization (e.g., R2D2 (Revaud et al. 2019)) and learn the scene-specific adaptation module together with the NeRF models. Moreover, from the adaptation module, we can also learn a score for each dense feature for better feature matching and initial localization. Second, the rendering-based optimization may easily get stuck in a local minimum (Maggio et al. 2022) due to backpropagation through the networks, and it is also time-consuming. To improve the neural rendering-based optimization with the point-based representation, we further propose a novel, efficient rendering-based optimization framework that aligns the rendered image with the query image by minimizing a warping loss function.
In this way, we do not need to render a new image for each optimization step, and we avoid backpropagation through the networks for better convergence. Lastly, to further improve the robustness of the proposed method against outdoor illumination changes and dynamic objects, we utilize appearance embedding and segmentation masks to handle varying lighting conditions and complex occlusions, respectively.

Our contributions can be summarized as follows. First, we propose a novel visual localization framework with a unified scene representation, i.e., PNeRFLoc, which enables both structure-based estimation and rendering-based optimization for robust and accurate pose estimation. Second, to close the gaps between the features for visual localization and neural rendering, we propose a novel feature adaptation module that can be learned together with NeRF models. Furthermore, a novel efficient rendering-based framework with a warping loss function is proposed to improve the efficacy and efficiency of neural rendering-based optimization. Extensive experiments show that the proposed framework outperforms existing learning-based methods when the NeRF model can be well learned, and performs on-par with the SOTA method on the visual localization benchmark datasets.

Related Work

Structure-based localization. Structure-based methods (Camposeco et al. 2017; Cheng et al. 2019; Sattler et al. 2015; Sattler, Leibe, and Kobbelt 2016; Toft et al. 2018; Zeisl, Sattler, and Pollefeys 2015) utilize 3D scene information from structure from motion (SfM), and a query image taken from the same scene can be registered with explicit 2D-3D correspondences and a PnP + RANSAC algorithm. Typically, these methods can yield accurate poses but are prone to noisy matches. To mitigate outlier influence, recent scene coordinate regression (Brachmann et al. 2017; Brachmann and Rother 2021; Yang et al.
2019) methods rely on CNNs to fuse semantic features to obtain accurate dense correspondence maps, while the recent work of (Sarlin et al. 2020) leverages an attentional graph neural network with 2D positional encoding to achieve impressive sparse matching results. Despite the promising performance, structure-based methods still suffer from large view changes, especially when only a few reference points are available.

Regression-based localization. PoseNet (Kendall, Grimes, and Cipolla 2015) and its subsequent work (Walch et al. 2017) regress the camera pose of an image directly through a CNN or LSTM. These methods are limited in terms of scalability and performance. Despite some attempts to improve accuracy by incorporating geometric priors (Brahmbhatt et al. 2018), these methods only perform comparably to image retrieval baselines (Arandjelovic et al. 2016; Torii et al. 2015) and cannot match the performance of their structure-based counterparts. Moreover, adapting these regression models to novel scenes is infeasible, which narrows their potential for real-world applications.

Localization with NeRF. Neural Radiance Fields (Mildenhall et al. 2020) have recently been employed for localization tasks, because NeRF can synthesize high-quality novel-view images that benefit localization. For example, LENS (Moreau et al. 2022b) uses NeRF-w (Martin-Brualla et al. 2021) to render realistic synthetic images to expand the training space. LENS leverages the NeRF-w model to obtain scene geometry information and render views from virtual camera poses covering the entire scene. However, LENS is limited by its long offline pre-training and the infeasibility of covering the whole pose space, and it also lacks compensation for the domain gap between synthetic and real images, such as pedestrians and vehicles in outdoor scenes. Chen et al. proposed DFNet (Chen et al.
2022), which incorporates an additional feature extractor to learn high-level features to bridge the domain gap between synthetic and real images. However, the training process remains lengthy, as DFNet still needs to train the NeRF, pose regression, and feature extraction networks separately. Maggio et al. proposed a Monte Carlo localization method called Loc-NeRF (Maggio et al. 2022), which continuously samples candidate poses around the initial pose and uses NeRF to render novel views to find the correct pose direction. However, Loc-NeRF is unstable and still requires an initial camera pose. Moreover, Yen-Chen et al. introduced iNeRF (Yen-Chen et al. 2021), an inverse-NeRF approach to optimize camera poses, but it is also limited by the need for an initial pose.

Figure 1: Visual localization with PNeRFLoc. In the proposed framework, we associate raw point clouds with scene-agnostic localization features and train a scene-specific feature adaptation together with the point-based neural radiance fields. Subsequently, PNeRFLoc integrates structure-based localization with novel rendering-based optimization to accurately estimate the 6-DOF camera pose of the query image.

Method

We propose a novel visual localization framework called PNeRFLoc based on the scene representation shown in Fig. 1. In order to enable both structure-based estimation and rendering-based optimization in a unified framework, we adapt the recent point-based radiance field representation (Xu et al. 2022) and design a feature adaptation module to bridge the scene-agnostic localization features and the point-based neural rendering (Sec. 3.2). Additionally, to prevent iterative re-rendering of images at every optimization step, as in iNeRF (Yen-Chen et al.
2021), we propose an efficient rendering-based optimization strategy that minimizes a warping loss function to align the pixels of the rendered image and the query image, which reduces the neural rendering frequency to just once in most cases while maintaining high accuracy (Sec. 3.3).

Point-based Radiance Field Representation

Neural Radiance Fields (NeRF) compute pixel radiance by sampling points along the ray shot through each pixel and computing the integral result. Specifically, each pixel in an image corresponds to a ray r(t) = o + td. To render the color of ray r, NeRF draws point samples with distances {t_i}_{i=1}^{N} to the camera origin o along the ray, and passes the point locations r(t_i) as well as the view direction d to obtain densities σ_i and colors c_i. The resulting color is rendered following the quadrature rules (Max 1995):

$$\hat{C}(r) = R(r, c, \sigma) = \sum_{k=1}^{K} T(t_k)\,\alpha\big(\sigma(t_k)\delta(t_k)\big)\,c(t_k), \quad T(t_k) = \exp\Big(-\sum_{k'=1}^{k-1} \sigma(t_{k'})\,\delta_{k'}\Big), \quad \alpha(x) = 1 - \exp(-x), \tag{1}$$

where R(r, c, σ) is the volumetric rendering through ray r of color c with density σ, c(t) and σ(t) are the color and density at point r(t) respectively, and δ_k = t_{k+1} − t_k is the distance between two adjacent sampling points on the ray. Stratified sampling and informed sampling are used to select sample points {t_k}_{k=1}^{K} between the near plane t_n and far plane t_f. Additionally, the depth D̂ of each ray r can be computed as:

$$\hat{D} = \sum_{k=1}^{K} T(t_k)\,\alpha\big(\sigma(t_k)\delta(t_k)\big)\,t_k. \tag{2}$$

Following PointNeRF (Xu et al. 2022), we regress the radiance field from the point cloud P = {(p_i, f_i, γ_i) | i = 1, ..., N}, where each point i is located at p_i and associated with a feature vector f_i that encodes the local scene content, and γ_i represents the confidence of the point being located on the actual surface of the scene. Given any 3D location x, we query K neighboring neural points around x and regress the density σ and view-dependent color c for any viewing direction d as:

$$(\sigma, c) = \mathrm{PointNeRF}(x, d, p_1, f_1, \gamma_1, \ldots, p_K, f_K, \gamma_K). \tag{3}$$
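The volumetric rendering of Eqs. (1)-(2) can be sketched numerically. The snippet below assumes per-ray sample distances, densities, and colors are given; all names are illustrative, not from the paper's code:

```python
import numpy as np

def render_ray(t, sigma, color):
    """Volumetric rendering of one ray (Eqs. 1-2).

    t:     (K+1,) sample distances along the ray (ascending).
    sigma: (K,)   densities at the first K samples.
    color: (K, 3) RGB at the first K samples.
    Returns the rendered color C_hat, expected depth D_hat, and weights.
    """
    delta = t[1:] - t[:-1]                    # delta_k = t_{k+1} - t_k
    alpha = 1.0 - np.exp(-sigma * delta)      # alpha(sigma_k * delta_k)
    # T(t_k) = exp(-sum_{k'<k} sigma_{k'} delta_{k'}), written as a
    # shifted cumulative product of per-sample transmittances.
    T = np.concatenate([[1.0], np.cumprod(np.exp(-sigma * delta))[:-1]])
    w = T * alpha                             # per-sample weights
    C_hat = (w[:, None] * color).sum(axis=0)  # Eq. (1)
    D_hat = (w * t[:-1]).sum()                # Eq. (2)
    return C_hat, D_hat, w
```

An opaque sample captures all remaining transmittance, so the rendered color and depth collapse onto that sample, which is the behavior Eq. (2) relies on to produce usable depth maps.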
To enable PointNeRF to handle dynamic objects and illumination changes, we adopt the appearance embedding from NeRF-W (Martin-Brualla et al. 2021) and a segmentation mask to handle occasional object occlusions and illumination variations. Contrary to NeRF-W, which directly employs a transient MLP to address occasional object occlusions, we adopt a more stable approach by utilizing a segmentation mask to compel the network to focus exclusively on architectural areas. In this paper, we utilize Detectron2 (https://github.com/facebookresearch/detectron2) to perform object detection on the images and generate segmentation masks.

Scene-Specific Feature Adaptation

Once the point-based NeRF model is built, a straightforward way is to utilize the learned point-wise neural features for feature matching. However, we find that these neural features are not distinctive enough and cannot be used as robust and efficient descriptors for feature matching, as shown in our supplementary material, because they are learned to encode local color and geometry information of the specific scene for neural rendering. Based on this observation, we resort to existing well-studied deep features for visual localization that are trained on large datasets for feature matching, and design a feature adaptation module to bridge the features for visual localization and neural rendering. Moreover, we can also learn the scores of neural features with the adaptation module for better feature matching in the specific scene.

Scene-agnostic point localization feature extractor. In this paper, we utilize the deep feature R2D2 (Revaud et al. 2019) as the scene-agnostic feature for visual localization due to its ability to robustly extract reliable and distinctive features. For each reference image I_k ∈ R^{W×H×3}, the R2D2 network extracts a feature map F_k ∈ R^{W×H×128}.
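The shape of the feature adaptation module introduced above can be sketched as a small forward pass. The paper specifies a four-layer MLP with a 128-d R2D2 input that also emits a per-point score; the hidden widths and output dimension below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_layer(d_in, d_out):
    """One dense layer with small random weights (illustrative init)."""
    return rng.normal(0.0, 0.1, (d_in, d_out)), np.zeros(d_out)

# Four-layer MLP: 128-d scene-agnostic feature -> NeRF feature + 1 score.
# Hidden widths (256) and NeRF feature size (64) are assumptions; the
# paper only fixes the depth (4 layers) and the 128-d input.
dims = [128, 256, 256, 256, 64 + 1]
layers = [mlp_layer(a, b) for a, b in zip(dims[:-1], dims[1:])]

def adapt(f):
    """Map R2D2 features f (N, 128) to (nerf_feat (N, 64), score (N,))."""
    h = f
    for i, (W, b) in enumerate(layers):
        h = h @ W + b
        if i < len(layers) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers
    nerf_feat = h[:, :-1]
    score = 1.0 / (1.0 + np.exp(-h[:, -1]))  # sigmoid score in (0, 1)
    return nerf_feat, score
```

Emitting the score from the same module lets it be trained jointly with the radiance field, so points that are unreliable for matching in this particular scene receive low scores.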
For each 3D point i constructed from each reference image k, we define the scene-agnostic point feature as:

$$f_i = F_k[p_i] \in \mathbb{R}^{128}, \tag{4}$$

where p_i is the projection of i in the reference image and [·] is a lookup with sub-pixel interpolation.

Searching for matches throughout the entire point cloud is inefficient, as the reliability of scene-agnostic localization features can be compromised by the structure of the scene. Therefore, we utilize the matching score of each point (introduced in the feature adaptation) to enable point filtering during the matching process. By removing candidate points in the point cloud below a certain score threshold, we can reduce the number of matching pairs to be computed, thus improving efficiency.

Scene-specific feature adaptation. As explained before, due to the significant gap between scene-agnostic features for localization and scene-specific features for neural rendering, we cannot learn the radiance fields directly from the R2D2 features. Thus we design a feature adaptation module to bridge this gap, which consists of a four-layer Multi-Layer Perceptron (MLP). We empirically find that although the scene-agnostic features and the scene-specific NeRF representation features serve two completely different tasks, they can be adapted via the designed module; thus any other SOTA scene-agnostic feature for visual localization can also be utilized in our framework. Moreover, as mentioned above, we also utilize the adaptation module to learn a score S for each point in the point cloud according to its dense features and position. These scores are then used for point filtering, which improves the efficiency of feature matching while maintaining the accuracy of the final pose estimation.

Scene-specific PointNeRF reconstruction. Given the pre-trained point localization feature extractor and feature adaptation module, similar to PointNeRF (Xu et al.
2021), we learn the NeRF model by minimizing the following loss function:

$$L_{render} = \sum_{r \in R} \|\hat{C}(r) - C(r)\|_2^2, \tag{5}$$

where R is the set of rays in each batch, and C(r), Ĉ(r) are the ground-truth and predicted RGB colors for ray r computed by Eq. 1. Note that we also fine-tune the feature adaptation module for each scene for better rendering quality. Please refer to our supp. material for more details.

Two-stage Pose Estimation

Once the NeRF model is learned, we perform two-stage pose estimation for the query image at test time.

Initialization with structure-based localization. The goal of the structure-based localization stage is to establish correspondences between the 2D keypoints of the query image and the 3D points in the scene point cloud, thereby providing an initial pose estimate for the subsequent pose refinement stage. For each query image q with keypoints P_q and features F_q extracted by the scene-agnostic localization feature extractor, and the point cloud P_r generated by PointNeRF with features F_r, we find 2D-3D correspondences as:

$$\forall i \in P_q, \quad M(i) = \arg\max_{j \in P_r} \frac{F_q^i \cdot F_r^j}{\|F_q^i\|\,\|F_r^j\|}, \tag{6}$$

where M(i) denotes the corresponding point within the point cloud for keypoint i of the query image q, obtained by maximizing the cosine similarity. However, as mentioned in Sec. 3.2, directly seeking correspondences within the entire point cloud is inefficient. In response, we apply a threshold on the learned score S to filter the point cloud P_r: given a threshold S_t, the filtered point cloud is P_s = {i ∈ P_r | S_i ≥ S_t}, where S_i denotes the learned score for point i. For each query image q and the discovered correspondence M, we define a residual:

$$r_i = \|p_i - \pi(R\,M(i) + t)\|_2, \tag{7}$$

where π(·) denotes the projection of a 3D point onto the image.
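The score-filtered matching of Eq. (6) can be sketched as below; the names are illustrative, with `s_t` playing the role of the threshold S_t:

```python
import numpy as np

def match_2d3d(F_q, F_r, scores, s_t=0.7):
    """Eq. (6): for each query keypoint feature, find the point-cloud
    feature with maximal cosine similarity, after filtering out points
    whose learned score is below the threshold s_t.

    F_q:    (Nq, D) query keypoint features.
    F_r:    (Nr, D) point-cloud features.
    scores: (Nr,)   learned per-point scores.
    Returns indices into the full point cloud, one per query keypoint.
    """
    keep = np.flatnonzero(scores >= s_t)  # point filtering
    A = F_q / np.linalg.norm(F_q, axis=1, keepdims=True)
    B = F_r[keep] / np.linalg.norm(F_r[keep], axis=1, keepdims=True)
    sim = A @ B.T                         # (Nq, Nkept) cosine similarities
    return keep[np.argmax(sim, axis=1)]   # map back to full-cloud indices
```

Filtering before the similarity computation shrinks the (Nq x Nr) similarity matrix, which is where the efficiency gain described above comes from.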
(R, t) denotes the camera pose to be determined, while p_i is the pixel of keypoint i in the query image. The total error over all keypoints is:

$$E(R, t) = \sum_{i \in P_q} r_i. \tag{8}$$

Moreover, a direct optimization of Eq. 8 is often susceptible to distortions caused by incorrect correspondences (outliers). Therefore, a RANSAC loop is also adopted, effectively improving the accuracy.

Pose refinement with efficient rendering-based optimization. Previous works (Yen-Chen et al. 2021; Zhu et al. 2022) have utilized gradient descent to minimize the photometric residuals between the rendered and input images for local pose estimation. However, this optimization is inefficient, since neural rendering is required at each optimization step, and it is also unstable due to backpropagation over the deep networks. Therefore, we propose a novel and efficient rendering-based optimization strategy using a warping loss function, which only requires rendering the image once and avoids backpropagation through the networks. Specifically, for a given query image q and initial pose (R, t), PNeRFLoc first renders the visual reference image q_r under the initial pose according to Eq. (1) and the depth map D̂ according to Eq. (2). Subsequently, we randomly sample N pixels within the image q_r. For the pose (R′, t′) that we aim to optimize, we define the warping loss function as:

$$L_{warping} = \sum_{p_i \in N} \|C(q, W(p_i, R, t, R', t')) - C(q_r, p_i)\|_2, \tag{9}$$

$$W(p_i, R, t, R', t') = \pi\big(R'\,(R^{-1}\pi^{-1}(p_i, \hat{D}(p_i)) - R^{-1}t) + t'\big), \tag{10}$$
Specifically, W back-projects pi into the 3D space of the qr’s camera coordinate system using the depth ˆD(pi), and then transposes it to the camera coordinate system of image q through the camera pose (R′, t′) and projects it onto image q finally. However, we find that there are often blanks in the rendering images, which occur when the rays emitted from the camera pass through the gaps in the point cloud and do not aggregate to the neural points. In this case, incorrect depth and color can interfere with the optimization. Hence, we propose using a blank depth mask to handle such situations. For the set of sampled pixels N on the visual reference image qr, we define the valid pixels set as Nv = {pi ∈N| ˆD(pi) >= 0.01} and let the warping loss function only consider the pixels in Nv. Our rendering-based optimization method optimizes the pose (R′, t′) by using warping loss aligning the RGB colors of sampled pixels on qr and q. Thereby we avoid gradient descent through the complex neural networks and improve accuracy and efficiency. Moreover, when the viewpoint changes significantly between the visual reference image and the query image, the optimization result may not reach its optimum. We could potentially enhance the accuracy by iteratively rendering the visual reference image multiple times. However, we found that a single rendering’s outcome was already satisfactory in our experiments. Therefore, to save time, our optimization process only renders the visual reference image once in practice. Experiments We first compare our method with various representative and SOTA learning approaches (Sarlin et al. 2021; Moreau et al. 2022a,b; Brachmann and Rother 2021) on both synthetic datasets and real-world datasets. Then, we offer insights into PNeRFLoc through additional ablation experiments. Datasets and Implementation Details Datasets. Following (Chen et al. 2022; Moreau et al. 
2022b,a), we evaluate our method on two standard localization datasets, since they have well-distributed training images that support dense 3D reconstruction. Moreover, we generate a synthetic localization dataset using the Replica dataset, which is commonly used in NeRF-based SLAM systems.

Table 1: Comparison on the Replica dataset. We report median translation/rotation errors (meters/degrees); the best results are highlighted in bold.

Scene   | CoordiNet   | PixLoc       | PNeRFLoc
room0   | 1.60 / 50.8 | 0.055 / 1.89 | 0.005 / 0.29
room1   | 1.38 / 47.3 | 0.020 / 0.36 | 0.016 / 0.55
room2   | 1.26 / 20.2 | 0.901 / 8.71 | 0.022 / 0.92
office0 | 1.14 / 20.1 | 0.021 / 0.71 | 0.006 / 0.69
office1 | 0.81 / 36.3 | 0.016 / 0.75 | 0.017 / 0.64
office2 | 0.83 / 19.9 | 0.012 / 0.40 | 0.007 / 0.44
office3 | 0.76 / 18.8 | 0.015 / 0.67 | 0.006 / 0.30
office4 | 0.89 / 46.3 | 0.033 / 0.82 | 0.009 / 0.23

• Cambridge Landmarks (Kendall, Grimes, and Cipolla 2015) contains five outdoor scenes, with 200 to 2000 images captured at different times for each scene. This dataset is challenging for camera pose estimation because the query images are taken at different times than the reference images, resulting in different lighting conditions and occlusions from objects such as people and vehicles.
• 7Scenes (Shotton et al. 2013) contains seven indoor scenes, captured by a Kinect RGB-D sensor. Each scene has 1k to 7k reference images and 1k to 5k query images, captured along different trajectories.
• Replica (Straub et al. 2019) contains eight synthetic indoor scenes, commonly used for SLAM evaluation. We follow iMAP (Sucar et al. 2021), using its produced sequences as training images, with an image size of 1200×680 pixels, and randomly generate 50-120 query images. Due to the small number of reference images and the significant changes in the viewpoint of the query images, this dataset presents a certain level of challenge for localization tasks.

Implementation. We use R2D2 (Revaud et al. 2019) as the scene-agnostic localization feature extractor.
In the structure-based localization stage, the score threshold S_t is set to 0.7, and the number of RANSAC iterations is set to 20k. During the rendering-based localization stage, we use the Adam optimizer with a learning rate of 0.001. All our experiments are evaluated on a single NVIDIA GeForce RTX 3090 GPU. Similar to DSAC* (Brachmann and Rother 2021), we obtain the estimated depth images rendered from a 3D model learned by Factorized-NeRF (Zhao et al. 2022). For the 7Scenes dataset, we follow PixLoc (Sarlin et al. 2021) in utilizing the estimated depth rendered by DSAC* (Brachmann and Rother 2021). Lastly, we leverage a pre-trained model from MonoSDF (Yu et al. 2022) to render the estimated depth for the Replica dataset. Please refer to the supp. material for more details.

Evaluation on the Replica Dataset

We first compare our method with the SOTA structure-based method PixLoc (Sarlin et al. 2021) and the regression-based method CoordiNet (Moreau et al. 2022a) on the Replica dataset. Our method and PixLoc adopt 200 images as reference images, while for CoordiNet, we use 2000 images for training to generate reasonable results.

Figure 2: Pose estimation results. We visualize the rendered images based on the estimated pose at time t (t = 0 to 200, photometric loss vs. warping loss) and the query image to compare different optimization methods.

Figure 3: We show the effectiveness of our reliable and repeatable score filtering on localization efficiency and accuracy.

The evaluation results are shown in Table 1. Our method achieves state-of-the-art results on the Replica dataset.
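The median translation/rotation errors reported in Table 1 can be computed per query pose as below; this is a sketch with illustrative names, where the rotation error is the angle of the relative rotation recovered from its trace:

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Translation error (meters) and rotation error (degrees) between an
    estimated and a ground-truth camera pose."""
    t_err = np.linalg.norm(t_est - t_gt)
    # Angle of the relative rotation R_est * R_gt^T, via its trace.
    cos = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return t_err, r_err

def median_errors(poses_est, poses_gt):
    """Median translation/rotation errors over a set of query images."""
    errs = [pose_errors(Re, te, Rg, tg)
            for (Re, te), (Rg, tg) in zip(poses_est, poses_gt)]
    t_errs, r_errs = zip(*errs)
    return float(np.median(t_errs)), float(np.median(r_errs))
```

The clip guards against values marginally outside [-1, 1] from floating-point noise, which would otherwise make `arccos` return NaN for near-perfect estimates.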
We believe that PNeRFLoc surpasses PixLoc on the Replica dataset for two primary reasons: i) each query image in Replica has a relatively large viewpoint change compared to the reference image, leading to a large initial reprojection error in PixLoc and causing the optimization to fall into incorrect local minima; ii) with high-quality input images and accurate camera poses, PNeRFLoc can learn a fine NeRF model, which further facilitates rendering-based optimization given the initial pose estimation from the structure-based localization. Unsurprisingly, the regression-based method CoordiNet performs worst due to its poor generalization to query images with large viewpoint changes, even though more reference images are provided to train the regression model. Please refer to our supplementary material for more comparisons.

Evaluation on the Cambridge and 7Scenes

We compare with multiple SOTA approaches (Sarlin et al. 2021; Moreau et al. 2022a,b; Brachmann and Rother 2021) on the benchmark visual localization datasets, i.e., Cambridge Landmarks and 7Scenes. We report the median translation and rotation error for each scene in Table 2. For the indoor 7Scenes dataset, since the depth generated by DSAC* (Brachmann and Rother 2021) is noisy and misaligned, it affects the training of methods based on depth images, like DSAC* and our method. As a result, our method performs on-par with or slightly worse than the SOTA PixLoc method on 7Scenes, while it still outperforms all other methods in general. In the subsequent ablation experiments, we provide the results with relatively better depth inputs, which confirms our observations. For the outdoor Cambridge Landmarks dataset, there are large appearance variations and dynamic objects. Moreover, we find that the provided camera poses and intrinsic parameters of the training images are not very accurate, which prevents PNeRFLoc from learning a fine 3D NeRF model and rendering higher-quality novel view images.
Even so, PNeRFLoc still performs on-par with the SOTA method and much better than all other NeRF-boosted localization methods (Moreau et al. 2022b; Chen et al. 2022), which demonstrates its robustness and the potential of NeRF-based methods for outdoor datasets. When more accurate depths and camera poses are provided, our method achieves SOTA performance, as shown on the Replica dataset.

Ablation Studies

Justification of the proposed rendering-based optimization. As shown in Fig. 2 and Table 3, we justify our design decisions by comparing different variants of PNeRFLoc. All experiments use 250 optimization iterations during the rendering-based localization stage. We report the median translation/rotation errors (meters/degrees) and time consumption (seconds per image). We can see that the proposed rendering-based optimization significantly improves the localization accuracy given the initial pose estimated in the structure-based estimation stage. Without the blank depth mask, accuracy degrades only slightly: despite the numerous sampling points on the image, a small proportion of samples in blank areas does not affect the overall optimization trend. Furthermore, direct optimization using the photometric loss is more time-consuming, and it may also fall into incorrect local minima due to the backpropagation through networks.
These ablation studies demonstrate the efficacy and efficiency of the proposed rendering-based optimization.

Table 2: Comparison on the Cambridge Landmarks and 7Scenes datasets. We report median translation/rotation errors (meters/degrees). We highlight the top two results in bold.

Scene (7Scenes) | PoseNet | CoordiNet | LENS | DFNet | DSAC* | PixLoc | PNeRFLoc
Chess    | 0.32 / 8.12 | 0.14 / 6.7  | 0.03 / 1.3 | 0.04 / 1.48 | 0.02 / 1.10 | 0.02 / 0.80 | 0.02 / 0.80
Fire     | 0.47 / 14.4 | 0.27 / 11.6 | 0.10 / 3.7 | 0.04 / 2.16 | 0.02 / 1.24 | 0.02 / 0.73 | 0.02 / 0.88
Heads    | 0.29 / 12.0 | 0.13 / 13.6 | 0.07 / 5.8 | 0.03 / 1.82 | 0.01 / 1.82 | 0.01 / 0.82 | 0.01 / 0.83
Office   | 0.48 / 7.68 | 0.21 / 8.6  | 0.07 / 1.9 | 0.07 / 2.01 | 0.03 / 1.15 | 0.03 / 0.82 | 0.03 / 1.05
Pumpkin  | 0.47 / 8.42 | 0.25 / 7.2  | 0.08 / 2.2 | 0.09 / 2.26 | 0.04 / 1.34 | 0.04 / 1.21 | 0.06 / 1.51
Kitchen  | 0.59 / 8.64 | 0.26 / 7.5  | 0.09 / 2.2 | 0.09 / 2.42 | 0.04 / 1.68 | 0.03 / 1.20 | 0.05 / 1.54
Stairs   | 0.47 / 13.8 | 0.28 / 12.9 | 0.14 / 3.6 | 0.14 / 3.31 | 0.03 / 1.16 | 0.05 / 1.30 | 0.32 / 5.73
Scene (Cambridge) | PoseNet | CoordiNet | LENS | DFNet | DSAC* | PixLoc | PNeRFLoc
Kings    | 1.66 / 4.86 | 0.70 / 2.92 | 0.33 / 0.5 | 0.43 / 0.87 | 0.15 / 0.3 | 0.14 / 0.24 | 0.24 / 0.29
Hospital | 2.62 / 4.90 | 0.97 / 2.08 | 0.44 / 0.9 | 0.46 / 0.87 | 0.21 / 0.4 | 0.16 / 0.32 | 0.28 / 0.37
Shop     | 1.41 / 7.18 | 0.73 / 4.69 | 0.27 / 1.6 | 0.16 / 0.59 | 0.05 / 0.3 | 0.05 / 0.23 | 0.06 / 0.27
Church   | 2.45 / 7.96 | 1.32 / 3.56 | 0.53 / 1.6 | 0.50 / 1.49 | 0.13 / 0.4 | 0.10 / 0.34 | 0.40 / 0.55
Court    | 2.45 / 3.98 | – | – | – | 0.49 / 0.3 | 0.30 / 0.14 | 0.81 / 0.25

Table 3: Ablation study. We report the median translation/rotation errors (meters/degrees) and time consumption (seconds per image). The best results are highlighted in bold.

Config.                               | room0 (Replica)     | office0 (Replica)
w/o Rendering-Based Optimization      | 0.030 / 0.79 / 3.20 | 0.082 / 1.38 / 1.88
w/o Blank Depth Mask                  | 0.027 / 0.63 / 5.84 | 0.050 / 1.09 / 5.66
w/o Warping Loss, w/ Photometric Loss | 0.035 / 0.81 / 47.7 | 0.082 / 1.38 / 39.2
Full Model                            | 0.005 / 0.29 / 5.56 | 0.006 / 0.56 / 5.45

Table 4: Comparison of using input depth and estimated depth. We report median translation/rotation errors (meters/degrees) and the best results are highlighted in bold.

Config.              | chess (7Scenes) | office (7Scenes) | stairs (7Scenes) | room1 (Replica) | office0 (Replica) | office1 (Replica)
With Estimated Depth | 0.02 / 0.80 | 0.03 / 1.05 | 0.32 / 5.73 | 0.02 / 0.55 | 0.01 / 0.56 | 0.02 / 0.64
With GT Depth        | 0.02 / 0.80 | 0.03 / 1.06 | 0.20 / 3.61 | 0.01 / 0.34 | 0.01 / 0.54 | 0.01 / 0.39

Impact of score filtering. We analyze the impact of score filtering on the Replica's office4 scene. As shown in Fig. 3, we report the time consumption (in seconds) and recall at (5cm, 5°). As the score threshold increases, the efficiency of the PnP algorithm improves significantly, because the number of remaining candidate points in the point cloud decreases, saving the time needed to compute cosine similarity. At the same time, accuracy is preserved and even slightly improved with score filtering, because more reliable points for the scene are selected.

Performance with input depth images. Since our method requires depth images to establish a point-based NeRF representation of the scene, we analyze the robustness of PNeRFLoc against the depth images. As shown in Table 4, we compare our results of learning the NeRF model with ground-truth input depth and estimated depth on the 7Scenes and Replica datasets. Since the Replica dataset is synthetic, its GT depth is dense and accurate, allowing for more precise neural point clouds and better rendering quality. In contrast, the estimated depth often loses details. As illustrated in Fig. 4, the estimated depth rendered by MonoSDF (Yu et al. 2022) fails to capture the vase. Therefore, using GT depth on the Replica dataset significantly improves localization accuracy.
In real-world scenes, however, the depth obtained by the Kinect RGB-D sensor is noisy, mainly influenced by the reflection and refraction of object surfaces, as well as the sensor's maximum and minimum measurement ranges. Consequently, the improvement in scene training quality is limited, and there is no significant improvement in localization accuracy. However, the advantage of taking the input depth is evident in the stairs scene, where the complex spatial structure leads to misalignment in the estimated depth. Please refer to our supp. material for more ablation studies.

Figure 4: We inspect the trained model with estimated depth or with input GT depth.

Conclusion

In this paper, we present a novel visual localization method based on point-based neural scene representation. With a novel feature adaptation module that bridges the features for localization and neural rendering, the proposed PNeRFLoc enables 2D-3D feature matching for initial pose estimation and rendering-based optimization for pose refinement. Moreover, we also develop several techniques for efficient rendering-based optimization and robustness against illumination changes and dynamic objects. Experiments show the superiority of the proposed method by integrating both structure-based and rendering-based optimization, especially on the synthetic data suitable for NeRF modeling. Although our current framework is more efficient than the existing neural rendering-based optimization, we should further improve its efficiency and integrate it into visual odometry for real-time applications.

Acknowledgments

This work was partially supported by the NSFC (No. 62102356). We are also very grateful for the illustrations crafted by Lin Zeng.

References

Arandjelovic, R.; Gronat, P.; Torii, A.; Pajdla, T.; and Sivic, J. 2016. NetVLAD: CNN architecture for weakly supervised place recognition.
In Proceedings of the IEEE conference on computer vision and pattern recognition, 5297–5307.
Balntas, V.; Li, S.; and Prisacariu, V. 2018. Relocnet: Continuous metric learning relocalisation using neural nets. In Proceedings of the European Conference on Computer Vision (ECCV), 751–767.
Bay, H.; Ess, A.; Tuytelaars, T.; and Van Gool, L. 2008. Speeded-up robust features (SURF). Computer vision and image understanding, 110(3): 346–359.
Brachmann, E.; Krull, A.; Nowozin, S.; Shotton, J.; Michel, F.; Gumhold, S.; and Rother, C. 2017. DSAC-differentiable RANSAC for camera localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6684–6692.
Brachmann, E.; and Rother, C. 2021. Visual camera relocalization from RGB and RGB-D images using DSAC. IEEE transactions on pattern analysis and machine intelligence, 44(9): 5847–5865.
Brahmbhatt, S.; Gu, J.; Kim, K.; Hays, J.; and Kautz, J. 2018. Geometry-aware learning of maps for camera localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2616–2625.
Bujnak, M.; Kukelova, Z.; and Pajdla, T. 2008. A general solution to the P4P problem for camera with unknown focal length. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, 1–8. IEEE.
Camposeco, F.; Sattler, T.; Cohen, A.; Geiger, A.; and Pollefeys, M. 2017. Toroidal constraints for two-point localization under high outlier ratios. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4545–4553.
Cavallari, T.; Golodetz, S.; Lord, N. A.; Valentin, J.; Di Stefano, L.; and Torr, P. H. 2017. On-the-fly adaptation of regression forests for online camera relocalisation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4457–4466.
Chen, S.; Li, X.; Wang, Z.; and Prisacariu, V. A. 2022. DFNet: Enhance absolute pose regression with direct feature matching.
In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part X, 1–17. Springer.
Chen, S.; Wang, Z.; and Prisacariu, V. 2021. Direct-PoseNet: absolute pose regression with photometric consistency. In 2021 International Conference on 3D Vision (3DV), 1175–1185. IEEE.
Cheng, W.; Lin, W.; Chen, K.; and Zhang, X. 2019. Cascaded Parallel Filtering for Memory-Efficient Image-Based Localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
Chum, O.; and Matas, J. 2008. Optimal randomized RANSAC. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(8): 1472–1482.
DeTone, D.; Malisiewicz, T.; and Rabinovich, A. 2018. Superpoint: Self-supervised interest point detection and description. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 224–236.
Dusmanu, M.; Rocco, I.; Pajdla, T.; Pollefeys, M.; Sivic, J.; Torii, A.; and Sattler, T. 2019. D2-net: A trainable CNN for joint description and detection of local features. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8092–8101.
Fischler, M. A.; and Bolles, R. C. 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6): 381–395.
Germain, H.; Bourmaud, G.; and Lepetit, V. 2020. S2DNet: Learning accurate correspondences for sparse-to-dense feature matching. arXiv preprint arXiv:2004.01673.
Haralick, B. M.; Lee, C.-N.; Ottenberg, K.; and Nölle, M. 1994. Review and analysis of solutions of the three point perspective pose estimation problem. International journal of computer vision, 13: 331–356.
Kendall, A.; and Cipolla, R. 2017. Geometric loss functions for camera pose regression with deep learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5974–5983.
Kendall, A.; Grimes, M.; and Cipolla, R. 2015.
Posenet: A convolutional network for real-time 6-DOF camera relocalization. In Proceedings of the IEEE international conference on computer vision, 2938–2946.
Li, X.; Wang, S.; Zhao, Y.; Verbeek, J.; and Kannala, J. 2020. Hierarchical scene coordinate classification and regression for visual localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11983–11992.
Lindenberger, P.; Sarlin, P.-E.; Larsson, V.; and Pollefeys, M. 2021. Pixel-perfect structure-from-motion with feature-metric refinement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5987–5997.
Lowe, D. G. 2004. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60: 91–110.
Maggio, D.; Abate, M.; Shi, J.; Mario, C.; and Carlone, L. 2022. Loc-NeRF: Monte Carlo Localization using Neural Radiance Fields. arXiv preprint arXiv:2209.09050.
Martin-Brualla, R.; Radwan, N.; Sajjadi, M. S.; Barron, J. T.; Dosovitskiy, A.; and Duckworth, D. 2021. NeRF in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7210–7219.
Max, N. 1995. Optical Models for Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2): 99–108.
Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In ECCV.
Moreau, A.; Piasco, N.; Tsishkou, D.; Stanciulescu, B.; and de La Fortelle, A. 2022a. CoordiNet: uncertainty-aware pose regressor for reliable vehicle localization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2229–2238.
Moreau, A.; Piasco, N.; Tsishkou, D.; Stanciulescu, B.; and de La Fortelle, A. 2022b. LENS: Localization enhanced by NeRF synthesis. In Conference on Robot Learning, 1347–1356. PMLR.
Revaud, J.; Weinzaepfel, P.; de Souza, C.
R.; and Humenberger, M. 2019. R2D2: Repeatable and Reliable Detector and Descriptor. In NeurIPS.
Sarlin, P.-E.; Cadena, C.; Siegwart, R.; and Dymczyk, M. 2019. From Coarse to Fine: Robust Hierarchical Localization at Large Scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Sarlin, P.-E.; DeTone, D.; Malisiewicz, T.; and Rabinovich, A. 2020. Superglue: Learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4938–4947.
Sarlin, P.-E.; Unagar, A.; Larsson, M.; Germain, H.; Toft, C.; Larsson, V.; Pollefeys, M.; Lepetit, V.; Hammarstrand, L.; Kahl, F.; et al. 2021. Back to the feature: Learning robust camera localization from pixels to pose. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3247–3257.
Sattler, T.; Havlena, M.; Radenovic, F.; Schindler, K.; and Pollefeys, M. 2015. Hyperpoints and fine vocabularies for large-scale location recognition. In Proceedings of the IEEE International Conference on Computer Vision, 2102–2110.
Sattler, T.; Leibe, B.; and Kobbelt, L. 2016. Efficient & effective prioritized matching for large-scale image-based localization. IEEE transactions on pattern analysis and machine intelligence, 39(9): 1744–1756.
Shavit, Y.; Ferens, R.; and Keller, Y. 2021. Paying attention to activation maps in camera pose regression. arXiv preprint arXiv:2103.11477.
Shotton, J.; Glocker, B.; Zach, C.; Izadi, S.; Criminisi, A.; and Fitzgibbon, A. 2013. Scene coordinate regression forests for camera relocalization in RGB-D images. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2930–2937.
Straub, J.; Whelan, T.; Ma, L.; Chen, Y.; Wijmans, E.; Green, S.; Engel, J. J.; Mur-Artal, R.; Ren, C.; Verma, S.; et al. 2019. The Replica dataset: A digital replica of indoor spaces.
arXiv preprint arXiv:1906.05797.
Sucar, E.; Liu, S.; Ortiz, J.; and Davison, A. J. 2021. iMAP: Implicit mapping and positioning in real-time. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6229–6238.
Toft, C.; Stenborg, E.; Hammarstrand, L.; Brynte, L.; Pollefeys, M.; Sattler, T.; and Kahl, F. 2018. Semantic Match Consistency for Long-Term Visual Localization. In Proceedings of the European Conference on Computer Vision (ECCV).
Torii, A.; Arandjelovic, R.; Sivic, J.; Okutomi, M.; and Pajdla, T. 2015. 24/7 place recognition by view synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1808–1817.
Walch, F.; Hazirbas, C.; Leal-Taixe, L.; Sattler, T.; Hilsenbeck, S.; and Cremers, D. 2017. Image-based localization using LSTMs for structured feature correlation. In Proceedings of the IEEE International Conference on Computer Vision, 627–637.
Xu, Q.; Xu, Z.; Philip, J.; Bi, S.; Shu, Z.; Sunkavalli, K.; and Neumann, U. 2022. Point-NeRF: Point-based neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5438–5448.
Yang, L.; Bai, Z.; Tang, C.; Li, H.; Furukawa, Y.; and Tan, P. 2019. SANet: Scene agnostic network for camera localization. In Proceedings of the IEEE/CVF international conference on computer vision, 42–51.
Yen-Chen, L.; Florence, P.; Barron, J. T.; Rodriguez, A.; Isola, P.; and Lin, T.-Y. 2021. iNeRF: Inverting neural radiance fields for pose estimation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1323–1330. IEEE.
Yu, Z.; Peng, S.; Niemeyer, M.; Sattler, T.; and Geiger, A. 2022. MonoSDF: Exploring monocular geometric cues for neural implicit surface reconstruction. arXiv preprint arXiv:2206.00665.
Zeisl, B.; Sattler, T.; and Pollefeys, M. 2015. Camera Pose Voting for Large-Scale Image-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Zhao, B.; Yang, B.; Li, Z.; Li, Z.; Zhang, G.; Zhao, J.; Yin, D.; Cui, Z.; and Bao, H. 2022. Factorized and controllable neural re-rendering of outdoor scene for photo extrapolation. In Proceedings of the 30th ACM International Conference on Multimedia, 1455–1464.
Zhu, Z.; Peng, S.; Larsson, V.; Xu, W.; Bao, H.; Cui, Z.; Oswald, M. R.; and Pollefeys, M. 2022. NICE-SLAM: Neural implicit scalable encoding for SLAM. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12786–12796.
SimDistill: Simulated Multi-Modal Distillation for BEV 3D Object Detection

Haimei Zhao1, Qiming Zhang1, Shanshan Zhao1, Zhe Chen2, Jing Zhang1, Dacheng Tao1
1School of Computer Science, The University of Sydney, Australia
2School of Computing, Engineering and Mathematical Sciences, La Trobe University, Australia
hzha7798@uni.sydney.edu.au, qzha2506@uni.sydney.edu.au, sshan.zhao00@gmail.com, zhe.chen@latrobe.edu.au, jing.zhang1@sydney.edu.au, dacheng.tao@gmail.com

Abstract

Multi-view camera-based 3D object detection has become popular due to its low cost, but accurately inferring 3D geometry solely from camera data remains challenging and may lead to inferior performance. Although distilling precise 3D geometry knowledge from LiDAR data could help tackle this challenge, the benefits of LiDAR information could be greatly hindered by the significant modality gap between different sensory modalities. To address this issue, we propose a Simulated multi-modal Distillation (SimDistill) method by carefully crafting the model architecture and distillation strategy. Specifically, we devise multi-modal architectures for both teacher and student models, including a LiDAR-camera fusion-based teacher and a simulated fusion-based student. Owing to the “identical” architecture design, the student can mimic the teacher to generate multi-modal features with merely multi-view images as input, where a geometry compensation module is introduced to bridge the modality gap. Furthermore, we propose a comprehensive multi-modal distillation scheme that supports intra-modal, cross-modal, and multi-modal fusion distillation simultaneously in the Bird’s-eye-view space. Incorporating them together, our SimDistill can learn better feature representations for 3D object detection while maintaining a cost-effective camera-only deployment.
Extensive experiments validate the effectiveness and superiority of SimDistill over state-of-the-art methods, achieving an improvement of 4.8% mAP and 4.1% NDS over the baseline detector. The source code will be released at https://github.com/ViTAE-Transformer/SimDistill.

Introduction

3D object detection is a pivotal technique with extensive applications in fields such as autonomous driving, robotics, and virtual/augmented reality (Zhang and Tao 2020). In recent years, camera-based 3D object detection methods, which infer objects’ 3D locations from multi-view images (Huang et al. 2021; Li et al. 2023b), have attracted great attention from both academia and industry because of their high perceptual ability on dense color and texture information and their low deployment cost. However, due to the lack of accurate 3D geometry reasoning ability, their detection performance falls largely behind LiDAR-based methods, which poses a challenge to the practical deployment of camera-based methods.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Comparison of our SimDistill with previous distillation frameworks. (a) Intra-modal distillation between camera-only teacher and student models cannot learn accurate 3D information due to the limited capacity of the teacher model for inferring 3D geometry. (b) Cross-modal distillation between the LiDAR teacher and camera student enables learning useful 3D information from the teacher but suffers from the large cross-modal gap.
(c) Our simulated multi-modal distillation enables effective knowledge distillation within/between modalities and fully takes advantage of complementary information from different modalities.

To address this issue, researchers attempt to impose LiDAR data to provide accurate 3D geometry information. Some multi-view camera-based methods (Li et al. 2023b,a) generate ground-truth depth from the LiDAR point cloud and use it as the supervisory signal for depth estimation to help transform image features to the Bird’s-eye-view (BEV) space (Zhao et al. 2022) accurately. Besides directly using LiDAR as supervision during training, some recent work employs LiDAR information by applying the knowledge distillation (KD) technique (Gou et al. 2021) to improve the detection performance of camera-based methods. KD-based 3D object detection methods usually leverage the informative features or predictions of a well-trained teacher model to facilitate the learning of the student model. One straightforward approach is intra-modal distillation (Li et al. 2022a; Zhang et al. 2022) between a large teacher model and a small student model, as shown in Figure 1 (a), which conducts distillation within the image modality. However, the ceiling performance of the model can be limited since the teacher model infers 3D geometry solely from image data as well. Another approach is cross-modal distillation, as shown in Figure 1 (b), which utilizes LiDAR data as the input of teacher models and transfers 3D knowledge to camera-based students (Chong et al. 2021; Chen et al. 2023; Li et al. 2022b). The student is usually forced to learn and mimic the output of a LiDAR-based teacher in different representation spaces, including monocular view features (Chong et al. 2021), BEV features (Chen et al. 2023), and voxel features (Li et al. 2022b).
Nevertheless, performing knowledge distillation directly between different modalities might face significant cross-modal gaps and struggle to align features learned by distinct architectures of teacher and student models, resulting in limited performance improvements. In this paper, we address this challenge from the perspective of architecture design and a multi-modal knowledge distillation scheme, presenting a Simulated multi-modal Distillation (SimDistill) method for 3D object detection. It encourages the student to simulate multi-modal representations with solely the image modality as input, thereby advancing representation learning for 3D object detection. For the architecture, we design a LiDAR-camera fusion-based teacher and a simulated multi-modal student. The student model not only involves a camera path but also introduces an additional simulated LiDAR path parallel to the camera counterpart, as shown in Figure 1 (c). Different from other distillation methods in Figure 1 (a) and (b), our student model possesses two knowledge-transferring paths to learn complementary information from the corresponding two branches of the teacher model. Despite its simulated nature, our student shares a nearly “identical” pipeline with the teacher to produce the camera feature, LiDAR feature, fusion feature, and detection predictions. The resulting aligned learning workflow greatly mitigates the cross-modal gap and benefits multi-modal knowledge distillation. Built upon this architecture, we propose a new simulated multi-modal distillation scheme that supports intra-modal (IMD), cross-modal (CMD), and multi-modal fusion distillation (MMD) simultaneously. We adopt the widely used MSE loss for distillation of the corresponding feature representations in the unified BEV space, together with an additional quality-aware prediction distillation (Hong, Dai, and Ding 2022).
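The three feature-level terms (IMD, CMD, MMD) described above can be sketched as MSE losses between corresponding student/teacher BEV maps. This is an illustrative sketch only; the dictionary keys and loss weights are our own naming, not SimDistill's actual API, and the quality-aware prediction term is omitted.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two equally shaped feature maps."""
    return float(np.mean((a - b) ** 2))

def multimodal_distill_loss(student, teacher, w=(1.0, 1.0, 1.0)):
    """Feature-level distillation in the unified BEV space.

    `student`/`teacher` are dicts of (C, H, W) arrays; the student holds a
    camera BEV map, a *simulated* LiDAR BEV map, and a fused BEV map that
    mimic the teacher's camera, LiDAR, and fused counterparts.
    """
    l_imd = mse(student['cam_bev'], teacher['cam_bev'])          # intra-modal
    l_cmd = mse(student['sim_lidar_bev'], teacher['lidar_bev'])  # cross-modal
    l_mmd = mse(student['fused_bev'], teacher['fused_bev'])      # fusion
    return w[0] * l_imd + w[1] * l_cmd + w[2] * l_mmd
```

When the student's simulated features exactly match the teacher's, all three terms vanish, which is the fixed point the distillation objective pulls toward.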
It is noteworthy that directly transferring knowledge from the LiDAR feature to the simulated LiDAR feature is challenging due to the cross-modal gap. To approach this challenge, we devise a geometry compensation module in CMD that helps the student attend more to the valuable surrounding context from the learned locations, conduct geometry remediation, and distill more informative features from object regions. Equipping the proposed model with the distillation scheme, our SimDistill can effectively learn better feature representations for 3D object detection while enjoying cost-effective camera-only deployment.

The main contribution of this paper is threefold. Firstly, we propose a unique multi-modal distillation framework for BEV 3D object detection, including a LiDAR-camera fusion-based teacher and a carefully crafted simulated multi-modal student. By ensuring that the teacher and student models share nearly the same workflows, we effectively reduce the modality gap in knowledge distillation. Secondly, we present a novel simulated multi-modal distillation scheme that supports intra-modal, cross-modal, and multi-modal fusion distillation simultaneously, which is a universal strategy and can be easily adapted to different models. Thirdly, comprehensive experiments and ablation studies on the nuScenes benchmark validate the effectiveness of SimDistill and its superiority over existing state-of-the-art methods, improving the mAP and NDS of the baseline detector by 4.8% and 4.1%, respectively.

Related Work

Camera-based 3D Object Detection

Monocular 3D object detection methods have been widely studied and have made great progress (Simonelli et al. 2019; Reading et al. 2021; Wang et al. 2021b; Lu et al. 2021; Ma et al. 2021; Huang et al. 2022a) on the KITTI (Geiger, Lenz, and Urtasun 2012) benchmark. However, with the release of large-scale datasets with multi-view cameras such as nuScenes (Caesar et al. 2020) and Waymo (Sun et al.
2020), there is growing attention to accurate 3D object detection in these more challenging scenes. Recent works adopt the Bird’s-eye view (BEV) representation as an ideal feature space for multi-view perception due to its excellent ability to address scale-ambiguity and occlusion issues (Huang et al. 2021; Huang and Huang 2022; Li et al. 2022c). Various methods have been proposed to transform perspective image features into the BEV space, such as the lifting operation from LSS (Philion and Fidler 2020) used by BEVDet (Huang et al. 2021) and the cross-attention mechanism-based grid queries used by BEVFormer (Li et al. 2022c). The camera-based BEVDet approach has been further improved by imposing depth supervision (Li et al. 2023b,a; Wang et al. 2022; Chu et al. 2023) and temporal aggregation (Huang and Huang 2022; Park et al. 2022), resulting in better performance. However, there is still a significant performance gap compared to LiDAR-based and fusion-based counterparts.

Fusion-based 3D Object Detection

LiDAR differs from cameras in its ability to capture precise geometric and structural information. However, the data it produces is sparse and irregular, with a large volume. Some methods use PointNet (Qi et al. 2017a) directly on the raw point cloud (Qi et al. 2017b; Shi, Wang, and Li 2019; Chen et al. 2022) to learn 3D features, while others voxelize the point cloud into pillars (Lang et al. 2019; Wang et al. 2020; Yin, Zhou, and Krahenbuhl 2021) or voxels (Zhou and Tuzel 2018; Yan, Mao, and Li 2018) before extracting features using SparseConvNet (Graham, Engelcke, and Van Der Maaten 2018). State-of-the-art techniques (Yin, Zhou, and Krahenbuhl 2021; Bai et al. 2022) typically transform 3D features into the BEV representation to simplify operations in 3D space, and then feed the resultant features to subsequent detection heads.
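The pillarization idea mentioned above can be illustrated with a toy sketch: points are scattered into a regular BEV grid indexed by their (x, y) coordinates. Real pillar encoders such as PointPillars keep learned per-point features in each pillar; this sketch keeps only occupancy counts to show the indexing.

```python
import numpy as np

def pillarize(points, grid_size=0.5, x_range=(0.0, 4.0), y_range=(0.0, 4.0)):
    """Toy pillarization: count points falling into each BEV grid cell.

    points: (N, 3) array of (x, y, z); z is ignored since pillars collapse
    the vertical axis. Returns an (H, W) integer occupancy-count grid.
    """
    W = int((x_range[1] - x_range[0]) / grid_size)
    H = int((y_range[1] - y_range[0]) / grid_size)
    bev = np.zeros((H, W), dtype=np.int64)
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            i = int((y - y_range[0]) / grid_size)  # row index from y
            j = int((x - x_range[0]) / grid_size)  # column index from x
            bev[i, j] += 1
    return bev

pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.3, 1.0], [3.9, 3.9, 0.5]])
bev = pillarize(pts)
print(bev[0, 0], bev[7, 7])  # 2 1
```

Because the grid is 2D, standard image backbones can then be applied directly to the resulting BEV map, which is what makes pillar-based detectors efficient.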
Due to their distinct strengths in perception, both cameras and LiDAR are integrated in sensor fusion methods to enhance the performance of perception systems. Existing fusion-based approaches can be categorized into input-level methods (Vora et al. 2020; Wang et al. 2021a; Xu et al. 2021) and feature-level methods (Bai et al. 2022; Liang et al. 2022; Liu et al. 2023; Yan et al. 2023), depending on the stage at which information from different sensors is combined. Recently, it has been shown that the BEV space is an ideal space for multi-modal fusion, resulting in outstanding performance (Liang et al. 2022; Liu et al. 2023). These methods follow a simple yet effective pipeline that involves extracting features from both modalities, transforming features into the BEV space, fusing multi-modal features using fusion modules, and conducting subsequent detection, largely improving the performance.

Knowledge Distillation in 3D Object Detection

Knowledge distillation presents a promising avenue for empowering compact models (i.e., students) with effective representations via knowledge transfer from larger models (i.e., teachers). In the context of 3D object detection, prior research (Cho et al. 2023; Zhang et al. 2023, 2022; Yang et al. 2022) has successfully extended knowledge distillation techniques, requiring the student network to emulate features or predictions learned by a teacher model within the same modality. Recent advancements in KD-based 3D object detection have ventured into employing teachers from different modalities (Chong et al. 2021; Li et al. 2022a; Hong, Dai, and Ding 2022; Chen et al. 2023), i.e., leveraging a LiDAR-based teacher. UVTR (Li et al. 2022b) aligns features from both LiDAR and camera in voxel space, facilitating knowledge distillation. BEVDistill (Chen et al.
2023) transforms features into the BEV space for the feature and instance-wise prediction distillation. In a similar vein, TiG-BEV (Huang et al. 2022b) introduces inner-depth supervision and inner-feature distillation to enhance geometry learning in the BEV space. These cross-modal distillation techniques underscore the potential of transferring knowledge from robust LiDAR teachers to camera-based students. Nevertheless, these approaches overlook the prospect of distilling multi-modal knowledge for 3D object detection. Our approach diverges by exploring a multi-modal teacher and designing a nearly identical yet simulated multi-modal architecture alongside tailored distillation schemes to effectively perform multi-modal distillation. While concurrent work UniDistill (Zhou et al. 2023) also embraces a multi-modal teacher, it is designed as a universal knowledge distillation framework to support both single-to-single and fusion-to-single cross-modal distillation. It pays no attention to the architecture discrepancy issue between teacher and student and fails to perform comprehensive multi-modal distillation and overcome the cross-modal gap.

Methodology

In this section, we present the details of how the proposed SimDistill realizes comprehensive multi-modal knowledge distillation for 3D object detection. We first introduce the model architecture, which consists of a multi-modal fusion-based teacher and a simulated multi-modal student. Next, we describe the simulated multi-modal distillation scheme that supports knowledge distillation within and between modalities. Last, we present the training objectives for our method.

Multi-modal Architecture

SimDistill is proposed as a flexible multi-modal distillation method, offering the flexibility to select both the teacher model and the student model from diverse methods. In the subsequent sections, we present a concrete implementation of SimDistill, employing BEVFusion (Liu et al.
2023) as the teacher model and designing the student model based on the camera branch of BEVFusion (BEVFusion-C). The architectural layout of SimDistill is depicted in Figure 2. The upper block depicts the configuration of the teacher model, while the lower block represents the student model. In both instances, the LiDAR branch and the camera branch workflows are denoted by red and blue arrows, respectively.

Multi-modal Teacher

To encode multi-modal knowledge effectively, we adopt the state-of-the-art fusion-based method, i.e., BEVFusion (Liu et al. 2023), as the teacher model. Its architecture comprises two branches, as depicted in the top part of Figure 2. The LiDAR branch follows the standard pipeline of a LiDAR-based detector (Yan, Mao, and Li 2018; Yin, Zhou, and Krahenbuhl 2021). It uses SparseConvNet (Graham, Engelcke, and Van Der Maaten 2018) $\mathrm{En}^T_{3D}$ to extract the 3D features, and obtains the BEV features $F^T_{Lbev}$ through vertical dimension reduction ($\mathrm{Flatten}$). On the other hand, the camera branch follows the paradigm of BEVDet (Huang et al. 2021), using a 2D feature extractor $\mathrm{En}^T_{2D}$ and an efficient projection $\mathrm{Proj}^T$ to transform features from the camera view to the BEV space $F^T_{Cbev}$. Both modalities' features are then embedded in a unified BEV space using a fully-convolutional fusion module $\mathrm{fuse}^T$, which produces the fused BEV features $F^T_{Ubev}$. Finally, a detection head $\mathrm{head}^T$ predicts the objects' bounding boxes and classes $P^T$. This process is formulated as:

$$F^T_{Lbev} = \mathrm{Flatten}(\mathrm{En}^T_{3D}(L)), \quad F^T_{Cbev} = \mathrm{Proj}^T(\mathrm{En}^T_{2D}(I)), \quad F^T_{Ubev} = \mathrm{fuse}^T(F^T_{Lbev}, F^T_{Cbev}), \quad P^T = \mathrm{head}^T(F^T_{Ubev}), \tag{1}$$

where $L$ and $I$ denote the LiDAR and image inputs. $T$ and $S$ in all formulations represent the teacher and student models. The projection $\mathrm{Proj}$ will be explained in the following part.

Simulated multi-modal Student

For the student model, we adopt BEVFusion-C (Liu et al. 2023) as the base model.
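Read as code, the teacher pipeline of Eq. (1) is a straightforward composition of four modules. The sketch below uses toy numpy stand-ins for every module; the shapes, the identity projection, and the averaging fusion are illustrative assumptions, not BEVFusion's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
C, Z, H, W = 8, 2, 4, 4   # channels, vertical bins, BEV grid size (all toy)

def en_3d(lidar):           # SparseConvNet stand-in: voxel features (C, Z, H, W)
    return rng.standard_normal((C, Z, H, W))

def flatten_vertical(f3d):  # vertical dimension reduction -> BEV map (C, H, W)
    return f3d.mean(axis=1)

def en_2d(images):          # 2D image encoder stand-in
    return rng.standard_normal((C, H, W))

def proj_t(f2d):            # camera-view -> BEV projection stand-in
    return f2d              # identity here; the real Proj^T is LSS-style lifting

def fuse_t(f_lbev, f_cbev): # fully-convolutional fusion stand-in
    return 0.5 * (f_lbev + f_cbev)

def head_t(f_ubev):         # detection head stand-in: per-cell score map
    return f_ubev.sum(axis=0)

f_lbev = flatten_vertical(en_3d(lidar=None))
f_cbev = proj_t(en_2d(images=None))
f_ubev = fuse_t(f_lbev, f_cbev)
preds = head_t(f_ubev)
```

The key structural point is that both branches end in the same unified BEV grid, so fusion is a simple per-cell operation.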
To mimic the multi-modal fusion pipeline of the teacher model, we make a modification to the network, as shown in the bottom part of Figure 2. Specifically, after feature extraction from the 2D encoder $\mathrm{En}^S_{2D}$, we devise an additional simulated LiDAR branch (workflow denoted with red arrows) in parallel to the camera branch (blue arrows in the bottom) to simulate LiDAR features from images, which are supervised by the real LiDAR features from the teacher. In the camera branch, we adopt the same efficient view projection $\mathrm{Proj}_C$ as the one used in the teacher model ($\mathrm{Proj}^T$) to transform camera-view features to the corresponding BEV features $F^S_{Cbev}$ (Philion and Fidler 2020; Liu et al. 2023). During the feature transformation, the extracted 2D feature $F^S_{Cuv}$ is first fed to a light Depth Net $\phi$ and a

Figure 2: Overall pipeline of SimDistill. It consists of a fusion-based teacher model (top) and a simulated multi-modal student model (bottom). SimDistill supports (1) Intra-Modal Distillation (IMD) between the camera features of the teacher and student; (2) Cross-Modal Distillation (CMD) between the teacher's LiDAR feature and the student's Simulated-LiDAR feature.
(3) Multi-Modal fusion Distillation (MMD) between the fusion features (MMD-F) and predictions (MMD-P) of the teacher and student. The workflows of the (simulated) LiDAR and camera branches are denoted by red and blue arrows, respectively.

Context Net $\psi$ to predict the depth distribution and semantic context on each pixel. Then, each 2D feature pixel can be scattered into $D$ discrete points along the camera ray by rescaling the context feature with the corresponding depth probabilities. The resulting 3D feature point cloud is then processed by the efficient BEV pooling operation $\rho$ to aggregate features in BEV grids and obtain the BEV features:

$$F^S_{Cbev} = \mathrm{Proj}_C(F^S_{Cuv}) = \rho(\psi(F^S_{Cuv}) \times \phi(F^S_{Cuv})). \tag{2}$$

In the simulated LiDAR branch, to acquire the simulated LiDAR feature $F^S_{Lbev}$, the view projection $\mathrm{Proj}_L$ is combined with a specifically designed geometry compensation module in both camera-view and BEV spaces, which will be explained later in Eq. (5) of Sec. 3.2.2. It offers the ability to mitigate the geometry misalignment caused by inaccurate depth prediction and the modality gap during distillation. After obtaining BEV features from the two branches, $F^S_{Cbev}$ and $F^S_{Lbev}$, we use the fusion module $\mathrm{fuse}^S$ to acquire the multi-modal fusion features $F^S_{Ubev}$, and the detection head $\mathrm{head}^S$ is exploited to yield the final detection results $P^S$. Both the fusion module and detection head have the same architecture as the teacher. This process is formulated as:

$$F^S_{Cuv} = \mathrm{En}^S_{2D}(I), \quad F^S_{Lbev} = \mathrm{Proj}_L(F^S_{Cuv}), \quad F^S_{Cbev} = \mathrm{Proj}_C(F^S_{Cuv}), \quad F^S_{Ubev} = \mathrm{fuse}^S(F^S_{Lbev}, F^S_{Cbev}), \quad P^S = \mathrm{head}^S(F^S_{Ubev}). \tag{3}$$

Owing to the simulated multi-modal fusion architecture, the student model can learn features from multiple modalities without equipping a real LiDAR. In the next part, we will explain how this architecture facilitates effective knowledge distillation within and between modalities, including intra-modal, cross-modal, and multi-modal fusion distillation.
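The lifting inside $\mathrm{Proj}_C$ of Eq. (2) can be sketched compactly: a softmax depth distribution per pixel rescales the context feature into a feature point cloud, and BEV pooling then sums the points falling into each grid cell. All sizes, the random features, and the cell assignment below are toy assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
C, D, N, G = 8, 6, 5, 4   # context channels, depth bins, pixels, BEV cells (toy)

ctx = rng.standard_normal((N, C))                              # psi: context per pixel
logits = rng.standard_normal((N, D))
depth = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)  # phi: depth distribution

# Lift: each pixel is scattered into D points along its camera ray, each carrying
# the context feature rescaled by the corresponding depth probability.
points = depth[:, :, None] * ctx[:, None, :]                   # (N, D, C) feature points

# BEV pooling rho: sum the features of the points landing in each BEV grid cell.
cell = rng.integers(0, G, size=(N, D))                         # toy cell assignment
bev = np.zeros((G, C))
np.add.at(bev, cell.reshape(-1), points.reshape(-1, C))
```

Because the depth weights of each pixel sum to one, the total feature mass pooled into the BEV grid equals the total context feature mass, regardless of where the points land.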
Multi-modal Distillation

To better utilize the knowledge of different modalities encoded by different branches of the teacher model, we propose a novel simulated multi-modal distillation scheme including Intra-modal Distillation (IMD), Cross-modal Distillation (CMD), and Multi-modal fusion Distillation (MMD).

Intra-modal Distillation

Since both the teacher and student models take images as input, a straightforward strategy is to align the image features from the camera branch of both models, which we name intra-modal distillation. Specifically, we leverage the BEV feature of the teacher $F^T_{Cbev}$ as the supervisory signal for the learning of the student counterpart $F^S_{Cbev}$ via an MSE loss, i.e.,

$$\mathcal{L}_{IMD} = \mathrm{MSE}(F^T_{Cbev}, F^S_{Cbev}). \tag{4}$$

Due to the same modality in IMD, the student model can be trained directly through the above distillation objective to gain useful visual domain knowledge to facilitate 3D object detection performance. However, relying on images alone may not provide enough geometry-related information to help detect target objects. To address this limitation, we implement cross-modal distillation on the proposed simulated LiDAR branch in the student model, enabling it to gain knowledge from the LiDAR modality.

Figure 3: Illustration of Geometry Compensation Module (GCM). The colorful voxels denote learned features of the target object. Best viewed with zoom-in.

Cross-modal Distillation

CMD aims to align the LiDAR BEV features of the teacher and the simulated LiDAR BEV features of the student.
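Both IMD in Eq. (4) and the CMD objective just introduced reduce to an (optionally masked) MSE between BEV feature maps; a toy numpy sketch, with the feature shapes and the block-shaped mask as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
C, H, W = 4, 8, 8
f_t = rng.standard_normal((C, H, W))   # a teacher BEV feature map
f_s = rng.standard_normal((C, H, W))   # the matching student BEV feature map

def mse(a, b):
    return np.mean((a - b) ** 2)

# Eq. (4): plain feature mimicry for the shared camera modality.
l_imd = mse(f_t, f_s)

# Masked variant used by CMD (cf. Eq. (6) in the text): an object-aware mask M
# restricts the loss to regions around ground-truth objects.
m = np.zeros((H, W))
m[3:6, 3:6] = 1.0                      # toy heatmap-style mask
l_cmd = mse(m * f_t, m * f_s)
```

Since the mask zeroes out background cells, the masked loss is never larger than the unmasked one; it simply concentrates the supervision on the informative foreground.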
However, due to geometry misalignment and modal difference, directly applying the distillation loss between features generated from different modalities may lead to an incorrect mimic of the noisy features and an inaccurate 3D geometry representation. Therefore, we propose a geometry compensation module to address the geometry misalignment and handle the modal difference.

Geometry Compensation Module (GCM)

A crucial process in the multi-view camera-based detection method is the view projection operation, which transforms camera-view (UV) features into the BEV space. Inaccurate geometry inference in this process leads to geometry misalignment between features learned from images and LiDAR, exacerbating the modality gap. Therefore, we propose to conduct geometry compensation before and after the view projection in the simulated LiDAR branch to learn more accurate geometry features in both UV and BEV space. Deformable convolutions and deformable attention are known to be effective in enabling neural networks to model spatial transformations and account for geometric deformations or misalignments (Dai et al. 2017; Zhu et al. 2020). Therefore, we adopt deformable self-attention layers to construct GCM, as shown in Figure 3. For geometry compensation in the UV space, we first generate a uniform grid of points $Q_{uv}$ as query points for each 2D camera feature $F^S_{Cuv}$. Then, we learn offsets based on each point $q_{(u,v)} \in Q_{uv}$ to generate a set of most related points $P_{uv}$ around it. These learned points $P_{uv}$ are taken as reference points and keys used to sample the value features from the 2D camera features $F^S_{Cuv}$. With the optimization signals gradually improving attentive locations, the module facilitates the model to compensate for geometric transformations in the x-y plane.
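The UV-space compensation just described can be sketched as one deformable-attention-style sampling step: each grid query gathers K offset locations and averages their features with softmax weights. The sketch uses random offsets and nearest-neighbor gathering purely for illustration; the actual GCM learns the offsets and weights with multi-head deformable self-attention:

```python
import numpy as np

rng = np.random.default_rng(3)
C, H, W, K = 4, 8, 8, 4   # channels, feature map size, sampling points per query (toy)

feat = rng.standard_normal((H, W, C))   # camera-view feature map
qy, qx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")  # uniform query grid

# Offsets (random here; learned in practice) place K reference points per query.
off = rng.standard_normal((H, W, K, 2))
py = np.clip(np.rint(qy[..., None] + off[..., 0]), 0, H - 1).astype(int)
px = np.clip(np.rint(qx[..., None] + off[..., 1]), 0, W - 1).astype(int)

# Attention weights over the K sampled values (softmax over random logits here).
logits = rng.standard_normal((H, W, K))
w = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)

sampled = feat[py, px]                        # (H, W, K, C) values at reference points
out = (w[..., None] * sampled).sum(axis=2)    # (H, W, C) compensated feature map
```

The output keeps the input resolution; only where each location looks is adjusted, which is what lets the module correct small geometric misalignments without resampling the whole map.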
We apply standard multi-head attention, learning individual offsets for each head, which captures abundant information and improves feature representations for subsequent context learning, depth estimation, and 3D geometry inference. Similarly, we employ a BEV geometry compensation module after transforming the camera-view features to BEV features $F^S_{Cuv\text{-}bev}$, which is responsible for correcting the key feature locations in the x-z plane. By doing so, the geometry compensation in the two complementary 2D views can comprehensively improve the feature representation. Overall, the view projection with GCM used in the simulated LiDAR branch is formulated here, with reference to Eq. (2):

$$F^S_{Lbev} = \mathrm{Proj}_L(F^S_{Cuv}) = \mathrm{GC}_{bev}(\rho(\psi(\mathrm{GC}_{uv}(F^S_{Cuv})) \times \phi(\mathrm{GC}_{uv}(F^S_{Cuv})))), \tag{5}$$

where $\mathrm{GC}_{uv}(F^S_{Cuv}) = \mathrm{DeformAttn}(Q_{uv}, P_{uv}, F^S_{Cuv})$ and $\mathrm{GC}_{bev}(F^S_{Cuv\text{-}bev}) = \mathrm{DeformAttn}(Q_{bev}, P_{bev}, F^S_{Cuv\text{-}bev})$ denote the UV Geometry Compensation and BEV Geometry Compensation, respectively. $Q_{bev}$ and $P_{bev}$ are query and reference points generated for the BEV features $F^S_{Cuv\text{-}bev}$. To get the final simulated LiDAR feature for distillation, we also implement a simple yet effective object-aware mask $M$ to select the most informative features at the end of GCM. We generate masks in the BEV space from the ground truth center points and bounding boxes using a heatmap-like approach like BEVDistill (Chen et al. 2023). Therefore, the CMD loss is formulated as:

$$\mathcal{L}_{CMD} = \mathrm{MSE}(M \odot F^T_{Lbev}, M \odot F^S_{Lbev}), \tag{6}$$

where $\odot$ denotes the Hadamard product. The object-aware mask is a technique we utilize together with GCM to improve the ability to overcome the cross-modal gap in CMD, and we refrain from attributing it as our original contribution.

Multi-modal fusion Distillation

In light of the aligned architecture and workflow with the teacher model, the student model also produces multi-modal fusion features as well as detection predictions.
To make the fused feature and predictions consistent with those in the teacher model, we devise multi-modal distillation at both the feature level (MMD-F) and the prediction level (MMD-P). Owing to the proximity of the fusion module and the detection head, MMD-F is expected to distill highly useful multi-modal knowledge that directly contributes to the detection. It is implemented by aligning the fusion feature of the teacher model and the simulated fusion feature of the student model:

$$\mathcal{L}_{MMD\text{-}F} = \mathrm{MSE}(F^T_{Ubev}, F^S_{Ubev}). \tag{7}$$

After the fusion module, the fused feature in the student model is fed into the detector to output the detection results in the same way as the teacher model. Thus, we also employ MMD-P by taking the predictions from the teacher model as soft labels. We adopt the quality-aware prediction distillation loss $\mathcal{L}_{MMD\text{-}P}$ (Hong, Dai, and Ding 2022), which consists of the classification loss $\mathcal{L}_{cls}$ for object categories and the regression loss $\mathcal{L}_{reg}$ for 3D bounding boxes:

$$\mathcal{L}_{MMD\text{-}P} = \mathcal{L}_{reg} + \mathcal{L}_{cls} = \mathrm{SmoothL1}(P^T_B, P^S_B) \cdot s + \mathrm{QFL}(P^T_C, P^S_C) \cdot s, \tag{8}$$

where $P^T_B$ and $P^T_C$ (resp. $P^S_B$ and $P^S_C$) denote the bounding boxes and categories predicted by the teacher model (resp. the student model). $\mathrm{QFL}(\cdot)$ denotes the quality focal loss (Li et al. 2020). $s$ is a quality score used as the loss weight, obtained by measuring the IoU between the predictions and the ground truth to determine the confidence of the soft label.

| Methods | Modality | Backbone | Image Size | mAP↑ | NDS↑ | mATE↓ | mASE↓ | mAOE↓ | mAVE↓ | mAAE↓ |
| BEVFusion (Liang et al. 2022) | LC | VoxelNet SwinT | 448 × 800 | 67.9 | 71.0 | - | - | - | - | - |
| BEVFusion (Liu et al. 2023) | LC | VoxelNet SwinT | 256 × 704 | 68.5 | 71.4 | 28.6 | 25.3 | 30.0 | 25.4 | 18.6 |
| FCOS3D (Wang et al. 2021b) | C | R101 | 900 × 1600 | 29.5 | 37.2 | 80.6 | 26.8 | 51.1 | 113.1 | 17.0 |
| BEVDet (Huang et al. 2021) | C | R50 | 256 × 704 | 29.8 | 37.9 | 72.5 | 27.9 | 58.9 | 86.0 | 24.5 |
| PETR (Liu et al. 2022) | C | R50 | 384 × 1056 | 31.3 | 38.1 | 76.8 | 27.8 | 56.4 | 92.3 | 22.5 |
| DETR3D (Huang and Huang 2022) | C | R101 | 900 × 1600 | 34.9 | 43.4 | 71.6 | 26.8 | 37.9 | 84.2 | 20.0 |
| Set2Set (Li et al. 2022b) | C* | R50 | 900 × 1600 | 33.1 | 41.0 | - | - | - | - | - |
| MonoDistill (Chong et al. 2021) | C* | R50 | 900 × 1600 | 36.4 | 42.9 | - | - | - | - | - |
| UVTR (Li et al. 2022b) | C* | R50 | 900 × 1600 | 36.2 | 43.1 | - | - | - | - | - |
| TiG-BEV (Huang et al. 2022b) | C* | R50 | 256 × 704 | 33.1 | 41.1 | 67.8 | 27.1 | 58.9 | 78.4 | 21.8 |
| UniDistill (Zhou et al. 2023) | C* | R50 | 256 × 704 | 26.5 | 37.8 | - | - | - | - | - |
| BEVDistill (Chen et al. 2023) | C* | SwinT | 256 × 704 | 36.3 | 43.6 | 64.2 | 27.4 | 57.6 | 87.8 | 28.2 |
| BEVFusion-C (Liu et al. 2023) | C | SwinT | 256 × 704 | 35.6 | 41.2 | 66.8 | 27.3 | 56.1 | 89.6 | 25.9 |
| SimDistill | C* | SwinT | 256 × 704 | 40.4 | 45.3 | 52.6 | 27.5 | 60.7 | 80.5 | 27.3 |

Table 1: Quantitative comparisons on the nuScenes validation set. L and C in the second column denote the input modality, i.e., LiDAR and camera, while C* means using LiDAR for knowledge distillation during training.

Discussion

It is noteworthy that previous methods have not explored multi-modal fusion distillation due to the absence of a dedicated multi-modal architecture in the student model for aligning fusion features or predictions. Instead, these methods distill information solely by aligning the teacher model's fusion features or predictions to a single-modal student counterpart, which leads to subpar performance due to the modality gap. Furthermore, no studies have investigated the impact of comprehensive multi-modal distillation, including intra-modal, cross-modal, and multi-modal fusion distillation, simultaneously. Our SimDistill makes progress by effectively performing multi-modal fusion distillation through its simulated multi-modal architecture. This complements intra-modal and cross-modal distillation (Sec. 4.3), resulting in improved performance.

Training Objective

Apart from the above distillation losses, the student model is also optimized by the common loss of the 3D object detection task $\mathcal{L}_{det}$. The overall training objective $\mathcal{L}$ is defined as:

$$\mathcal{L} = \mathcal{L}_{IMD} + \mathcal{L}_{CMD} + \mathcal{L}_{MMD\text{-}F} + \mathcal{L}_{MMD\text{-}P} + \mathcal{L}_{det}.$$
(9)

Experiment

Experiment Setting

Datasets and Evaluation Metrics

We follow the common practice (Huang et al. 2021; Liu et al. 2023; Liang et al. 2022; Li et al. 2023b; Chen et al. 2023) to evaluate our method on the most challenging benchmark, i.e., nuScenes (Caesar et al. 2020). It comprises 700 scenes for training, 150 scenes for validation, and 150 scenes for testing. Each scene includes panoramic LiDAR data and surrounding camera images, which are synchronized to provide convenience for multi-modal research. The dataset comprises a total of 23 object categories, and 10 popular classes are considered for computing the final metrics. To align with the official evaluation, we adopt mean Average Precision (mAP) and the nuScenes detection score (NDS) as the main metrics, with five other metrics for reference.

Implementation Details

Our method is implemented with PyTorch using 8 NVIDIA A100 GPUs (40G memory), based on the MMDetection3D codebase (Contributors 2020). We adopt BEVFusion (Liu et al. 2023) as the default teacher model, which takes images with a size of 256 × 704 and a LiDAR point cloud with a voxel size of (0.075m, 0.075m, 0.2m) as input, and uses VoxelNet (Zhou and Tuzel 2018) and Swin-T (Liu et al. 2021) as backbones for the two modalities, respectively. During distillation, we utilize the official BEVFusion checkpoint, freeze the teacher model, and train the student model for 20 epochs with batch size 24. The backbone and input resolution are kept the same as BEVFusion-C in both our SimDistill and our competitor BEVDistill. More implementation details, ablation analysis, and visualizations can be found in the Appendices.

Main Results

We compare our SimDistill with state-of-the-art methods on the nuScenes validation set and present the results in Table 1. We group the methods according to the input modality and present the knowledge distillation-based methods in the bottom part (except for the baseline BEVFusion-C) for straightforward comparisons.
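As a sanity check on the headline metric, NDS can be recomputed from the mAP and true-positive-error columns of Table 1 using the official nuScenes definition (a 5x-weighted mAP combined with the five TP error metrics, each clipped to [0, 1]); plugging in SimDistill's row reproduces the reported 45.3 NDS:

```python
# nuScenes detection score (NDS), per the official nuScenes definition:
# NDS = (5 * mAP + sum over the five TP metrics of (1 - min(1, err))) / 10,
# where the TP metrics are mATE, mASE, mAOE, mAVE, mAAE.
def nds(m_ap, tp_errors):
    return (5.0 * m_ap + sum(1.0 - min(1.0, e) for e in tp_errors)) / 10.0

# SimDistill's row in Table 1 (values reported in percent): mAP 40.4 and
# TP errors (52.6, 27.5, 60.7, 80.5, 27.3).
score = nds(0.404, [0.526, 0.275, 0.607, 0.805, 0.273])   # -> 0.4534, i.e. 45.3 NDS
```

The clipping means any TP error above 1 (e.g. FCOS3D's 113.1% mAVE) contributes nothing, which is why camera-only methods with poor velocity estimates are penalized but not unboundedly so.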
From the table, we can see that fusion-based methods usually possess a stronger perception ability and achieve better performance. However, the high cost of LiDAR may restrict their practical usage. Compared with the baseline BEVFusion-C, SimDistill boosts the performance significantly by 4.8% mAP and 4.1% NDS, clearly validating the effectiveness of the proposed distillation method. Compared with the concurrent distillation methods BEVDistill and UniDistill, our SimDistill achieves much better performance under the same setting.

Ablation Studies

Why Choose Multi-modal Architectures?

To demonstrate the superiority of the proposed simulated multi-modal structure, we replace the multi-modal teacher BEVFusion and the simulated multi-modal student SimDistill with their single-modal counterparts BEVFusion-L (i.e., the LiDAR branch of BEVFusion) and BEVFusion-C, respectively. The results are presented in Table 2. We first investigate the influence of using a simulated multi-modal student. In models (a) and (b), we adopt the multi-modal teacher (BEVFusion) but distill the fusion feature to different student architectures. The experiment results show that the simulated multi-modal student (b) outperforms the single-modal one (a) with a clear gain of 2.4 in both mAP and NDS. We then change the teacher to a single-modal one (BEVFusion-L) to verify the performance of the student.

|   | Teacher | Student | Distillation | mAP↑ | NDS↑ |
| a | BEVFusion | BEVFusion-C | MMD-F | 35.94 | 41.75 |
| b | BEVFusion | SimDistill | MMD-F | 38.34 | 44.15 |
| c | BEVFusion-L | BEVFusion-C | CMD-v | 35.88 | 42.87 |
| d | BEVFusion-L | SimDistill | CMD-v | 36.80 | 42.79 |

Table 2: Ablation study of the model architecture. CMD-v is the vanilla version of CMD without using GCM here.
Although directly learning from a cross-modal teacher adversely affects performance due to the modality gap, the multi-modal student (d) still achieves better performance in mAP and comparable results in NDS compared with the single-modal student (c). The two groups of comparisons validate the superiority of using a multi-modal student. Besides, the experiments of (b) and (d) both directly distill the learned feature from the teacher model to the student, which validates the importance of using a multi-modal teacher, i.e., with a gain of 1.54% mAP and 1.36% NDS. In summary, it is crucial to employ multi-modal architectures for both teacher and student models to enhance knowledge transfer and achieve better performance. In addition, employing the proposed simulated multi-modal student model maintains the advantage of cost-effective camera-only deployment.

How Does Simulated Multi-modal Distillation Work?

To investigate the impact of distillation options, we perform ablation studies and summarize the results in Table 3. Model (a) denotes the baseline model with the proposed simulated multi-modal architecture without any knowledge distillation. We present the gains over Model (a) in the mAP and NDS columns. As shown in (b), (c), (e), and (f), employing IMD, vanilla CMD, MMD-F, and MMD-P on the baseline model leads to 1.43%, 1.09%, 2.63%, and 1.02% absolute gains in mAP, respectively, where MMD-F brings the largest gain owing to the rich multi-modality knowledge contained in the fusion features. Interestingly, while the simulated LiDAR branch should possess more accurate 3D geometry than the camera branch, IMD (b) produces a slightly larger gain than vanilla CMD (c). We attribute it to the modality gap between the real LiDAR features of the teacher and the simulated ones of the student.
After using the proposed GCM (i.e., Model (d)), we can see that it helps CMD achieve a gain of 4.12% mAP and 2.82% NDS over the baseline in (a), validating the effectiveness of GCM in overcoming the side effect of the modality gap during distillation. After incorporating all the components, including the simulated multi-modal architecture and all the distillation techniques, we get our SimDistill model in (g), which delivers the best performance of 40.40 mAP and 45.31 NDS, meanwhile achieving an improvement of 4.8% mAP and 4.1% NDS over the baseline model BEVFusion-C.

|   | IMD | CMD vanilla | CMD GCM | MMD-F | MMD-P | mAP↑ | NDS↑ |
| a |     |     |     |     |     | 35.71 (-) | 41.97 (-) |
| b | ✓   |     |     |     |     | 37.14 (+1.43) | 42.67 (+0.70) |
| c |     | ✓   |     |     |     | 36.80 (+1.09) | 42.79 (+0.82) |
| d |     | ✓   | ✓   |     |     | 39.83 (+4.12) | 44.79 (+2.82) |
| e |     |     |     | ✓   |     | 38.34 (+2.63) | 44.15 (+2.18) |
| f |     |     |     |     | ✓   | 36.73 (+1.02) | 42.52 (+0.55) |
| g | ✓   | ✓   | ✓   | ✓   | ✓   | 40.40 (+4.69) | 45.31 (+3.34) |

Table 3: Ablation study of different distillation options.

Model Efficiency

We compare the model efficiency with other representative methods in Table 4. Our method achieves an inference speed of 11.1 FPS on a single GPU, running much faster than BEVDistill and BEVFormer. It is comparable to BEVFusion-C but a bit slower than BEVDet, mainly due to the additional simulated LiDAR branch in the architecture. Nevertheless, SimDistill significantly outperforms other methods in terms of mAP and NDS.

| Methods | FPS | GFlops | mAP | NDS |
| BEVDet (Huang et al. 2021) | 15.6 | 215.3 | 31.2 | 39.2 |
| BEVFormer (Li et al. 2022c) | 2.4 | 1303.5 | 37.5 | 44.8 |
| BEVDistill (Chen et al. 2023) | 3.7 | 608.8 | 36.3 | 43.6 |
| BEVFusion-C (Liu et al. 2023) | 13.4 | 165.1 | 35.6 | 41.2 |
| SimDistill | 11.1 | 219.1 | 40.4 | 45.3 |

Table 4: Comparison of model efficiency.

Conclusion

In this paper, we propose a novel simulated multi-modal distillation method named SimDistill for multi-view BEV 3D object detection by carefully investigating the architecture design and effective distillation techniques.
We identify the importance of the multi-modal architecture for multi-modal knowledge distillation and devise a simulated multi-modal student model accordingly. Built upon it, we develop a novel simulated multi-modal distillation scheme that supports intra-modal, cross-modal, and multi-modal fusion knowledge distillation simultaneously. Experiments on the challenging nuScenes benchmark have validated the above findings and the superiority of the proposed distillation methods over state-of-the-art approaches. We believe SimDistill is compatible with other multi-modal teachers and diverse student models, which could lead to enhanced performance and remains a subject for future investigation.

Acknowledgments

This work was supported by Australian Research Council Project IH-180100002.

References

Bai, X.; Hu, Z.; Zhu, X.; Huang, Q.; Chen, Y.; Fu, H.; and Tai, C.-L. 2022. TransFusion: Robust lidar-camera fusion for 3d object detection with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2020. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Chen, C.; Chen, Z.; Zhang, J.; and Tao, D. 2022. SASA: Semantics-augmented set abstraction for point-based 3d object detection. In Proceedings of the AAAI Conference on Artificial Intelligence.

Chen, Z.; Li, Z.; Zhang, S.; Fang, L.; Jiang, Q.; and Zhao, F. 2023. BEVDistill: Cross-modal BEV distillation for multi-view 3D object detection. In The Eleventh International Conference on Learning Representations.

Cho, H.; Choi, J.; Baek, G.; and Hwang, W. 2023. itKD: Interchange transfer-based knowledge distillation for 3d object detection.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Chong, Z.; Ma, X.; Zhang, H.; Yue, Y.; Li, H.; Wang, Z.; and Ouyang, W. 2021. MonoDistill: Learning spatial features for monocular 3D object detection. In International Conference on Learning Representations.

Chu, X.; Deng, J.; Zhao, Y.; Ji, J.; Zhang, Y.; Li, H.; and Zhang, Y. 2023. OA-BEV: Bringing object awareness to bird's-eye-view representation for multi-camera 3D object detection. arXiv preprint arXiv:2301.05711.

Contributors, M. 2020. MMDetection3D: OpenMMLab next-generation platform for general 3D object detection. https://github.com/open-mmlab/mmdetection3d.

Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision.

Geiger, A.; Lenz, P.; and Urtasun, R. 2012. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Gou, J.; Yu, B.; Maybank, S. J.; and Tao, D. 2021. Knowledge distillation: A survey. International Journal of Computer Vision.

Graham, B.; Engelcke, M.; and Van Der Maaten, L. 2018. 3d semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

Hong, Y.; Dai, H.; and Ding, Y. 2022. Cross-modality knowledge distillation network for monocular 3D object detection. In Proceedings of the European Conference on Computer Vision.

Huang, J.; and Huang, G. 2022. BEVDet4D: Exploit temporal cues in multi-camera 3d object detection. arXiv preprint arXiv:2203.17054.

Huang, J.; Huang, G.; Zhu, Z.; and Du, D. 2021. BEVDet: High-performance multi-camera 3d object detection in bird-eye-view. arXiv preprint arXiv:2112.11790.

Huang, K.-C.; Wu, T.-H.; Su, H.-T.; and Hsu, W. H. 2022a. MonoDTR: Monocular 3d object detection with depth-aware transformer.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Huang, P.; Liu, L.; Zhang, R.; Zhang, S.; Xu, X.; Wang, B.; and Liu, G. 2022b. TiG-BEV: Multi-view BEV 3D object detection via target inner-geometry learning. arXiv preprint arXiv:2212.13979.

Lang, A. H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; and Beijbom, O. 2019. PointPillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Li, J.; Lu, M.; Liu, J.; Guo, Y.; Du, L.; and Zhang, S. 2022a. BEV-LGKD: A unified LiDAR-guided knowledge distillation framework for BEV 3D object detection. arXiv preprint arXiv:2212.00623.

Li, X.; Wang, W.; Wu, L.; Chen, S.; Hu, X.; Li, J.; Tang, J.; and Yang, J. 2020. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. Advances in Neural Information Processing Systems.

Li, Y.; Bao, H.; Ge, Z.; Yang, J.; Sun, J.; and Li, Z. 2023a. BEVStereo: Enhancing depth estimation in multi-view 3d object detection with temporal stereo. In Proceedings of the AAAI Conference on Artificial Intelligence.

Li, Y.; Chen, Y.; Qi, X.; Li, Z.; Sun, J.; and Jia, J. 2022b. Unifying voxel-based representation with transformer for 3d object detection. Advances in Neural Information Processing Systems.

Li, Y.; Ge, Z.; Yu, G.; Yang, J.; Wang, Z.; Shi, Y.; Sun, J.; and Li, Z. 2023b. BEVDepth: Acquisition of reliable depth for multi-view 3d object detection. Proceedings of the AAAI Conference on Artificial Intelligence.

Li, Z.; Wang, W.; Li, H.; Xie, E.; Sima, C.; Lu, T.; Qiao, Y.; and Dai, J. 2022c. BEVFormer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. In Proceedings of the European Conference on Computer Vision.

Liang, T.; Xie, H.; Yu, K.; Xia, Z.; Lin, Z.; Wang, Y.; Tang, T.; Wang, B.; and Tang, Z. 2022. BEVFusion: A simple and robust lidar-camera fusion framework.
Advances in Neural Information Processing Systems.

Liu, Y.; Wang, T.; Zhang, X.; and Sun, J. 2022. PETR: Position embedding transformation for multi-view 3d object detection. In Proceedings of the European Conference on Computer Vision.

Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision.

Liu, Z.; Tang, H.; Amini, A.; Yang, X.; Mao, H.; Rus, D.; and Han, S. 2023. BEVFusion: Multi-task multi-sensor fusion with unified bird's-eye view representation. In IEEE International Conference on Robotics and Automation.

Lu, Y.; Ma, X.; Yang, L.; Zhang, T.; Liu, Y.; Chu, Q.; Yan, J.; and Ouyang, W. 2021. Geometry uncertainty projection network for monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision.

Ma, X.; Zhang, Y.; Xu, D.; Zhou, D.; Yi, S.; Li, H.; and Ouyang, W. 2021. Delving into localization errors for monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Park, J.; Xu, C.; Yang, S.; Keutzer, K.; Kitani, K. M.; Tomizuka, M.; and Zhan, W. 2022. Time will tell: New outlooks and a baseline for temporal multi-view 3D object detection. In The Eleventh International Conference on Learning Representations.

Philion, J.; and Fidler, S. 2020. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In Proceedings of the European Conference on Computer Vision.

Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. PointNet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b.
Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems. Reading, C.; Harakeh, A.; Chae, J.; and Waslander, S. L. 2021. Categorical depth distribution network for monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Shi, S.; Wang, X.; and Li, H. 2019. Pointrcnn: 3d object proposal generation and detection from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Simonelli, A.; Bulo, S. R.; Porzi, L.; L´opez-Antequera, M.; and Kontschieder, P. 2019. Disentangling monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision. Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B.; et al. 2020. Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vora, S.; Lang, A. H.; Helou, B.; and Beijbom, O. 2020. Pointpainting: Sequential fusion for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Wang, C.; Ma, C.; Zhu, M.; and Yang, X. 2021a. Pointaugmenting: Cross-modal augmentation for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Wang, T.; Zhu, X.; Pang, J.; and Lin, D. 2021b. Fcos3d: Fully convolutional one-stage monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision. Wang, Y.; Fathi, A.; Kundu, A.; Ross, D. A.; Pantofaru, C.; Funkhouser, T.; and Solomon, J. 2020. Pillar-based object detection for autonomous driving. In Proceedings of the European Conference on Computer Vision. Wang, Z.; Min, C.; Ge, Z.; Li, Y.; Li, Z.; Yang, H.; and Huang, D. 2022. Sts: Surround-view temporal stereo for multi-view 3d detection. 
arXiv preprint arXiv:2208.10145. Xu, S.; Zhou, D.; Fang, J.; Yin, J.; Bin, Z.; and Zhang, L. 2021. Fusionpainting: Multimodal fusion with adaptive attention for 3d object detection. In IEEE International Intelligent Transportation Systems Conference. IEEE. Yan, J.; Liu, Y.; Sun, J.; Jia, F.; Li, S.; Wang, T.; and Zhang, X. 2023. Cross modal transformer via coordinates encoding for 3D object dectection. arXiv preprint arXiv:2301.01283. Yan, Y.; Mao, Y.; and Li, B. 2018. Second: Sparsely embedded convolutional detection. Sensors. Yang, J.; Shi, S.; Ding, R.; Wang, Z.; and Qi, X. 2022. Towards efficient 3d object detection with knowledge distillation. Advances in Neural Information Processing Systems. Yin, T.; Zhou, X.; and Krahenbuhl, P. 2021. Center-based 3d object detection and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Zhang, J.; and Tao, D. 2020. Empowering things with intelligence: a survey of the progress, challenges, and opportunities in artificial intelligence of things. IEEE Internet of Things Journal. Zhang, L.; Dong, R.; Tai, H.-S.; and Ma, K. 2023. Pointdistiller: Structured knowledge distillation towards efficient and compact 3d detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. Zhang, L.; Shi, Y.; Tai, H.-S.; Zhang, Z.; He, Y.; Wang, K.; and Ma, K. 2022. Structured knowledge distillation towards efficient and compact multi-view 3D fetection. arXiv preprint arXiv:2211.08398. Zhao, H.; Zhang, J.; Zhang, S.; and Tao, D. 2022. Jperceiver: Joint perception network for depth, pose and layout estimation in driving scenes. In European Conference on Computer Vision. Zhou, S.; Liu, W.; Hu, C.; Zhou, S.; and Ma, C. 2023. UniDistill: A universal cross-modality knowledge distillation framework for 3D object detection in bird’s-eye view. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Zhou, Y.; and Tuzel, O. 2018. 
Voxelnet: End-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable DETR: Deformable Transformers for End-to-End Object Detection. In International Conference on Learning Representations. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7468
2024
829
18,661
Local-Global Multi-Modal Distillation for Weakly-Supervised Temporal Video Grounding

Peijun Bao1, Yong Xia*2, Wenhan Yang3, Boon Poh Ng1, Meng Hwa Er1, Alex C. Kot1
1Nanyang Technological University 2Northwestern Polytechnical University 3Peng Cheng Laboratory
peijun001@e.ntu.edu.sg, yxia@nwpu.edu.cn, yangwh@pcl.ac.cn, {ebpng, emher, eackot}@ntu.edu.sg

Abstract

This paper for the first time leverages multi-modal videos for weakly-supervised temporal video grounding. As labeling the video moment is labor-intensive and subjective, weakly-supervised approaches have gained increasing attention in recent years. However, these approaches could inherently compromise performance due to inadequate supervision. To tackle this challenge, we for the first time exploit complementary information extracted from multi-modal videos (e.g., RGB frames, optical flows), which naturally introduces richer supervision in the weakly-supervised context. Our motivation is that by integrating different modalities of the videos, the model learns from synergistic supervision and can thereby attain superior generalization capability. However, handling multiple modalities would also inevitably introduce additional computational overhead, and might become inapplicable if a particular modality is inaccessible. To solve this issue, we adopt a novel route: building a multi-modal distillation algorithm that capitalizes on multi-modal knowledge as supervision for model training, while still being able to work with only single-modal input during inference. As such, we can utilize the benefits brought by the complementary nature of multiple modalities, without undermining the applicability in practical scenarios. Specifically, we first propose a cross-modal mutual learning framework and train a sophisticated teacher model to learn collaboratively from the multi-modal videos.
Then we identify two sorts of knowledge from the teacher model, i.e., temporal boundaries and the semantic activation map. We further devise a local-global distillation algorithm to transfer this knowledge to a student model with single-modal input at both the local and global levels. Extensive experiments on large-scale datasets demonstrate that our method achieves state-of-the-art performance with/without multi-modal inputs.

Introduction

Given a natural language query and an untrimmed video, the task of temporal video grounding (Gao et al. 2017; Krishna et al. 2017) aims to temporally localize the video moment described by the language query. It is one of the most fundamental tasks in video understanding and has a wide range of real-world applications (Qi et al. 2021; Bao et al. 2023; Sreenu and Durai 2019; Zhu et al. 2021), such as video localization, video summarization, as well as video surveillance analysis.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: 1) Due to lacking temporal boundary annotations, weakly-supervised temporal video grounding faces ineffective supervision compared to fully-supervised scenarios. 2) We alleviate this issue by exploiting complementary multi-modal videos as an auxiliary supervisory signal. We propose a local-global multi-modal distillation algorithm that transfers the multi-modal knowledge from the teacher model to a single-modal student model at local and global levels.
While achieving remarkable performance, fully-supervised temporal video grounding (Liu et al. 2018; Zhang et al. 2019a,b, 2020a; Bao, Zheng, and Mu 2021) necessitates laborious manual annotations of temporal moment boundaries. Consequently, the weakly-supervised setting (illustrated in Fig 1) has recently received growing attention (Chen et al. 2020; Tan et al. 2021; Lin et al. 2020; Zheng et al. 2022a,b), where only paired videos and natural language queries are required during training. However, the grounding capability of the existing weakly-supervised methods is still unsatisfactory and lags behind the fully-supervised counterparts, because the incomplete annotations do not provide sufficient supervisory signals. Different from the prevailing works on weakly-supervised learning, which only consider RGB frames for video features (Gao et al. 2019; Chen et al. 2020; Lin et al. 2020; Tan et al. 2021; Zheng et al. 2022a,b), we explore the potential of using different modalities of the videos (e.g., RGB frames, optical flow, audio), whose complementary information can naturally result in the improvement of the grounding accuracy. For instance, the features of RGB frames can capture useful appearances to align the objects and scenes between the sentence and video, while the explicit modeling of motion is absent. Besides, they are also sensitive to occlusions and lighting conditions. Comparatively, optical flow features can complement this with richer motion information, which facilitates action understanding and improved robustness to occlusions and lighting changes. Therefore, intuitively, it is beneficial to utilize the synergistic cues from the multiple modalities of the videos instead of only tackling RGB frames.
However, while integrating multiple modalities† can improve the generalization capability and robustness of the model, it also brings about potential negative impacts. First, the additionally introduced model parameters lead to increased computational costs. Second, the use of multiple modalities limits the practicability of the method, both for computational reasons (e.g., the heavy computational burden of optical flows (Dosovitskiy et al. 2015; Lucas and Kanade 1981)) and from the perspective of data availability (e.g., the audio modality is often missing in surveillance videos). To this end, we develop a novel technical route to exploit multi-modal data more effectively and flexibly: 1) training the model with the complementary multi-modal input; 2) inference using only single-modal data. As such, the method successfully leads to an improved modeling capacity while maintaining practicality. As illustrated in Fig 1, our idea is to first train a sophisticated teacher model to collaboratively learn from the multi-modal videos. Subsequently, this teacher model is treated as a pseudo annotator to provide a student model with the ground truth of temporal boundaries, as well as the underlying semantic structure between the video and language. Because the student model only digests single-modal videos as input, it maintains the computational cost and eliminates the need for multi-modal videos during inference. To the best of our knowledge, this is the first attempt to distill multi-modal knowledge to alleviate the challenge of weak supervision in the literature of temporal video grounding. Compared with conventional knowledge distillation in the fully / semi-supervised setting (Hinton, Vinyals, and Dean 2015; Tarvainen and Valpola 2017; Qiao et al. 2018), our setting is more difficult, as the insufficient supervisory signals from incomplete annotations in our weakly-supervised context inherently pose a challenge.
Specifically, 1) we first devise a cross-modal mutual learning framework to train the teacher model with multi-modal video input. The supplemental cues from different modality sources are leveraged to explicitly compensate for the errors of each single modality. 2) We then identify two sorts of knowledge from the teacher model, i.e., temporal boundaries and the semantic activation map. And we propose a multi-modal distillation algorithm to transfer this knowledge to a student model with single-modal input. At the local level, the semantic activation maps, which denote the underlying similarity of video snippets and language, are enforced to be consistent between the teacher and student model. At the global level, the predictions of temporal boundaries from the teacher model are regarded as pseudo labels to train the student model. In this way, the student model can exploit the extra knowledge from the multi-modal videos to handle the issue of weak supervisory signals, while keeping the single-modal videos as the input. 3) In addition, we propose a local-global contrastive learning algorithm for a single-modal baseline, where local and global levels of contrastive learning are devised to align the semantics of language and videos. This single-modal baseline model can still outperform state-of-the-art weakly-supervised methods even without touching any multi-modal videos during training or inference.

†For clarity, single- / multi-modality in this paper refers specifically to the videos, although temporal video grounding itself is a multi-modal task by definition.
2) As a byproduct, we also for the first time explore weakly-supervised temporal video grounding with the input of multi-modal videos. A mutual learning algorithm is crafted to collaboratively learn from different modality sources and compensate each other for reduced grounding errors. 3) We design a novel single-modal baseline with local-global contrastive learning, avoiding the utilization of multi-modal videos in either training or inference. 4) Extensive experiments on two large-scale datasets show that our methods achieve state-of-the-art results, regardless of whether multi-modal inputs are employed.

Related Works

Fully-supervised temporal video grounding. The task of temporal video grounding was first introduced by Gao et al. (2017); it aims to determine the start and end time points of the moment described by a query sentence. Liu et al. (2018) apply an attention mechanism to highlight the crucial parts of visual features. An event propagation network is developed in (Bao, Zheng, and Mu 2021) to localize video moments that are semantically related and temporally coordinated. While obtaining promising performance (Mun, Cho, and Han 2020; Wang, Ma, and Jiang 2020; Bao and Mu 2022; Zhang et al. 2019a, 2020a), these fully-supervised methods rely on labor-intensive annotations of the temporal boundaries. Weakly-supervised temporal video grounding. Existing works (Gao et al. 2019; Zheng et al. 2022a,b; Bao et al. 2024; Chen et al. 2020; Lin et al. 2020; Tan et al. 2021) on weakly-supervised temporal video grounding take the RGB frames of the video as the input. Early works (Mithun, Paul, and Roy-Chowdhury 2019; Tan et al. 2021) use joint visual-semantic embeddings and text-guided attention to avoid laborious temporal boundary annotations. Recently, Zheng et al. (2022a) design contrastive proposal learning to distinguish the positive video segments from the highly confusing ones within the same video.
Different from existing works that only consider RGB frames, we capitalize on synergistic multi-modal videos as auxiliary training guidance to handle the dilemma of incomplete annotations.

Figure 2: Overview of Local-Global Multi-Modal Distillation (MMDist). It comprises 1) a single-modal baseline using local-global contrastive learning, 2) a single-modal student model with a multi-modal distillation algorithm at local and global level, and 3) a multi-modal teacher model via cross-modal mutual learning. The proposal candidates that are in dark green represent the ones predicted as positive.

Multi-modal temporal video grounding. The only work in temporal video grounding that employs multi-modal videos is (Chen, Tsai, and Yang 2021). Their motivation is concentrated at the feature level: using multi-modal videos to augment the feature representation in the fully-supervised setting. We highlight that our motivation and formulation in the weakly-supervised context are distinct from theirs. Our goal in using multi-modal videos lies at the supervision level, i.e.,
addressing the deficient supervision problem by taking multi-modal videos as auxiliary supervision. This particular problem is unique to our weakly-supervised scenario and has not appeared in the fully-supervised counterpart. Besides, our formulation diverges from (Chen, Tsai, and Yang 2021) in that we only take multi-modal videos as extra supervision and do not require the multi-modal input during inference. Knowledge distillation. Knowledge distillation was originally introduced in (Hinton, Vinyals, and Dean 2015) to transfer the knowledge acquired by a large, complex model to a smaller, more efficient model. In recent years, knowledge distillation has been further applied to domain adaptation (Chen et al. 2019), zero-shot learning (Nayak et al. 2019), and multi-modal learning (Gupta, Hoffman, and Malik 2015; Wang et al. 2020). The works most related to ours are (Yu, Liu, and Chan 2021; Garcia, Morerio, and Murino 2018), which transfer knowledge of skeleton (Yu, Liu, and Chan 2021) or depth frames (Garcia, Morerio, and Murino 2018) to a student network of the RGB modality respectively. In contrast to them, we focus on temporal grounding, and the identified local and global semantic knowledge to transfer is specific to our task.

Local-Global Multi-Modal Distillation

Method Overview

The proposed method Local-Global Multi-Modal Distillation (MMDist) explores leveraging multi-modal videos for weakly-supervised temporal video grounding (TVG). Our goal is not only to enhance the model with multi-modal input but further to take the multi-modal videos as auxiliary supervisory guidance for training the single-modal model, with the anticipation that it can mitigate the issue of deficient supervision. As illustrated in Fig 2, our method consists of three parts: a single-modal baseline, a multi-modal teacher model, and a single-modal student model. 1) The single-modal baseline takes only single-modal videos as input.
We propose local and global contrastive learning to align the semantic content of videos and sentences, simultaneously considering both local and global viewpoints. 2) The multi-modal teacher model collaboratively learns from multiple modality sources in videos. We devise cross-modal mutual learning to enforce the consistency of the semantic activation maps across different modalities. For each modality of the video, we first compute the semantic activation map between the video snippets and the query sentence. Then the discrepancies arising from one modality are compensated for through the integration of the other, leading to enhanced overall performance and error mitigation. 3) The single-modal student model has the same network architecture design as the baseline model, but it receives additional supervision from the teacher model during training. More specifically, the multi-modal teacher model predicts more accurate temporal boundaries, whose ground truth is unknown in a weakly-supervised learning context. Also, the teacher model provides a better estimation of semantic activation maps, unveiling the intrinsic semantic relationship between language and videos. To this end, we design global-level and local-level distillation algorithms, which encourage the student model to mimic the predictions of temporal boundaries and semantic activation maps respectively. The student model is then trained with the supervisory signals from the multi-modal videos while still keeping the single-modal videos as the input during the inference stage. Here we highlight our innovations. 1) We design local-global contrastive learning for the single-modal baseline. Note that this baseline can beat state-of-the-art methods, without touching multi-modal videos during both training and inference.
2) Our student model is the first in the literature to capitalize on multi-modal videos to handle the insufficient-supervision obstacle. And we craft a multi-modal distillation algorithm to distill multi-modal knowledge at both local and global scopes. 3) A novel cross-modal mutual learning framework is proposed for the teacher model to mutually compensate for the errors introduced by any single modality.

Contrastive Learning at Local and Global Level

The single-modal baseline aims to localize the temporal moment described in the sentence by using the single-modal video input in both training and testing. Previous approaches either solely emphasize semantic alignment between the overall proposal and language (Lin et al. 2020; Zheng et al. 2022a,b), i.e., on a global scale, or specifically tackle the local similarity between the video snippets and sentence (Tan et al. 2021; Chen et al. 2020). However, local and global alignment can capture the underlying semantic structure and relationships between the sentence and video from different perspectives. Both of them serve to facilitate multi-modal knowledge transfer in the subsequent stages, thus establishing a foundational framework for the ensuing processes of local and global distillation. To this end, we propose to apply contrastive learning to simultaneously cater to both local and global scopes, formulating local-global contrastive learning. Global contrastive learning. Our global contrastive learning module is similar to the CPL network (Zheng et al. 2022b), which encompasses a proposal generator and a sentence reconstructor. We use the proposal generator to generate a series of proposal candidates. These proposal candidates are defined by the center and width as $(c_k, w_k)$, where $k = 1 \ldots K$ and $K$ is the number of proposal candidates. Then a transformer encoder as in CPL extracts the visual feature for the $k$-th proposal as $v_k$ and the sentence feature as $q$, with each feature vector having a dimension of $d$.
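For illustration, a candidate's $(c_k, w_k)$ parameterization can be mapped to an explicit temporal interval. A minimal plain-Python sketch; the normalized-coordinate and clipping conventions here are our assumptions, not stated in the paper:

```python
def proposal_to_boundary(center, width):
    """Convert a (center, width) proposal, both in normalized [0, 1]
    video coordinates, to a clipped (start, end) interval."""
    start = max(0.0, center - width / 2.0)
    end = min(1.0, center + width / 2.0)
    return start, end

# A proposal centered at the middle of the video covering 40% of it.
start, end = proposal_to_boundary(0.5, 0.4)
```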
The details of the network architecture are omitted here and can be found in (Zheng et al. 2022b). Then we randomly mask $M$ words $w_i^m$ ($i = 1 \ldots W$) in the sentence and enforce the reconstructor to reconstruct the masked words based on the video proposals, where $W$ represents the number of words in the sentence. The reconstruction error is formulated as

$L_{rec} = \sum_{i=1}^{W} L_{ce}(w_i^m)$  (1)

The proposal that semantically matches the sentence query is regarded as a positive proposal, while the whole video is considered as a negative one. The positive proposal is assumed to have a lower reconstruction error of the masked words than the negative proposal. We can heuristically select the positive proposal as the one $k^*$ with the minimum reconstruction error

$k^* = \arg\min_{k = 1 \ldots K} L_{rec}[k]$  (2)

The global contrastive learning objective $L^B_{global}$ is formulated as

$L^B_{global} = L_{rec}[k^*] + L^{full}_{rec} + \max(0,\ L_{rec}[k^*] - L^{full}_{rec} + \xi_{full})$  (3)

where the reconstruction losses of the positive proposal and the full video are contrasted with a margin of $\xi_{full}$, and $L^{full}_{rec}$ denotes the reconstruction loss of the full video. Local contrastive learning. Specifically, we first enhance the local information of the video snippet features $V \in \mathbb{R}^{L \times d}$ by applying a sequence of convolutional layers accompanied by the ReLU activation function, formulating context-enhanced local features $\hat{V} \in \mathbb{R}^{L \times d}$. Here $L$ indicates the video snippet number and $d$ is the channel dimension of the video features. Then we compute the semantic activation map $m \in \mathbb{R}^{L \times 1}$, which represents the semantic similarity between the video snippets and sentence, as

$m_l = \frac{\hat{V}_l \cdot q}{\|\hat{V}_l\| \cdot \|q\|}$  (4)

where $m_l$ signifies the value of the semantic activation map for the $l$-th video snippet and $q$ denotes the sentence feature. Because the video is untrimmed, the foreground features relevant to the query sentence are intertwined with unrelated background elements.
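To make the computation concrete, here is a minimal plain-Python sketch of the semantic activation map (Eq. 4), together with the top-$L_T$ pooling and InfoNCE-style objective it feeds into (Eqs. 5-7, defined in the following paragraph). Function names are ours, and no batching or learned encoders are modeled:

```python
import math

def activation_map(snippets, query):
    """Eq. 4: cosine similarity between each context-enhanced snippet
    feature and the sentence feature."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    return [cos(v, query) for v in snippets]

def video_sentence_similarity(m, top_lt):
    """Eq. 5: average of the top-L_T activation values, which suppresses
    background snippets in the untrimmed video."""
    top = sorted(m, reverse=True)[:top_lt]
    return sum(top) / top_lt

def local_contrastive_loss(sim, tau=0.1):
    """Eqs. 6-7: InfoNCE over an N x N similarity matrix sim[i][j]
    between the i-th video and the j-th sentence in the batch."""
    n = len(sim)
    loss = 0.0
    for i in range(n):
        denom = sum(math.exp(sim[i][j] / tau) for j in range(n))
        loss += -math.log(math.exp(sim[i][i] / tau) / denom)
    return loss / n
```

As expected of a contrastive objective, the loss is small when matched video-sentence pairs score higher than mismatched ones, and large otherwise.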
For a more accurate estimation of the similarity $l_{ij}$ between the $i$-th video and $j$-th sentence in a training batch, we adaptively select the top $L_T$ values of $m^{ij}_l$ and take their average, formulated as

$l_{ij} = \frac{1}{L_T} \sum_{l=1}^{L_T} \tilde{m}^{ij}_l$  (5)

where $\tilde{m}^{ij}$ is a rearranged version of $m^{ij}$, sorted in descending order. Local contrastive learning encourages the model to maximize the similarity between the positive video-sentence pairs while minimizing it for the mismatched negative pairs. To achieve this, we first compute the probability $p_i$ that the $i$-th video matches the $i$-th sentence

$p_i = \frac{\exp(l_{ii} / \tau)}{\sum_{j=1}^{N} \exp(l_{ij} / \tau)}$  (6)

where $\tau$ is the temperature hyperparameter and $N$ denotes the batch size. Then we can define the loss function of local contrastive learning $L^B_{local}$ as

$L^B_{local} = -\frac{1}{N} \sum_{j=1}^{N} \log p_j$  (7)

Local-global contrastive learning. The final objective function for local-global contrastive learning is formulated as

$L^B = L^B_{global} + \alpha L^B_{local}$  (8)

which jointly trains local and global contrastive learning. Here $\alpha$ is a weight hyperparameter to balance $L^B_{global}$ and $L^B_{local}$. The final score for the proposal $p$ with start and end points $(p_s, p_e)$ is computed from the local / global contrastive learning branches as

$s_p = \gamma \frac{\sum_{l=p_s}^{p_e} m^{ii}_l}{p_e - p_s + 1} - r_p$  (9)

where $\gamma$ is a weight hyperparameter, $m^{ii}$ signifies the semantic activation map of the $i$-th video and its query sentence, and $r_p$ indicates the reconstruction error for proposal $p$ as defined in Eq. 1. The proposal with the maximum score among the candidates is selected as the final prediction.

Multi-Modal Distillation at Local and Global Level

Assume that one could train a powerful multi-modal model for weakly-supervised temporal video grounding (detailed in the subsection "Cross-Modal Mutual Learning").
Thanks to utilizing auxiliary information from different modalities, the multi-modal model enjoys better localization accuracy and generalization capability than the single-modal one. But it also suffers from greater computational complexity and relies on multiple input modalities, which might not be available in real-world applications. To alleviate this obstacle, we regard the multi-modal model as a teacher model $T$ and transfer its multi-modal knowledge to a single-modal student model $S$. The superiority of such multi-modal distillation lies in its ability to train the student model using the supervision of multiple modalities, while maintaining computational efficiency and taking single-modal input. Such a distillation paradigm can effectively cope with the deficient-supervision obstacle in the weakly-supervised setting. We identify two sorts of multi-modal knowledge that are specific to our task, i.e., knowledge of temporal boundaries at the global level, and knowledge of semantic activation maps at the local level. Correspondingly, a multi-modal distillation algorithm constituted by global-level distillation and local-level distillation is crafted to transfer these two sorts of knowledge respectively. Global-level distillation. In the weakly-supervised scenario, only the video-sentence pairs are provided for training, and the ground truth temporal boundaries are not available. The multi-modal teacher model enjoys the advantage of accuracy and robustness in making global-level predictions of temporal boundaries. Therefore, we treat the predictions from the teacher model as pseudo-labels for the student model. Assume that the teacher model selects the $k_T$-th proposal candidate as the prediction. In the design of the single-modal baseline, we heuristically choose the proposal candidate with minimum reconstruction loss as the potential ground truth proposal. However, such a selection is often inaccurate due to the lack of sufficient training supervision.
So for the student model, instead, we explicitly set the prediction from the teacher model, i.e., the $k_T$-th proposal candidate, as the pseudo ground truth to train the student model. The global-level distillation loss $L^S_{global}$ is formulated as

$L^S_{global} = L_{global}[k_T], \quad k_T = \arg\max_k s^T_k$  (10)

where $s^T_k$ is the prediction score for the $k$-th proposal candidate evaluated by the teacher model, and $L_{global}$ is the global contrastive learning loss function defined as in the single-modal baseline. Local-level distillation. The semantic activation map $m \in \mathbb{R}^{L \times 1}$ is an intermediate output that estimates the similarity of the query sentence and each snippet of the video at the local level. Unlike the global-level knowledge of temporal boundaries, the local-level knowledge of the activation map provides a deeper understanding of the underlying data structure and relationships between the language and video. Therefore, mimicking the semantic activation map provides valuable guidance to transfer the multi-modal knowledge from the teacher model to the student model, resulting in improved generalization capabilities for the student model without inputs of multi-modal videos. To achieve this, we devise the local-level distillation loss $L^S_{local}$ as a consensus of semantic activation between the teacher and student model:

$L^S_{local} = \varphi(m^S, m^T)$  (11)

where $\varphi$ is a distance function on the activation map, such as the L1 or L2 norm. The final loss $L^S$ to train the single-modal student model $S$ consists of both the distillation loss and the original loss for the baseline model, written as

$L^S = L^B + \beta (L^S_{global} + \alpha L^S_{local})$  (12)

where $\beta$ is a hyperparameter to balance the weight between the distillation and baseline losses.

Cross-Modal Mutual Learning

This subsection describes the cross-modal mutual learning algorithm for the multi-modal teacher model. The teacher model $T$ digests inputs of multi-modal video features, denoted as $V_1, V_2 \in \mathbb{R}^{L \times d}$.
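As a concrete illustration before detailing the teacher's training, the student's distillation objective (Eqs. 10-12) can be sketched as follows. Choosing the L1 norm for $\varphi$ is one of the two options the paper allows, and the helper names are ours:

```python
def global_distillation_target(teacher_scores):
    """Eq. 10: the proposal k_T scored highest by the teacher becomes
    the pseudo ground-truth proposal for the student."""
    return max(range(len(teacher_scores)), key=lambda k: teacher_scores[k])

def local_distillation_loss(m_student, m_teacher):
    """Eq. 11 with phi chosen as the L1 distance: match the student's
    semantic activation map to the teacher's."""
    return sum(abs(s - t) for s, t in zip(m_student, m_teacher))

def student_loss(l_baseline, l_global, l_local, alpha, beta):
    """Eq. 12: total student objective combining the baseline loss with
    the weighted global- and local-level distillation terms."""
    return l_baseline + beta * (l_global + alpha * l_local)
```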
For the global contrastive module and the proposal generator, the video features of the different modalities are early-fused by concatenation. For the local contrastive module, we first generate the semantic activation maps of the two modalities as m_1, m_2 ∈ R^{L×1}, respectively. The final semantic activation map m^T of the teacher model T is the average of the two modalities:

m^T = (m_1 + m_2) / 2  (13)

Note that different modalities contain complementary information and can thus compensate for each other's errors. To enable collaborative learning from different modalities, we design a cross-modal mutual learning objective, where discrepancies arising from one modality can be compensated for through the integration of the supplemental modality. In more detail, for the semantic activation map of one modality, we regard the map from the other modality as a reference. We then enforce consistency between each semantic activation map and its reference, formulated as

L_mutual = ϕ(m_1, δ(m_2)) + ϕ(m_2, δ(m_1))  (14)

where ϕ represents a distance function on two vectors, such as the L1 or L2 norm, and δ denotes the stop-gradient operation.

Experiments
Datasets and Evaluation Metrics
We validate the performance of the proposed methods against state-of-the-art approaches on two large-scale datasets: 1) Charades-STA (Gao et al. 2017) includes 9,848 videos of daily indoor activities. The average length of a sentence query is 8.6 words, and the average duration of a video is 29.8 seconds. It was originally designed for action recognition / localization (Sigurdsson et al. 2016), and later extended by Gao et al. (2017) with language descriptions for temporal video grounding. 2) ActivityNet Captions (Krishna et al. 2017) consists of 19,290 untrimmed videos, whose contents are diverse and open. The average duration of a video is 117.74 seconds and the average length of a description is 13.16 words. On average, each video contains 2.4 annotated moments with a duration of 8.2 seconds.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 742

Table 1: Comparisons with state-of-the-art methods on two large-scale datasets.

Method                            | Charades-STA          | ActivityNet Captions
                                  | R@0.3  R@0.5  R@0.7   | R@0.1  R@0.3  R@0.5
SCN (Lin et al. 2020)             | 42.96  23.58   9.97   | 71.48  47.23  29.22
BAR (Wu et al. 2020)              | 44.97  27.04  12.23   |   −    49.03  30.73
MARN (Song et al. 2020)           | 48.55  31.94  14.81   |   −    47.01  29.95
RTBPN (Zhang et al. 2020b)        | 60.04  32.36  13.24   | 73.73  49.77  29.63
CCL (Zhang et al. 2020c)          |   −    33.21  15.68   |   −    50.12  31.07
WSTAN (Wang et al. 2022)          | 43.39  29.35  12.28   | 79.78  52.45  30.01
LCNet (Yang et al. 2021)          | 59.60  39.19  18.87   | 78.58  48.49  26.33
VCA (Wang, Chen, and Jiang 2021)  | 58.58  38.13  19.57   | 67.96  50.45  31.00
CPL (Zheng et al. 2022b)          | 66.40  49.24  22.39   | 79.86  53.67  31.24
MMDist Teacher                    | 70.11  54.72  26.00   | 82.89  58.53  32.98
MMDist Baseline                   | 67.26  51.58  24.22   | 82.27  56.92  31.80
MMDist Student                    | 68.90  53.29  25.27   | 83.11  58.69  32.52

Following previous works (Gao et al. 2017; Lin et al. 2020; Zheng et al. 2022b,a), we adopt the evaluation metric "R@m" to measure the grounding accuracy of our method. Specifically, we calculate the Intersection over Union (IoU) between the predicted temporal moment and the ground truth. "R@m" is then defined as the percentage of language queries with correct grounding results, i.e., whose IoU is larger than m. As in previous works, we report results with m = {0.3, 0.5, 0.7} on the Charades-STA dataset, and m = {0.1, 0.3, 0.5} on the ActivityNet-Captions dataset.

Implementation Details
We consider RGB frames and optical flow as the multi-modalities of the input videos. The I3D network (Carreira and Zisserman 2017) and the C3D network (Tran et al. 2015) are used to extract RGB features for Charades-STA and ActivityNet-Captions, respectively. The TV-L1 algorithm (Zach, Pock, and Bischof 2007) and the I3D network are applied to compute the optical flow features.
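To make the "R@m" protocol described above concrete, here is a minimal NumPy sketch of the IoU and recall computation. Moments are represented as (start, end) pairs in seconds; the function names are ours.

```python
import numpy as np

def temporal_iou(pred, gt):
    """IoU of two temporal moments, each given as a (start, end) pair."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_m(preds, gts, m):
    """'R@m': percentage of queries whose prediction has IoU larger than m."""
    hits = [temporal_iou(p, g) > m for p, g in zip(preds, gts)]
    return 100.0 * float(np.mean(hits))
```

Following the paper's definition, the threshold is strict ("IoU being larger than m"), so a prediction whose IoU exactly equals m does not count as a hit.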
For the query sentence, we use pre-trained GloVe word2vec (Pennington, Socher, and Manning 2014) to extract word features. We set the maximum description length to 20 on both datasets. The vocabulary size is 8000 on ActivityNet-Captions and 1111 on Charades-STA. We mask 1/3 of the words in the query sentence for reconstruction. The dimension of the hidden state d for both language and visual features is set to 256. The number of video snippets L is resampled to 200 on both datasets. We use the Adam optimizer (Kingma and Ba 2014) for model training with a batch size of 32. For multi-modal distillation, we first train the teacher model for 15 epochs with a learning rate of 0.00035, and then distill it to the student model for another 15 epochs with a learning rate of 0.0005. The training of the single-modal baseline is independent of the teacher / student models; its number of training epochs and learning rate are set to 15 and 0.0004, respectively. The hyperparameters α, β, and γ are set to 4.5, 0.9, and 3.0, respectively.

Table 2: The difference between the variants of our models.

Variants         | multi-modality | distillation
MMDist Teacher   |       ✓        |
MMDist Baseline  |                |
MMDist Student   |                |      ✓

Performance Comparisons
Our method has three variant models, i.e., a multi-modal teacher model (MMDist Teacher), a single-modal baseline (MMDist Baseline), and a single-modal student model (MMDist Student). Table 2 presents their distinctions in the training and testing settings. While MMDist Teacher takes multi-modal videos of RGB frames and optical flow as input, the other two models only consume single-modal videos, i.e., RGB frames. MMDist Student differs from the Baseline in that it exploits the distillation algorithm to learn from the teacher model. We verify the capability of the proposed methods on two widely-used datasets, i.e., Charades-STA and ActivityNet-Captions.
Table 3: Ablation study on the baseline.

Method               | R@0.3 | R@0.5 | R@0.7
Baseline w/o l-cont  | 65.83 | 50.54 | 23.31
Baseline w/o g-cont  | 60.42 | 44.36 | 20.80
Baseline full        | 67.26 | 51.58 | 24.22

Table 4: Ablation study on our student model.

Method             | R@0.3 | R@0.5 | R@0.7
Student baseline   | 67.26 | 51.58 | 24.22
Student w/o l-dis  | 68.21 | 51.68 | 24.45
Student w/o g-dis  | 68.02 | 52.85 | 24.98
Student full       | 68.90 | 53.29 | 25.27

Table 5: Ablation study on the teacher model.

Method              | R@0.3 | R@0.5 | R@0.7
Teacher baseline    | 67.92 | 52.31 | 24.64
Teacher w/o mutual  | 69.73 | 53.86 | 24.76
Teacher full        | 70.11 | 54.72 | 26.00

Table 1 illustrates the performance comparison of our methods with previous weakly-supervised TVG methods. All three proposed models beat the state-of-the-art methods by a clear margin. We denote the previous best method, Gaussian-based Contrastive Proposal Learning (Zheng et al. 2022b), as CPL. The details of the comparison are summarized as follows. 1) MMDist Baseline is better than CPL. The MMDist Baseline model exclusively employs RGB frames from videos and refrains from incorporating any multi-modal videos throughout its training and testing phases. This ensures a fair comparison with state-of-the-art methods such as CPL. As indicated, MMDist Baseline consistently surpasses the previous best methods in all evaluation metrics. For instance, our proposed baseline achieves about 2 points higher than CPL in the metric of "R@0.5" on the Charades-STA dataset, and 3.5 points higher in "R@0.3" on ActivityNet-Captions. This indicates that, despite the absence of any multi-modal videos during training, our local-global contrastive learning baseline still exhibits better grounding ability. 2) MMDist Teacher surpasses CPL / MMDist Baseline. The incorporation of multi-modal inputs consistently leads to significant improvements across all evaluation metrics for the MMDist Teacher.
The improvements can be attributed to the supplemental cues contained in the multi-modal input sources, which help align the semantics of the video and the language query. MMDist Teacher demonstrates a 16.1% improvement in the metric of R@0.7 on Charades-STA and a 9.1% improvement in the metric of R@0.3 on ActivityNet-Captions, compared to CPL. Moreover, the grounding capability of MMDist Teacher is superior to MMDist Baseline thanks to the extra information from multi-modal videos. This verifies the superiority of enhancing the model with multi-modal videos and the effectiveness of the proposed cross-modal mutual learning. 3) MMDist Student outperforms MMDist Baseline. Even without using multi-modal videos as input, MMDist Student outperforms MMDist Baseline significantly thanks to the multi-modal knowledge distilled from the teacher model. The grounding accuracy of MMDist Student also evidently surpasses CPL. We highlight that the MMDist Student model achieves nearly the same accuracy as the Teacher on ActivityNet-Captions, where the student model is slightly better than the teacher in R@0.1 and R@0.3 (about 0.2 points), while slightly worse in R@0.5 (about 0.4 points).

Ablation Studies
1) The effectiveness of local-global contrastive learning. We investigate the effectiveness of each proposed module on the model's performance and conduct ablation studies on the Charades-STA dataset. Table 3 explores the impact of local contrastive learning and global contrastive learning. When either local or global contrastive learning is removed, the model's performance declines significantly in each evaluation metric. This verifies the effectiveness of local-global contrastive learning and the necessity of learning the semantic alignment between the video and language at both local and global levels.
Moreover, the following ablation study on multi-modal distillation further reveals that the semantic activation maps from local contrastive learning play an important role in transferring multi-modal knowledge from the teacher to the student model. 2) The benefit of local-global multi-modal distillation. This paper identifies two sorts of knowledge from the multi-modal teacher, i.e., temporal boundaries at the global level and semantic activation maps at the local level. We correspondingly craft the multi-modal distillation algorithm, composed of local- and global-level distillation, to transfer them. Table 4 summarizes the ablation study on local and global distillation. On the one hand, when either is removed from the full model, the performance decreases noticeably; in particular, removing the local distillation drops the metric of "R@0.5" by more than 1.5 points. On the other hand, both variants are still better than the "Student baseline". Here "Student baseline" denotes our single-modal baseline. This underscores that both local-level and global-level distillation are effective in leveraging the multi-modal training guidance. 3) The efficacy of cross-modal mutual learning. Here we study the effectiveness of cross-modal mutual learning in the teacher model. Table 5 presents the model accuracy after discarding the mutual learning loss. All three evaluation metrics show an evident drop. We also design a multi-modal baseline for the teacher model, i.e., "Teacher baseline", which directly concatenates the multi-modal features at the input level. Its localization accuracy surpasses our single-modal baseline by a clear margin, thanks to the supplementary information offered by the multi-modal videos. However, the teacher baseline model exhibits noticeable performance inferiority as it lacks the capability for collaborative learning from multi-modality.
Conclusion
This paper is the first to exploit multi-modal videos for weakly-supervised temporal video grounding. Firstly, we propose a cross-modal mutual learning framework to collaboratively train a teacher model with multi-modal video inputs. Secondly, we devise local- and global-level distillation algorithms to transfer this knowledge from the teacher model to a single-modal student model. Moreover, we introduce a local-global contrastive learning framework as a baseline, where the semantic contents of video and language are simultaneously aligned at both local and global scopes. Extensive experiments demonstrate the effectiveness of our methods on two widely-used datasets.

Acknowledgements
This work was carried out at the Rapid-Rich Object Search (ROSE) Lab, School of EEE, NTU, Singapore. The research is supported in part by the NTU-PKU Joint Research Institute (a collaboration between the Nanyang Technological University and Peking University that is sponsored by a donation from the Ng Teng Fong Charitable Foundation).

References
Bao, P.; and Mu, Y. 2022. Learning Sample Importance for Cross-Scenario Video Temporal Grounding. In ICMR.
Bao, P.; Shao, Z.; Yang, W.; Ng, B. P.; Er, M. H.; and Kot, A. C. 2024. Omnipotent Distillation with LLMs for Weakly-Supervised Natural Language Video Localization: When Divergence Meets Consistency. In AAAI.
Bao, P.; Yang, W.; Ng, B. P.; Er, M. H.; and Kot, A. C. 2023. Cross-modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization. In AAAI.
Bao, P.; Zheng, Q.; and Mu, Y. 2021. Dense Events Grounding in Video. In AAAI.
Carreira, J.; and Zisserman, A. 2017. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In CVPR.
Chen, Y.-C.; Lin, Y.-Y.; Yang, M.-H.; and Huang, J.-B. 2019. CrDoCo: Pixel-Level Domain Transfer With Cross-Domain Consistency. In CVPR.
Chen, Y.-W.; Tsai, Y.-H.; and Yang, M.-H. 2021.
End-to-end Multi-modal Video Temporal Grounding. In NeurIPS.
Chen, Z.; Ma, L.; Luo, W.; Tang, P.; and Wong, K.-Y. K. 2020. Look Closer to Ground Better: Weakly-Supervised Temporal Grounding of Sentence in Video. arXiv.
Dosovitskiy, A.; Fischer, P.; Ilg, E.; Häusser, P.; Hazirbas, C.; Golkov, V.; van der Smagt, P.; Cremers, D.; and Brox, T. 2015. FlowNet: Learning Optical Flow with Convolutional Networks. In ICCV.
Gao, J.; Sun, C.; Yang, Z.; and Nevatia, R. 2017. TALL: Temporal activity localization via language query. In ICCV.
Gao, M.; Davis, L. S.; Socher, R.; and Xiong, C. 2019. WSLLN: Weakly Supervised Natural Language Localization Networks. In EMNLP.
Garcia, N. C.; Morerio, P.; and Murino, V. 2018. Modality Distillation with Multiple Stream Networks for Action Recognition. In ECCV.
Gupta, S.; Hoffman, J.; and Malik, J. 2015. Cross Modal Distillation for Supervision Transfer. In CVPR.
Hinton, G. E.; Vinyals, O.; and Dean, J. 2015. Distilling the Knowledge in a Neural Network. arXiv:1503.02531.
Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. In ICLR.
Krishna, R.; Hata, K.; Ren, F.; Fei-Fei, L.; and Carlos Niebles, J. 2017. Dense-captioning events in videos. In ICCV.
Lin, Z.; Zhao, Z.; Zhang, Z.; Wang, Q.; and Liu, H. 2020. Weakly-Supervised Video Moment Retrieval via Semantic Completion Network. In AAAI.
Liu, M.; Wang, X.; Nie, L.; Tian, Q.; Chen, B.; and Chua, T.-S. 2018. Cross-modal moment localization in videos. In ACM MM.
Lucas, B. D.; and Kanade, T. 1981. An Iterative Image Registration Technique with an Application to Stereo Vision. In IJCAI.
Mithun, N. C.; Paul, S.; and Roy-Chowdhury, A. K. 2019. Weakly Supervised Video Moment Retrieval From Text Queries. In CVPR.
Mun, J.; Cho, M.; and Han, B. 2020. Local-Global Video-Text Interactions for Temporal Grounding. In CVPR.
Nayak, G. K.; Mopuri, K. R.; Shaj, V.; Babu, R. V.; and Chakraborty, A. 2019. Zero-Shot Knowledge Distillation in Deep Networks. In ICCV.
Pennington, J.; Socher, R.; and Manning, C. D. 2014. GloVe: Global Vectors for Word Representation. In EMNLP.
Qi, M.; Qin, J.; Yang, Y.; Wang, Y.; and Luo, J. 2021. Semantics-Aware Spatial-Temporal Binaries for Cross-Modal Video Retrieval. TIP.
Qiao, S.; Shen, W.; Zhang, Z.; Wang, B.; and Yuille, A. L. 2018. Deep Co-Training for Semi-Supervised Image Recognition. In ECCV.
Sigurdsson, G. A.; Varol, G.; Wang, X.; Farhadi, A.; Laptev, I.; and Gupta, A. K. 2016. Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding. In ECCV.
Song, Y.; Wang, J.; Ma, L.; Yu, Z.; and Yu, J. 2020. Weakly-Supervised Multi-Level Attentional Reconstruction Network for Grounding Textual Queries in Videos. arXiv:2003.07048.
Sreenu, G.; and Durai, M. A. S. 2019. Intelligent video surveillance: a review through deep learning techniques for crowd analysis. Journal of Big Data.
Tan, R.; Xu, H.; Saenko, K.; and Plummer, B. A. 2021. LoGAN: Latent Graph Co-Attention Network for Weakly-Supervised Video Moment Retrieval. In WACV.
Tarvainen, A.; and Valpola, H. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NIPS.
Tran, D.; Bourdev, L. D.; Fergus, R.; Torresani, L.; and Paluri, M. 2015. Learning Spatiotemporal Features with 3D Convolutional Networks. In ICCV.
Wang, J.; Ma, L.; and Jiang, W. 2020. Temporally Grounding Language Queries in Videos by Contextual Boundary-aware Prediction. In AAAI.
Wang, Q.; Zhan, L.; Thompson, P. M.; and Zhou, J. 2020. Multimodal Learning with Incomplete Modalities by Knowledge Distillation. In KDD.
Wang, Y.; Deng, J.; Zhou, W.; and Li, H. 2022. Weakly Supervised Temporal Adjacent Network for Language Grounding. TMM.
Wang, Z.; Chen, J.; and Jiang, Y.-G. 2021. Visual Co-Occurrence Alignment Learning for Weakly-Supervised Video Moment Retrieval. In ACM MM.
Wu, J.; Li, G.; Han, X.; and Lin, L. 2020.
Reinforcement Learning for Weakly Supervised Temporal Grounding of Natural Language in Untrimmed Videos. In ACM MM.
Yang, W.; Zhang, T.; Zhang, Y.; and Wu, F. 2021. Local Correspondence Network for Weakly Supervised Temporal Sentence Grounding. TIP.
Yu, B. X. B.; Liu, Y.; and Chan, K. C. C. 2021. Multimodal Fusion via Teacher-Student Network for Indoor Action Recognition. In AAAI.
Zach, C.; Pock, T.; and Bischof, H. 2007. A duality based approach for realtime TV-L1 optical flow. In PR.
Zhang, D.; Dai, X.; Wang, X.; Wang, Y.-F.; and Davis, L. S. 2019a. MAN: Moment alignment network for natural language moment retrieval via iterative graph adjustment. In CVPR.
Zhang, S.; Peng, H.; Fu, J.; and Luo, J. 2020a. Learning 2D Temporal Adjacent Networks for Moment Localization with Natural Language. In AAAI.
Zhang, Z.; Lin, Z.; Zhao, Z.; and Xiao, Z. 2019b. Cross-Modal Interaction Networks for Query-Based Moment Retrieval in Videos. In ACM SIGIR.
Zhang, Z.; Lin, Z.; Zhao, Z.; Zhu, J.; and He, X. 2020b. Regularized Two-Branch Proposal Networks for Weakly-Supervised Moment Retrieval in Videos. In ACM MM.
Zhang, Z.; Zhao, Z.; Lin, Z.; Zhu, J.; and He, X. 2020c. Counterfactual Contrastive Learning for Weakly-Supervised Vision-Language Grounding. In NeurIPS.
Zheng, M.; Huang, Y.; Chen, Q.; and Liu, Y. 2022a. Weakly Supervised Video Moment Localization with Contrastive Negative Sample Mining. In AAAI.
Zheng, M.; Huang, Y.; Chen, Q.; Peng, Y.; and Liu, Y. 2022b. Weakly Supervised Temporal Sentence Grounding with Gaussian-based Contrastive Proposal Learning. In CVPR.
Zhu, W.; Lu, J.; Li, J.; and Zhou, J. 2021. DSNet: A Flexible Detect-to-Summarize Network for Video Summarization. TIP.
Large Occluded Human Image Completion via Image-Prior Cooperating
Hengrun Zhao*1, Yu Zeng*2, Huchuan Lu†1, Lijun Wang1
1Dalian University of Technology  2Johns Hopkins University
zhaohengrun@mail.dlut.edu.cn, yzeng22@jhu.edu, lhchuan@dlut.edu.cn, ljwang@dlut.edu.cn

Abstract
The completion of large occluded human body images poses a unique challenge for general image completion methods. The complex shape variations of human bodies make it difficult to establish a consistent understanding of their structures. Furthermore, as human vision is highly sensitive to human bodies, even slight artifacts can significantly compromise image fidelity. To address these challenges, we propose a large occluded human image completion (LOHC) model based on a novel image-prior cooperative completion strategy. Our model leverages human segmentation maps as a prior and completes the image and the prior simultaneously. Compared to the widely adopted prior-then-image completion strategy for object completion, this cooperative completion process fosters more effective interaction between the prior and image information. Our model consists of two stages. The first stage is a transformer-based auto-regressive network that predicts the overall structure of the missing area by generating a coarse completed image at a lower resolution. The second stage is a convolutional network that refines the coarse images. As the coarse result may not always be accurate, we propose a Dynamic Fusion Module (DFM) to selectively fuse the useful features from the coarse image with the original input at the spatial and channel levels. Through extensive experiments, we demonstrate our method's superior performance compared to state-of-the-art methods.

Introduction
Image completion (a.k.a. image inpainting) refers to the task of reconstructing the missing part of a partially visible image based on the information in the visible parts. It has been an active research topic in the past decades.
Traditional methods (Efros and Freeman 2001; Kwatra et al. 2005) and earlier deep learning-based methods (Yu et al. 2018, 2019) mainly focused on the background inpainting problem, i.e., completing the background part of an image; they achieved superior performance and have been incorporated into many practical applications. Recent research has started paying more attention to the more difficult object inpainting problem, i.e., completing missing or partially missing objects in an image (Zhao et al. 2021b; Zeng, Lin, and Patel 2022; Xie et al. 2023). Compared to background inpainting, object inpainting is a much harder problem and requires a higher-level semantic understanding of image data. Although recent advances in deep generative models (Suvorov et al. 2022; Zhao et al. 2021a; Zheng et al. 2022a) have shown great promise, object inpainting remains a significant and challenging problem within the field of computer vision.

*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: A demonstration of our method and several representative image completion methods (Zheng et al. 2022a; Zhao et al. 2021a; Suvorov et al. 2022) for completing images under various occlusions.

Among all objects, the completion of the human in an image presents unique challenges due to both its intrinsic difficulties and the elevated scrutiny from human viewers. As human bodies have distinct features from the surrounding environment, the traditional inpainting principle based on borrowing features from the background is no longer effective. Due to the presence of clothes, different parts of the human body can possess unique features, which brings additional challenges for modeling. In addition, compared to other objects, human vision is more adapted to perceive the human body. Therefore, even small distortions and artifacts can lead to very unpleasant results.

Figure 2: Example results. We show a collection of images with their corresponding human segmentation maps and human areas, comprising the original image, the masked image, and the images completed by the coarse and refinement networks.

Fortunately, while human bodies can take a variety of postures and actions, they still have a roughly fixed form and topology, allowing the utilization of prior information such as segmentation maps (Wu et al. 2019; Zhao et al. 2021b; Han et al. 2019) and posture (Lassner, Pons-Moll, and Gehler 2017; Balakrishnan et al. 2018; Men et al. 2020; Grigorev et al. 2019). Most previous approaches for human image inpainting first complete a partially occluded human parsing map and then use it as a prior to guide the completion of the image. While these approaches have demonstrated promising results, their effectiveness is limited, especially in cases where a large area of the human body is occluded, as shown in Fig. 1. This is because completing a parsing map with a large missing area is not significantly easier for the model than completing the corresponding image. Over-reliance on inpainted parsing maps can lead to worse image inpainting results when the inpainted parsing maps are inaccurate. Furthermore, the domain gap between the parsing map and the image makes it difficult for the model to exploit the information in the parsing map to guide the image completion process, which limits the guidance effect of the parsing map.

To tackle these challenges, we propose a two-stage deep learning network with an image-prior cooperative completion strategy for human image completion. Example completion results are shown in Fig. 2. Different from traditional inpainting methods that rely solely on the guidance of pre-completed prior information (Wu et al. 2019; Zhao et al. 2021b; Nazeri et al.
2019; Yang and Guo 2020), the proposed method completes the image and the human body segmentation map simultaneously and uses the completed segmentation map to provide additional supervision on the human body area in the image. As shown in Fig. 3, compared to the unilateral guidance of the segmentation map for image completion, our cooperative completion strategy allows the information in the segmentation map to interact with the image information throughout the simultaneous completion of the segmentation map and the image.

Figure 3: Illustration of the prior-guided completion process. (a) Classical prior-guided methods such as (Wu et al. 2019; Zhao et al. 2021b; Han et al. 2019) usually complete the prior information first and then complete the image. (b) Our proposed LOHC completes both the prior and the image simultaneously, enabling a bidirectional interaction between the prior and image information.

The cooperative completion strategy enables the network to better learn and understand the relationship between people and the environment in the image with the help of the segmentation map, which ultimately leads to better human completion results. In addition, we found that existing discriminators cannot provide satisfactory guidance on both the local texture details and the human body structure. Therefore, we use two discriminators that focus on the texture details and the global human body structure in the image, respectively, ensuring that the generated human image has both a reasonable structure and realistic details.

We summarize our contributions as follows:
• We propose an innovative human image completion strategy based on the prior of the human body segmentation map, allowing the network to fully leverage the prior information to complete the image more accurately.
• We introduce a human image completion network that can realistically complete the human body in an image, even in cases where large areas are occluded.
Code is available at https://github.com/ZhaoHengrun/LOHC.
• We develop a set of training strategies for human image completion that effectively incorporate both human and environmental factors, improving the overall quality of the completed images.

Related Work
General Image Completion
Traditional image completion methods, such as patch-based and color diffusion-based methods (Efros and Freeman 2001; Kwatra et al. 2005; Barnes et al. 2009; Ballester et al. 2001; Chan and Shen 2001; Criminisi, Pérez, and Toyama 2004), often copy or propagate existing image content to fill in occluded areas. These methods may produce blurry and unrealistic results when applied to complex visual scenes. In the past few years, most mainstream works have focused on deep learning-based solutions that incorporate higher-level image understanding (Suvorov et al. 2022; Zheng et al. 2022b; Zhao et al. 2021a; Iizuka, Simo-Serra, and Ishikawa 2017; Park et al. 2020; Pathak et al. 2016; Yi et al. 2020; Yu et al. 2018, 2019; Zeng et al. 2021). These methods leverage generative adversarial networks (GANs) (Goodfellow et al. 2020) to generate complex structures and high-resolution details that are perceptually indistinguishable from real images. By combining advanced visual features with semantic understanding, GAN-based methods significantly improve the effectiveness of image completion, achieving high-quality results even in the presence of large occluded areas and complex structures. Diffusion models (Sohl-Dickstein et al. 2015) have recently demonstrated impressive image generation capabilities and have also achieved excellent results in the field of image completion (Rombach et al. 2022; Lugmayr et al. 2022). Nevertheless, these methods often demand substantial computational resources, thereby constraining their research and practical application.
Object Image Completion
Completing objects in images is a more challenging task than completing general content. Xiong et al. (2019) propose a foreground-aware image inpainting method that utilizes explicit contour guidance for image completion. Ke et al. (Ke, Tai, and Tang 2021) propose a video object inpainting network that models the shape and boundary of the object separately across the frame sequence to achieve both shape completion and texture generation. Zeng et al. (Zeng, Lin, and Patel 2022) propose a Contextual Object Generator (CogNet), which completes the occluded area by generating an object based on the contextual content and the shape of the occlusion mask. Xie et al. (2023) propose SmartBrush, which completes a missing region with an object using both text and shape guidance.

The human body is a highly intricate object, making its completion even more challenging. In recognition of this, several researchers have focused on the human image completion task. Han et al. (2019) propose Fashion Inpainting Networks (FiNet), which can reconstruct missing clothing parts in fashion portrait images based on partially missing parsing maps. Wu et al. (2019) propose a two-stage deep learning framework for portrait image completion that extracts a complete human body structure using a human parsing network in the first stage and fills the unknown area using an image completion network in the second stage. Zhao et al. (2021b) propose a prior-based human image completion method (PBHC) that uses structure and structure-texture correlation priors to recover a reasonable human shape and compensate for occluded texture. However, these methods are limited by the effectiveness of human parsing map completion.
Due to the lack of advanced semantic information, the completion of the human parsing map often fails to produce satisfactory results when there are large occluded areas or when the person has a complex posture or self-occlusions, resulting in poor quality of the final generated image.

Method
Image-Prior Cooperative Completion
Given an image I with a binary mask M indicating the area to inpaint, an image completion model G typically generates a completed image I_g based on the masked image I_m = I ⊙ M and the mask M:

I_g = G(I_m, M)  (1)

where ⊙ represents element-wise multiplication. In object inpainting, an object prior (such as a segmentation map) is often used as guidance. These prior-based object completion methods typically complete the segmentation prior S_m first and then complete the image based on the completed prior, using separate models G_S and G_I:

S_g = G_S(I_m, S_m, M)  (2)
I_g = G_I(I_m, S_g, M)  (3)

Taking inspiration from recent studies (Wang et al. 2023a,b; Ye and Xu 2022; Jain et al. 2023; Xi et al. 2022) that highlight the benefits of multi-task joint learning over separate single-task learning in unified perceptual models, our work delves into the image-prior cooperative completion strategy for human image completion. Instead of performing a prior pre-completion process with a separate network, we model the joint completion of a masked image and prior as an image-prior co-completion process.
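The distinction between the two strategies is essentially an interface difference, which can be sketched as follows. The stand-in networks here are trivial lambdas used only for illustration; the real G_S, G_I, and G are deep models.

```python
import numpy as np

def mask_image(image, mask):
    """I_m = I ⊙ M: keep only the visible pixels (mask is 1 where visible)."""
    return image * mask

def prior_then_image(g_s, g_i, i_m, s_m, mask):
    """Sequential strategy, Eqs. (2)-(3): the image network only ever sees
    the finished prior S_g, so information flows one way."""
    s_g = g_s(i_m, s_m, mask)
    i_g = g_i(i_m, s_g, mask)
    return i_g, s_g

def cooperative(g, i_m, s_m, mask):
    """Cooperative strategy: a single model completes image and prior
    together, so the two streams can interact throughout."""
    return g(i_m, s_m, mask)
```

The sequential pipeline makes the image result hostage to the quality of the completed prior, while the cooperative interface lets errors in either stream be corrected by the other during joint completion.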
Given an incomplete image I_m and the corresponding incomplete segmentation map S_m, our co-completion process aims to produce the completed image I_g and segmentation prior S_g simultaneously with a unified model G:

I_g, S_g = G(I_m, S_m, M)  (4)

The image-prior cooperative completion strategy allows the prior information to be incorporated throughout the entire joint completion process and combined more effectively with the image features, leading to a stronger understanding of human body representation and ultimately improving the quality of the completed human images.

Overall Architecture
We model the image-prior cooperative completion process with a large occluded human image completion (LOHC) network. It consists of a coarse network and a refinement network, as depicted in Fig. 4. The coarse network aims to complete the overall structure of the missing area at a lower resolution. It takes the downsampled occluded image I_m, segmentation map S_m, and mask M as input and generates a low-resolution completed image I_c and segmentation map S_c. Inspired by the excellent contextual interaction capabilities of the Masked Autoencoder (MAE) (He et al. 2022), we use an auto-regressive transformer architecture that operates on image patches. Subsequently, the refinement network generates a full-resolution completed image I_g and human body segmentation map S_g based on I_c and S_c.

Coarse Network
The coarse network is a transformer-based encoder-decoder network. To enable the network to encode the human body area more effectively, we utilize two encoders (Encoders I and H in Fig. 4) to capture the information of the human body area and the environment separately. The two sets of encoder features are then concatenated and decoded by a single large decoder into image patches. To reduce the computational cost and focus on the overall structure of the image, we scale I_m, S_m, and M to 64 × 64 and split them into non-overlapping 4 × 4 patches to be processed by the coarse network.
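The patch splitting used by the coarse network amounts to a simple reshape; the helper below is a NumPy sketch (a 64 × 64 input with p = 4 yields 256 patches). The function name is ours.

```python
import numpy as np

def to_patches(x, p=4):
    """Split an H x W map into non-overlapping p x p patches, row-major."""
    h, w = x.shape
    assert h % p == 0 and w % p == 0
    # (H, W) -> (H/p, p, W/p, p) -> (H/p, W/p, p, p) -> (N, p, p)
    return (x.reshape(h // p, p, w // p, p)
             .transpose(0, 2, 1, 3)
             .reshape(-1, p, p))
```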
We remove all patches that are partially or completely occluded by M from the encoders' input, retaining only the patches that are completely unoccluded. The decoder then learns to predict the full set of patches from the ground-truth unoccluded image and segmentation map.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7471

Figure 4: The pipeline of our proposed human image completion network. Given an incomplete image and the corresponding incomplete segmentation map, the coarse network generates a coarse completed image and segmentation map simultaneously at low resolution. The refinement network then completes the final high-resolution image and segmentation map based on the coarse result and the original incomplete image.

Figure 5: Mask example. In cases where the mask is small and dense, the patch mask caused by MAE may lead to significant expansion of the mask. Nevertheless, thanks to the dynamic fusion module, the refinement network can still selectively utilize effective information to generate realistic content.

Figure 6: Architecture of our proposed dynamic fusion module (DFM), which includes two parts: channel attention (blue area) and spatial attention (pink area).

Refinement Network

Our refinement network employs an encoder-decoder structure and utilizes Fast Fourier Convolution (FFC) blocks (Suvorov et al. 2022). The transformers' capability to build long-range dependencies makes the coarse network suitable for processing largely occluded images. However, as depicted in Fig. 5, its patch-wise processing scheme ignores useful information in partially occluded patches, resulting in a significant deterioration in the quality of the completed image, especially when the mask is small and dense. Fortunately, the refinement network based on convolution blocks has inherent advantages in completing small and spotty occluded areas.
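The patch-level filtering used by the coarse network (keep only 4 × 4 patches untouched by the mask) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the function name and the binary-mask convention (1 = occluded) are assumptions:

```python
import numpy as np

def visible_patch_indices(mask, patch=4):
    # mask: (H, W) binary array, 1 = occluded pixel. A patch is kept only
    # if none of its pixels is occluded, mirroring the rule of feeding
    # the encoders exclusively with completely unoccluded patches.
    H, W = mask.shape
    blocks = mask.reshape(H // patch, patch, W // patch, patch)
    occluded = blocks.any(axis=(1, 3)).reshape(-1)  # True if patch touches the mask
    return np.flatnonzero(~occluded)

# 64 x 64 input split into non-overlapping 4 x 4 patches (256 in total).
mask = np.zeros((64, 64), dtype=np.uint8)
mask[0:10, 0:10] = 1            # occlude the top-left corner
keep = visible_patch_indices(mask)
```

The returned indices would then select the patch tokens that actually enter the encoders, while the decoder is trained to reconstruct all 256 patches.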
In order to combine the coarse network's large-area construction capability with the refinement network's nearby-pixel completion capability, we selectively fuse the features from I_c and I_m at the pixel level with a Dynamic Fusion Module (DFM) before processing by the refinement network. As shown in Fig. 6, this module comprehensively considers factors such as image content, mask, and segmentation map, and applies weighted aggregation separately along the channel and spatial dimensions.

Specifically, the channel attention mechanism in DFM generates a pixel-wise attention weight for each channel to modulate the features from I_c before the fusion operation:

F_c = W_CA ⊙ F_c, (5)

where the weight W_CA is calculated as:

W_CA = Φ_c ⊙ Φ_m (6)
Φ_i = φ_i^I ⊙ φ_i^S ⊙ φ_i^M (7)

where φ_i^I, φ_i^S, and φ_i^M are computed as follows:

φ_i^I = F(I_i) (8)
φ_i^S = F(S_c, S_m, |S_c − S_m|) (9)
φ_i^M = F(M_p, M, |M_p − M|) (10)

where F represents convolutional blocks, S_c represents the segmentation map completed by the coarse network, S_m represents the original occluded segmentation map, and M_p represents the mask expanded by patching in the coarse network.

The spatial attention is modulated by two pixel-wise weights that are multiplied with and added to the features F_fus fused through the channel attention, respectively:

F_SA = W_SA^α ⊙ F_fus + W_SA^β (11)

where W_SA^α and W_SA^β are obtained by embedding Ψ, which is calculated as:

Ψ = ψ^I ⊙ ψ^S ⊙ ψ^M (12)

where ψ^S and ψ^M are the segmentation and mask embeddings, and ψ^I is the spatial attention embedding of the features before fusion, obtained similarly to (Woo et al. 2018).

Loss Functions

Following common practice in previous studies, we design our loss function by combining an L1-based loss, an adversarial loss, and a perceptual loss. We apply an L1 loss between the completed image / segmentation map and the ground truth.
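As a rough illustration of the channel-attention weighting in Eqs. (5)-(10), the sketch below replaces the paper's convolutional blocks F with a toy stand-in that stacks its inputs and squashes them to weights in (0, 1); all function names and the single-channel simplification are assumptions, not the actual DFM implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def toy_conv_block(*maps):
    # Stand-in for the paper's convolutional blocks F: stack the input
    # maps and squash them to a single weight map with values in (0, 1).
    stacked = np.stack(maps, axis=0)      # (k, H, W)
    return sigmoid(stacked.mean(axis=0))  # (H, W)

def branch_weight(image, S_c, S_m, M_p, M):
    # Eqs. (7)-(10): Phi = phi_I * phi_S * phi_M for one input branch.
    phi_I = toy_conv_block(image)
    phi_S = toy_conv_block(S_c, S_m, np.abs(S_c - S_m))
    phi_M = toy_conv_block(M_p, M, np.abs(M_p - M))
    return phi_I * phi_S * phi_M

rng = np.random.default_rng(0)
H = W = 8
I_c, I_m = rng.random((H, W)), rng.random((H, W))  # coarse / occluded image
S_c, S_m = rng.random((H, W)), rng.random((H, W))  # coarse / occluded segmentation
M = (rng.random((H, W)) > 0.5).astype(float)       # occlusion mask
M_p = M                                            # patch-expanded mask (toy: equal to M)

# Eq. (6): W_CA = Phi_c * Phi_m, then Eq. (5): modulate the coarse features.
W_CA = branch_weight(I_c, S_c, S_m, M_p, M) * branch_weight(I_m, S_c, S_m, M_p, M)
F_c = rng.random((H, W))
F_c_mod = W_CA * F_c
```

Because each branch weight lies in (0, 1), the product W_CA suppresses coarse-feature channels wherever the image, segmentation, and mask cues disagree.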
To provide additional supervision for the human area, we apply an additional L1 loss on the human area. Our L1-based loss term is as follows:

L_rec = (‖I − I_g‖_1 + ‖S − S_g‖_1 + ‖H − H_g‖_1) ⊙ (1 − M) (13)

where H = S ⊙ I and H_g = S_g ⊙ I_g refer to the human area in the ground-truth image and the completed image, respectively.

For the adversarial loss L_D, we use two PatchGAN discriminators (Isola et al. 2017). One discriminator, D_local, takes only the image as input and focuses on matching the local statistics of the ground-truth image patches. The other discriminator, D_global, takes the image together with the segmented human area and the segmentation map as inputs to improve the global human body structure. Please refer to (Isola et al. 2017; Goodfellow et al. 2020) for the specific definition of the adversarial loss. We also adopt the feature matching loss proposed in (Wang et al. 2018), and the total adversarial loss is computed as follows:

L_adv = L_G^local + L_G^global + 10 L_fm^local + 10 L_fm^global, (15)

where L_G^local and L_fm^local are the vanilla adversarial loss and feature matching loss for D_local, and L_G^global and L_fm^global are those for D_global. We use architectures similar to those in (Isola et al. 2017) for the discriminators.

The total loss is defined as follows:

L = 10 L_rec + 5 L_adv + 60 L_pl, (16)

where L_pl represents the high-receptive-field perceptual loss proposed in (Suvorov et al. 2022). We apply the same loss terms to both the coarse and refinement networks, except for L_pl, which is only applied to the refinement network.

Experiments

Datasets

Currently, there are no datasets specifically designed for human image completion. Previous works utilize the LIP dataset (Gong et al. 2017) and some fashion portrait parsing datasets (Liang et al. 2015; Lassner, Pons-Moll, and Gehler 2017) for training and evaluation. However, only the rectangular area of the human body is preserved in the images of the LIP dataset.
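The weighted combination of Eqs. (13)-(16) above can be sketched as follows. This is a simplified NumPy illustration: the adversarial and perceptual terms are passed in as precomputed scalars, and all names are assumptions rather than the authors' code:

```python
import numpy as np

def total_loss(I, I_g, S, S_g, M, L_adv, L_pl):
    # Eq. (13): masked L1 over image, segmentation map, and human area,
    # where H = S * I selects the human region. Eq. (16): weighted sum
    # with the paper's coefficients 10 / 5 / 60.
    H_gt, H_pred = S * I, S_g * I_g
    rec_map = (np.abs(I - I_g) + np.abs(S - S_g)
               + np.abs(H_gt - H_pred)) * (1.0 - M)
    L_rec = rec_map.mean()
    return 10.0 * L_rec + 5.0 * L_adv + 60.0 * L_pl

rng = np.random.default_rng(0)
I = rng.random((64, 64))
S = (rng.random((64, 64)) > 0.5).astype(float)
M = np.zeros((64, 64)); M[16:48, 16:48] = 1.0   # central square mask

# A perfect reconstruction incurs zero reconstruction loss.
loss_perfect = total_loss(I, I, S, S, M, L_adv=0.0, L_pl=0.0)
loss_bad = total_loss(I, 1.0 - I, S, S, M, L_adv=0.0, L_pl=0.0)
```

Note that the (1 − M) factor restricts the L1 supervision to the visible region, while the extra ‖H − H_g‖ term doubles the penalty inside the human area.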
This restriction can make it hard for image completion methods to learn and complete specific human structures based on the image content, since the aspect ratio of the image becomes closely tied to the approximate posture and position of the person in the image. Moreover, if the image is uniformly scaled to a consistent size, it may disrupt the original scale and structure of the image content, further complicating the image completion process. As for the fashion portrait parsing datasets, the human postures are too simple and uniform, which also makes them unsuitable for our task.

As a result, we opted to use the AHP dataset (Zhou et al. 2021), which consists of 56,599 images collected from several large-scale instance segmentation and detection datasets, including COCO (Lin et al. 2014), VOC (Everingham et al. 2010), LIP (Gong et al. 2017), Objects365 (Shao et al. 2019), and OpenImages (Kuznetsova et al. 2020). To prepare the dataset, we first cropped the square area where the human was located in each image and uniformly scaled it to 256 × 256. For testing, we used the original dataset's validation set of 3,400 images, while for training, we utilized the original dataset's training set of 53,199 images.

To generate masks for our training and testing data, we utilized the same methods as TFill (Zheng et al. 2022a) for generating central square masks, random regular masks, and random irregular masks. In addition, we adopted the mask generation method used in Deepfill v2 (Yu et al. 2019) to generate a set of irregular masks. We randomly generated these four types of masks in real time during training. We evaluated the performance of the methods on the central square masks, rectangular masks, and three sets of object masks provided by (Zeng et al. 2020).

Implementation Details

Our network training procedure consists of three distinct stages. First, we pre-train the coarse network following (He et al. 2022).
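The real-time mask sampling described above (drawing one of several mask types per training sample) can be sketched as follows. The two generators shown, a central square and a random rectangle, are simplified stand-ins for the TFill and Deepfill v2 mask generators, and all names are assumptions:

```python
import numpy as np

def central_square_mask(size=256, ratio=0.5):
    # Central square covering `ratio` of the width/height (1 = occluded).
    mask = np.zeros((size, size), dtype=np.uint8)
    side = int(size * ratio)
    lo = (size - side) // 2
    mask[lo:lo + side, lo:lo + side] = 1
    return mask

def random_rect_mask(size=256, rng=None):
    # Random rectangle between 1/4 and 1/2 of the image side.
    rng = rng if rng is not None else np.random.default_rng()
    mask = np.zeros((size, size), dtype=np.uint8)
    h, w = rng.integers(size // 4, size // 2, size=2)
    top, left = rng.integers(0, size - h), rng.integers(0, size - w)
    mask[top:top + h, left:left + w] = 1
    return mask

# One mask type is drawn at random for each training sample.
rng = np.random.default_rng(0)
generators = [central_square_mask, lambda: random_rect_mask(rng=rng)]
mask = generators[rng.integers(len(generators))]()
```

In practice the list would hold all four mask generators, so each training sample sees a freshly drawn mask of a random type.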
To encourage long-distance pixel associations, we employ a random mask with a 90% masking ratio, as the adjacent visible pixels under a 75% masking-ratio random mask are still relatively close to each other. We then retain the entire pre-trained coarse network and subject it to normal training. Finally, we train the refinement network.

Thanks to the reduced computational requirements at lower resolutions, we were able to design a deeper coarse network without sacrificing performance. Specifically, our coarse network includes two encoders, each with 32 layers and a width of 128. The decoder has 32 layers and a width of 256, which matches the combined width of the two encoders' tokens. Similar to LaMa (Suvorov et al. 2022), our refinement network employs 4× downsampling and upsampling for image features and utilizes 9 FFC blocks to complete image features at lower scales. We obtain the segmentation map of the human body through a U-Net (Ronneberger, Fischer, and Brox 2015).

In our training process, we utilized Adam as the optimizer for all components. We trained the coarse network with a batch size of 128 and a learning rate of 1e-4. For the refinement network, we used a smaller batch size of 16 and a learning rate of 1e-3. The learning rates of all discriminators were set to 1e-6. We use the PyTorch framework for our implementation and train on an Nvidia A100 GPU.

Figure 7: Visual comparison of completion images of our method and other methods. Zoom-in to see the details.
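The 90% masking-ratio pre-training described in the implementation details amounts to keeping a random ~10% subset of patch indices visible; a minimal sketch (names are assumptions):

```python
import numpy as np

def sample_patch_mask(num_patches=256, mask_ratio=0.9, rng=None):
    # Randomly split patch indices into masked and visible sets. A 90%
    # ratio leaves only ~26 of 256 patches visible, forcing the encoder
    # to rely on long-distance context rather than nearby visible pixels.
    rng = rng if rng is not None else np.random.default_rng()
    num_masked = int(num_patches * mask_ratio)
    perm = rng.permutation(num_patches)
    return perm[:num_masked], perm[num_masked:]   # (masked, visible)

# 64 x 64 input split into 4 x 4 patches -> 256 patches in total.
masked, visible = sample_patch_mask(256, 0.9, np.random.default_rng(0))
```

Lowering the ratio back to 75% would leave 64 visible patches, whose neighbours are close enough that the encoder can lean on local cues, which is what the higher ratio is meant to prevent.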
Table 1: Quantitative comparison on the complete image and the human area in the image. Bold for best results.

Whole Image
Mask (Ratio)         Metric  CR-Fill  PBHC    TFill   Co-Mod  LaMa    Ours
Center (75.00%)      PSNR    12.920   14.930  14.818  14.543  14.517  14.931
                     SSIM    0.4568   0.5204  0.5311  0.5359  0.5249  0.5486
                     LPIPS   0.4256   0.3975  0.3779  0.3579  0.3351  0.3165
                     FID     93.894   74.791  55.430  56.192  40.649  37.468
Rectangle (75.26%)   PSNR    12.772   14.270  14.093  14.180  14.400  14.626
                     SSIM    0.4452   0.5058  0.4997  0.5213  0.5139  0.5257
                     LPIPS   0.4477   0.4488  0.4230  0.4049  0.3686  0.3515
                     FID     105.173  89.363  57.808  61.248  45.974  40.246
VOC (42.27%)         PSNR    18.392   19.605  19.299  19.603  19.684  20.458
                     SSIM    0.7077   0.7256  0.7088  0.7398  0.7278  0.7501
                     LPIPS   0.2384   0.2309  0.2234  0.1867  0.1896  0.1669
                     FID     40.236   42.597  31.967  19.882  21.697  18.606
XPIE (34.94%)        PSNR    19.125   20.309  20.264  20.388  20.476  21.309
                     SSIM    0.7588   0.7729  0.7610  0.7854  0.7747  0.7952
                     LPIPS   0.2001   0.1928  0.1792  0.1519  0.1560  0.1357
                     FID     31.221   33.806  24.488  15.083  16.849  14.571
HKU-IS (32.72%)      PSNR    20.303   21.693  21.486  21.727  21.660  22.665
                     SSIM    0.7858   0.7975  0.7869  0.8102  0.7997  0.8193
                     LPIPS   0.1779   0.1608  0.1577  0.1289  0.1331  0.1152
                     FID     25.676   26.053  20.517  12.534  13.565  12.146

Human Area
Mask (Ratio)         Metric  CR-Fill  PBHC    TFill   Co-Mod  LaMa    Ours
Center (42.88%)      PSNR    21.721   22.729  22.973  22.946  22.877  23.722
                     SSIM    0.9176   0.9210  0.9225  0.9255  0.9248  0.9281
                     LPIPS   0.0804   0.0796  0.0715  0.0668  0.0646  0.0610
                     FID     25.280   42.565  17.103  15.368  13.192  11.957
Rectangle (57.13%)   PSNR    20.532   21.828  21.710  21.865  21.881  22.382
                     SSIM    0.8870   0.8937  0.8906  0.8972  0.8965  0.8994
                     LPIPS   0.1134   0.1124  0.1084  0.0995  0.0950  0.0923
                     FID     38.341   51.193  26.697  24.921  19.849  19.615
VOC (57.51%)         PSNR    22.550   23.153  23.489  23.736  23.932  24.016
                     SSIM    0.8922   0.8954  0.8921  0.9019  0.8991  0.9053
                     LPIPS   0.1070   0.1046  0.1004  0.0919  0.0901  0.0857
                     FID     32.771   47.093  23.560  19.173  18.537  18.484
XPIE (54.10%)        PSNR    22.937   23.479  24.074  24.159  24.376  24.543
                     SSIM    0.9007   0.9032  0.9014  0.9097  0.9069  0.9131
                     LPIPS   0.0991   0.0961  0.0893  0.0827  0.0821  0.0776
                     FID     29.036   42.545  20.120  15.512  15.731  15.217
HKU-IS (40.30%)      PSNR    26.048   26.661  27.041  27.358  27.346  26.994
                     SSIM    0.9282   0.9292  0.9282  0.9353  0.9327  0.9369
                     LPIPS   0.0739   0.0699  0.0660  0.0595  0.0596  0.0563
                     FID     17.034   24.236  12.746  9.770   9.680   9.502

Performance of Our Approach

To evaluate the effectiveness of our proposed method, we compared it with several state-of-the-art image completion methods, including LaMa (Suvorov et al. 2022), Co-Mod-GAN (Zhao et al. 2021a), TFill (Zheng et al. 2022a), and CR-Fill (Zeng et al. 2021), as well as the human body completion method PBHC (Zhao et al. 2021b). We quantitatively measured the performance of each method using PSNR, SSIM, FID, and LPIPS.

Quantitative evaluation. For the human visual experience, the quality of the human area in the image has a more significant impact on the perceived authenticity of the image. Therefore, we conducted additional evaluations on the human area in the image based on the segmentation map of the original image. As illustrated in Table 1, LOHC achieves state-of-the-art performance on the AHP dataset. In particular, LOHC exhibits significantly superior performance over other methods when completing images with large areas of occluded human body parts.

Visual quality. We provide a visual comparison between our method and existing approaches. As shown in Fig. 7, our method is able to complete the human body more fully and plausibly, while maintaining a clear boundary between the person and the environment.

User study. We randomly selected 50 images from the AHP validation set and occluded them with 10 masks. We invited 10 human scorers for evaluation; the scorers were presented with the images completed by the different methods in random order and asked to select the best result. The number of user preferences for each method is shown in Table 2. Our method was most frequently preferred by the human scorers.

Ablation Study

In this part, we study the specific effect of each component of our method. All of the following experiments use the same model parameters and experimental settings.
To speed up the experiments, we do not conduct pre-training on the coarse network and only test the images on VOC masks. The experimental results are shown in Table 3.

Table 2: Subjective comparison. Bold for best results.

Mask          Center   Rectangle  VOC      XPIE     HKU-IS
Mask Ratio    75.00%   75.26%     42.27%   34.94%   32.72%
Ours          82       56         44       72       37
LaMa          11       26         20       8        40
Co-Mod-GAN    3        12         26       14       14
TFill         4        6          6        5        5
PBHC          0        0          1        1        2
CR-Fill       0        0          3        0        2

Table 3: Ablation study on different configurations of LOHC.

Category         Option  PSNR     SSIM    LPIPS   FID
Baseline         -       19.8768  0.7375  0.2009  32.4197
Prior            I       18.9394  0.7149  0.2181  39.0026
                 II      19.1940  0.7152  0.2115  38.4703
Coarse network   III     17.9160  0.7096  0.2364  39.7605
                 IV      18.1886  0.7122  0.2315  37.8431
                 V       19.0473  0.7324  0.2142  33.0043
                 VI      19.5710  0.7257  0.2085  32.9269
DFM              VII     17.2023  0.7188  0.2757  35.9658
                 VIII    19.3467  0.7210  0.2133  33.2503
                 IX      19.7791  0.7273  0.1990  33.3295
Discriminator    X       19.7978  0.7216  0.2229  38.3030

Human segmentation prior. To examine the effect of the human body segmentation prior on image completion quality, we first fully exclude the segmentation information from the baseline network (Option I). Here, only the occluded image is used as input to the network to generate the completed image. The input of the global discriminators and of the human image encoder in the coarse network is replaced by the complete image, and the loss terms related to segmentation information are removed. Furthermore, we analyzed the effect of an unprocessed occluded prior on image completion: we input both the occluded image and the occluded segmentation map into the model, without providing any supervision on, or completion of, the segmentation map (Option II). The results indicate that integrating the prior information on human body segmentation enhances the network's image completion performance to some extent.
Moreover, the proposed cooperative completion strategy can efficiently utilize segmentation information, resulting in a significant improvement in the network's performance.

Coarse network. We verified the effectiveness of the coarse network and the dual-encoder setting through experiments. We first completely removed the coarse network from the baseline; the channel attention part used in the DFM for fusion was also removed (Option III). Next, to illustrate the effect of separately encoding the human part of the image in the coarse network, we constructed three comparative schemes to replace the encoders of the coarse network in the baseline: encoding only the image with a single large encoder (Option IV); encoding both the image and the human part of the image simultaneously with the large encoder (Option V); and using the original two encoders in the baseline but making both of them encode only the image (Option VI). All three schemes have the same number of parameters as the baseline. The results show that the coarse completed images generated by the coarse network can significantly improve the structural authenticity of the final completed images. Although the human image part without a background carries less information than the complete image, it enables the decoder to better analyze the shape of the person and generate a more realistic human body, especially when the human part is encoded separately.

Dynamic fusion module. We conducted experiments that separately exclude the complete dynamic fusion module (Option VII), its channel attention (Option VIII), and its spatial attention (Option IX) in the refinement network to assess their impact on network performance. The experimental results indicate that directly concatenating coarse images with occluded images substantially limits the performance of the refinement network.
Both the channel attention and the spatial attention in the dynamic fusion module significantly improve the performance of the network, which is essential for achieving higher-quality output.

Dual discriminator. To evaluate the efficacy of the global discriminators, we removed them from both the coarse and refinement networks (Option X). The experimental results show that the global discriminator plays a crucial role in guiding the overall structure of images in both the coarse and refinement networks, and its removal significantly impacts the network's performance. Moreover, both the human part image and the complete image are indispensable for the effective operation of the global discriminator.

Conclusion

In this paper, we have investigated the completion of large occluded human images. We proposed a two-stage human image completion network based on an image-prior cooperative completion strategy. Our study highlights that integrating prior completion with the image completion process can be a more effective approach to utilizing prior information for generating more realistic images. We demonstrate the importance of providing additional supervision on human body parts during training for human body image completion tasks. Achieving adequate attention to both human structure and detailed texture with a single discriminator can be challenging, but our findings suggest that this issue can be effectively addressed by employing two discriminators, one supervising global features and the other supervising local features. Finally, extensive experimental results indicate that our method performs better than state-of-the-art methods on the human image completion task. In addition, the application of our strategy to other backbone architectures remains feasible in theory, and human image completion based on other backbones, such as diffusion models, deserves further study in the future.
Acknowledgements

The paper is supported by the National Natural Science Foundation of China under grants No. 62293540, 62293542, and 62276045.

References

Balakrishnan, G.; Zhao, A.; Dalca, A. V.; Durand, F.; and Guttag, J. 2018. Synthesizing images of humans in unseen poses. In Proceedings of the IEEE conference on computer vision and pattern recognition, 8340–8348.

Ballester, C.; Bertalmio, M.; Caselles, V.; Sapiro, G.; and Verdera, J. 2001. Filling-in by joint interpolation of vector fields and gray levels. IEEE Transactions on Image Processing, 10(8): 1200–1211.

Barnes, C.; Shechtman, E.; Finkelstein, A.; and Goldman, D. B. 2009. PatchMatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph., 28(3): 24.

Chan, T. F.; and Shen, J. 2001. Nontexture inpainting by curvature-driven diffusions. Journal of Visual Communication and Image Representation, 12(4): 436–449.

Criminisi, A.; Pérez, P.; and Toyama, K. 2004. Region filling and object removal by exemplar-based image inpainting. IEEE Transactions on Image Processing, 13(9): 1200–1212.

Efros, A. A.; and Freeman, W. T. 2001. Image quilting for texture synthesis and transfer. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, 341–346.

Everingham, M.; Van Gool, L.; Williams, C. K.; Winn, J.; and Zisserman, A. 2010. The pascal visual object classes (VOC) challenge. International Journal of Computer Vision, 88: 303–338.

Gong, K.; Liang, X.; Zhang, D.; Shen, X.; and Lin, L. 2017. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. In Proceedings of the IEEE conference on computer vision and pattern recognition, 932–940.

Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative adversarial networks. Communications of the ACM, 63(11): 139–144.
Grigorev, A.; Sevastopolsky, A.; Vakhitov, A.; and Lempitsky, V. 2019. Coordinate-based texture inpainting for pose-guided human image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12135–12144.

Han, X.; Wu, Z.; Huang, W.; Scott, M. R.; and Davis, L. S. 2019. Compatible and diverse fashion image inpainting. arXiv preprint arXiv:1902.01096.

He, K.; Chen, X.; Xie, S.; Li, Y.; Dollár, P.; and Girshick, R. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 16000–16009.

Iizuka, S.; Simo-Serra, E.; and Ishikawa, H. 2017. Globally and locally consistent image completion. ACM Transactions on Graphics (ToG), 36(4): 1–14.

Isola, P.; Zhu, J.-Y.; Zhou, T.; and Efros, A. A. 2017. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1125–1134.

Jain, J.; Li, J.; Chiu, M. T.; Hassani, A.; Orlov, N.; and Shi, H. 2023. Oneformer: One transformer to rule universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2989–2998.

Ke, L.; Tai, Y.-W.; and Tang, C.-K. 2021. Occlusion-aware video object inpainting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 14468–14478.

Kuznetsova, A.; Rom, H.; Alldrin, N.; Uijlings, J.; Krasin, I.; Pont-Tuset, J.; Kamali, S.; Popov, S.; Malloci, M.; Kolesnikov, A.; et al. 2020. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. International Journal of Computer Vision, 128(7): 1956–1981.

Kwatra, V.; Essa, I.; Bobick, A.; and Kwatra, N. 2005. Texture optimization for example-based synthesis. In ACM SIGGRAPH 2005 Papers, 795–802.

Lassner, C.; Pons-Moll, G.; and Gehler, P. V. 2017. A generative model of people in clothing.
In Proceedings of the IEEE international conference on computer vision, 853–862.

Liang, X.; Xu, C.; Shen, X.; Yang, J.; Liu, S.; Tang, J.; Lin, L.; and Yan, S. 2015. Human parsing with contextualized convolutional neural network. In Proceedings of the IEEE international conference on computer vision, 1386–1394.

Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 740–755. Springer.

Lugmayr, A.; Danelljan, M.; Romero, A.; Yu, F.; Timofte, R.; and Van Gool, L. 2022. Repaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11461–11471.

Men, Y.; Mao, Y.; Jiang, Y.; Ma, W.-Y.; and Lian, Z. 2020. Controllable person image synthesis with attribute-decomposed GAN. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5084–5093.

Nazeri, K.; Ng, E.; Joseph, T.; Qureshi, F. Z.; and Ebrahimi, M. 2019. Edgeconnect: Generative image inpainting with adversarial edge learning. arXiv preprint arXiv:1901.00212.

Park, T.; Zhu, J.-Y.; Wang, O.; Lu, J.; Shechtman, E.; Efros, A.; and Zhang, R. 2020. Swapping autoencoder for deep image manipulation. Advances in Neural Information Processing Systems, 33: 7198–7211.

Pathak, D.; Krahenbuhl, P.; Donahue, J.; Darrell, T.; and Efros, A. A. 2016. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2536–2544.

Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10684–10695.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, 234–241. Springer.

Shao, S.; Li, Z.; Zhang, T.; Peng, C.; Yu, G.; Zhang, X.; Li, J.; and Sun, J. 2019. Objects365: A large-scale, high-quality dataset for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 8430–8439.

Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, 2256–2265. PMLR.

Suvorov, R.; Logacheva, E.; Mashikhin, A.; Remizova, A.; Ashukha, A.; Silvestrov, A.; Kong, N.; Goka, H.; Park, K.; and Lempitsky, V. 2022. Resolution-robust large mask inpainting with Fourier convolutions. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 2149–2159.

Wang, T.-C.; Liu, M.-Y.; Zhu, J.-Y.; Tao, A.; Kautz, J.; and Catanzaro, B. 2018. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE conference on computer vision and pattern recognition, 8798–8807.

Wang, X.; Wang, W.; Cao, Y.; Shen, C.; and Huang, T. 2023a. Images speak in images: A generalist painter for in-context visual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6830–6839.

Wang, X.; Zhang, X.; Cao, Y.; Wang, W.; Shen, C.; and Huang, T. 2023b. SegGPT: Segmenting everything in context. arXiv preprint arXiv:2304.03284.

Woo, S.; Park, J.; Lee, J.-Y.; and Kweon, I. S. 2018. CBAM: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV), 3–19.
Wu, X.; Li, R.-L.; Zhang, F.-L.; Liu, J.-C.; Wang, J.; Shamir, A.; and Hu, S.-M. 2019. Deep portrait image completion and extrapolation. IEEE Transactions on Image Processing, 29: 2344–2355.

Xi, T.; Sun, Y.; Yu, D.; Li, B.; Peng, N.; Zhang, G.; Zhang, X.; Wang, Z.; Chen, J.; Wang, J.; et al. 2022. UFO: Unified feature optimization. In European Conference on Computer Vision, 472–488. Springer.

Xie, S.; Zhang, Z.; Lin, Z.; Hinz, T.; and Zhang, K. 2023. Smartbrush: Text and shape guided object inpainting with diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22428–22437.

Xiong, W.; Yu, J.; Lin, Z.; Yang, J.; Lu, X.; Barnes, C.; and Luo, J. 2019. Foreground-aware image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5840–5848.

Yang, Y.; and Guo, X. 2020. Generative landmark guided face inpainting. In Pattern Recognition and Computer Vision: Third Chinese Conference, PRCV 2020, Nanjing, China, October 16–18, 2020, Proceedings, Part I 3, 14–26. Springer.

Ye, H.; and Xu, D. 2022. Inverted pyramid multi-task transformer for dense scene understanding. In European Conference on Computer Vision, 514–530. Springer.

Yi, Z.; Tang, Q.; Azizi, S.; Jang, D.; and Xu, Z. 2020. Contextual residual aggregation for ultra high-resolution image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7508–7517.

Yu, J.; Lin, Z.; Yang, J.; Shen, X.; Lu, X.; and Huang, T. S. 2018. Generative image inpainting with contextual attention. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5505–5514.

Yu, J.; Lin, Z.; Yang, J.; Shen, X.; Lu, X.; and Huang, T. S. 2019. Free-form image inpainting with gated convolution. In Proceedings of the IEEE/CVF international conference on computer vision, 4471–4480.

Zeng, Y.; Lin, Z.; Lu, H.; and Patel, V. M. 2021.
CR-Fill: Generative image inpainting with auxiliary contextual reconstruction. In Proceedings of the IEEE/CVF international conference on computer vision, 14164–14173.

Zeng, Y.; Lin, Z.; and Patel, V. M. 2022. Shape-guided object inpainting. arXiv preprint arXiv:2204.07845.

Zeng, Y.; Lin, Z.; Yang, J.; Zhang, J.; Shechtman, E.; and Lu, H. 2020. High-resolution image inpainting with iterative confidence feedback and guided upsampling. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIX 16, 1–17. Springer.

Zhao, S.; Cui, J.; Sheng, Y.; Dong, Y.; Liang, X.; Chang, E. I.; and Xu, Y. 2021a. Large scale image completion via co-modulated generative adversarial networks. arXiv preprint arXiv:2103.10428.

Zhao, Z.; Liu, W.; Xu, Y.; Chen, X.; Luo, W.; Jin, L.; Zhu, B.; Liu, T.; Zhao, B.; and Gao, S. 2021b. Prior based human completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7951–7961.

Zheng, C.; Cham, T.-J.; Cai, J.; and Phung, D. 2022a. Bridging global context interactions for high-fidelity image completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11512–11522.

Zheng, H.; Lin, Z.; Lu, J.; Cohen, S.; Shechtman, E.; Barnes, C.; Zhang, J.; Xu, N.; Amirghodsi, S.; and Luo, J. 2022b. Image inpainting with cascaded modulation GAN and object-aware training. In European Conference on Computer Vision, 277–296. Springer.

Zhou, Q.; Wang, S.; Wang, Y.; Huang, Z.; and Wang, X. 2021. Human de-occlusion: Invisible perception and recovery for humans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3691–3701.
Recognizing Ultra-High-Speed Moving Objects with Bio-Inspired Spike Camera

Junwei Zhao1,2, Shiliang Zhang1 †, Zhaofei Yu1,2 †, Tiejun Huang1,2
1National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
2Institute for Artificial Intelligence, Peking University
{jwz, slzhang.jdl, yuzf12, tjhuang}@pku.edu.cn

Abstract

The bio-inspired spike camera mimics the sampling principle of the primate fovea. It presents high temporal resolution and dynamic range, showing great promise in fast-moving object recognition. However, the physical limit of CMOS technology in spike cameras still hinders their capability of recognizing ultra-high-speed moving objects, e.g., extremely fast motions cause blur during the imaging process of spike cameras. This paper presents the first theoretical analysis of the causes of spiking motion blur and proposes a robust representation that addresses this issue through temporal-spatial context learning. The proposed method leverages multi-span feature aggregation to capture temporal cues and employs residual deformable convolution to model spatial correlation among neighbouring pixels. Additionally, this paper contributes an original real-captured spiking recognition dataset consisting of 12,000 ultra-high-speed (equivalent speed > 500 km/h) moving objects. Experimental results show that the proposed method achieves 73.2% accuracy in recognizing 10 classes of ultra-high-speed moving objects, outperforming existing spike-based recognition methods. Resources will be available at https://github.com/Evin-X/UHSR.

Introduction

Conventional frame cameras suffer from visual cue loss and severe motion blur in high-speed scenarios due to their limited frame rate (e.g., 30 fps) and single-exposure imaging principle, as shown in Fig. 1 (a).
In contrast, bio-inspired spike cameras simulate the sensing principle of retinal photosensitive cells, where each pixel perceives light independently and generates spikes asynchronously (Zheng et al. 2023c). This unique imaging principle allows spike cameras to achieve a sampling frequency 1000× higher than human vision (Huang et al. 2022), enhancing their capability of capturing high-speed moving objects, as shown in Fig. 1 (b). Previous efforts (e.g., Zhao et al. (2021b), Hu et al. (2022), Zhang et al. (2022a), Zhao et al. (2022)) have shown promising advantages of spike cameras over frame cameras in recording and recognizing high-speed objects. († denotes corresponding authors.) Current spike cameras are implemented using CMOS technology. The photo-electric conversion time of each pixel sets a limit on the maximum sampling frequency of spike signals (El-Desouki et al. 2009). If the speed of a moving object exceeds a theoretical upper threshold (e.g., 500 km/h), the spike signals will become distorted, resulting in information loss during the imaging process. Consequently, the brightness intensity maps generated from spike streams will present blur, as depicted in Fig. 1 (c). We designate this phenomenon as spiking motion blur. In the spiking vision community, research on spiking motion blur is still in its early stage. This work is hence motivated to explore the capability of spike cameras in recording ultra-high-speed motions that exceed their physical limit. Moreover, the lack of datasets has hindered research on ultra-high-speed moving object recognition. Existing spiking datasets, as shown in Table 1, mainly consist of synthetic data generated from video frames rather than real data. The speeds of moving objects in synthetic data are constrained by video frame rates, which are much lower than the sampling frequency of spike cameras.
Additionally, the majority of these datasets are designed for pixel-level vision tasks (Zheng et al. 2023b) such as depth estimation, optical flow estimation, and reconstruction, making them unsuitable for instance-level tasks. Although Zhao et al. (2023a) and Zhao et al. (2023b) introduced datasets for neuromorphic recognition, the speeds of moving objects in these datasets are lower than ultra-high speed. Therefore, a dataset featuring ultra-high-speed motions is required. We theoretically analyze the causes of motion blur in the spike camera imaging process. First, we establish the relationship between moving objects and spike signal generation. Then, we analyze the distortion conditions of spike signal sampling based on the Shannon sampling theorem and find that spiking motion blur is caused by temporal undersampling and spatial misalignment. Based on the findings of this analysis, we propose a robust representation learning method that utilizes the temporal-spatial contexts of spike streams to address spiking motion blur. We employ multi-span dilated convolution in the temporal domain to extract temporal features and perform re-weighting aggregation on the features extracted from different spans. In the spatial domain, we utilize cascaded residual deformable convolution to capture the correlation among neighbouring pixels. Finally, we integrate the temporal and spatial features through cross-attention fusion.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7478
Figure 1: Comparison of fast-motion recordings. (a) is captured by a frame camera, suffering from significant motion blur. (b)(c) are generated from spike streams recorded by a spike camera, which presents a stronger capability of capturing high-speed motions, but is still hindered by the physical limit when capturing ultra-high speeds. (Details in the Experiment Section.)
Table 1: Differences between the UHSR and other existing datasets.
Spiking Dataset | Vision Task | Reference | Sim or Real Data? | High-Level Vision Task? | Ultra-High Speed?
S-DENSE (Zhang et al. 2022a) | Depth Estimation | ECCV 2022 | Sim | × | ×
S-KITTI (Wang et al. 2022) | Depth Estimation | ICME 2022 | Sim | × | ×
RSSF (Zhao et al. 2022) | Flow Estimation | NeurIPS 2022 | Sim | × | ×
SPIFT (Hu et al. 2022) | Flow Estimation | CVPR 2022 | Sim | × | ×
Spk-Vimeo (Xiang et al. 2021) | Reconstruction | T-CSVT 2021 | Sim | × | ×
Spk-REDS (Zhao et al. 2021a) | Reconstruction | CVPR 2021 | Sim | × | ×
PKU-Recon (Zhu et al. 2020) | Reconstruction | CVPR 2020 | Real | × | ×
SpiReco (Zhao et al. 2023b) | Recognition | T-CSVT 2023 | Real | ✓ | ×
HSSR (Zhao et al. 2023a) | Recognition | ACM MM 2023 | Real | ✓ | ×
UHSR (Ours) | Recognition | AAAI 2024 | Real | ✓ | ✓
Besides the above methods, this paper contributes a spiking dataset for Ultra-High-Speed object Recognition, named the UHSR dataset. Specifically, we construct the UHSR dataset using a motion platform similar to (Zhao et al. 2023b), which allows us to capture ultra-high-speed objects in a laboratory environment. The platform provides motions with equivalent speeds exceeding 500 km/h, significantly faster than those in existing datasets. Experimental results show that our method boosts the baseline by 8.3% in accuracy when recognizing 101 classes of ultra-high-speed moving objects. Besides, our method achieves state-of-the-art performance on the high-speed SpiReco and ultra-high-speed UHSR datasets. Our contributions can be summarized as follows:
• To the best of our knowledge, we present the first theoretical analysis of the underlying causes of spiking motion blur, which reveals the physical limit of current spike cameras in recording high-speed motions.
• We propose an original method for recognizing ultra-high-speed moving objects. Our method effectively addresses the issue of spiking motion blur through temporal-spatial context learning.
• We contribute a new spiking recognition dataset featuring ultra-high-speed motions, where objects are recorded at equivalent speeds exceeding 500 km/h at a camera-object distance of 10 meters.
Related Work
Overview of Spike Camera Model
This section briefly introduces the working principle of the spike camera. Each pixel consists of a photon-receptor, an integrator, and a comparator. The photon-receptor records photons, which are accumulated by the integrator and converted into voltage. The comparator continuously compares the voltage with a threshold θ. When the voltage exceeds θ, a spike is emitted and the accumulation is reset. The spike generation process of a pixel can be expressed as,
∫_t^(t+nTr) λI(t) dt ≥ θ, (1)
where I(t) is the brightness intensity of a pixel at time t, λ denotes the photoelectric conversion rate, and Tr is the temporal resolution (e.g., 50 µs). A spike stream s is a binary array of dimension R^(T×H×W), where T is the time length, and H and W are the height and width of the sensor. More details of spike cameras can be found in (Huang et al. 2022).
Neuromorphic Recognition Methods
In recent years, Spiking Neural Networks (SNNs) have gained attention for processing neuromorphic data, with various proposed models such as (Zheng et al. 2021; Fang et al. 2021; Meng et al. 2022; Wang et al. 2023b). However, there is a lack of research evaluating the performance of SNNs on spiking data recorded by spike cameras. Currently, several efforts have been proposed for processing spike camera data. Zheng et al. (2023c) recovered brightness using spike signal intervals, while Zhao et al. (2022) introduced a spiking representation based on spike firing time differentials. Moreover, Zhao et al. (2023b) developed a denoised and motion-enhanced framework for recognizing high-speed moving objects. However, none of these approaches tackled the challenge of spiking motion blur resulting from ultra-high-speed motions.
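The integrate-and-fire principle of Eq. (1) can be sketched in a few lines. This is a toy simulation, not the sensor's implementation; the unit step size, threshold, and conversion rate below are illustrative placeholders:

```python
import numpy as np

def simulate_pixel(intensity, lam=1.0, theta=1.0, T_r=1.0):
    """Integrate-and-fire sketch of one spike-camera pixel (Eq. (1)).

    `intensity` holds I(t) sampled once per temporal-resolution step T_r.
    A spike is emitted whenever the accumulated charge reaches theta,
    after which the accumulator is reset.
    """
    spikes = np.zeros(len(intensity), dtype=np.uint8)
    voltage = 0.0
    for t, I_t in enumerate(intensity):
        voltage += lam * I_t * T_r  # accumulate photo-charge over one step
        if voltage >= theta:
            spikes[t] = 1
            voltage = 0.0           # reset after firing
    return spikes
```

With these toy parameters, a constant intensity of 0.5 fires every second step and a constant intensity of 1.0 fires every step, matching the intuition that brighter pixels emit spikes more densely.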
Unlike spike cameras, which are inspired by the foveal retina, event cameras are inspired by the peripheral retina (Gallego et al. 2020). Each pixel in an event camera operates asynchronously and generates events when the brightness change exceeds a threshold (Zheng et al. 2023a). Several event-based processing methods, including (Wang et al. 2023a; Peng et al. 2023; Sun et al. 2023), rely on precise timestamps and polarity information, which are not available in spike streams. This makes them unsuitable for processing spiking data.
Figure 2: (a) Ultra-high-speed motion results in (b) spiking motion blur. (c) illustrates the relationship between moving objects and their correlated spike streams.
Analysis of Spiking Motion Blur
This section establishes the relationship between a moving object and its corresponding spike signals to analyze the cause of spiking motion blur. Based on the spike generation principle in Eq. (1), the time ∆t required for emitting a spike can be calculated as,
∆t = nTr ≥ θ/(λĪ), (2)
where Ī is the average brightness intensity in a short time interval. The Ī perceived by each pixel can be estimated as,
Ī = θ/(λ∆ts), (3)
where ∆ts is the time interval between two adjacent spikes. As shown in Fig. 2, assume an object at distance D moves at speed v and projects an inverted image onto the sensing chip through the lens. The projected pixels keep recording the brightness independently and emit a spike once the voltage exceeds a threshold. According to the law of convex lens imaging, the relationship between the length of the object L and that of its image H can be represented as,
F/H = D/L, (4)
where F denotes the focal length. Given that, a pixel of size H0 × H0 captures the brightness of a high-speed moving area L0 × L0 (denoted as the L0-area) located at a distance D0 from the lens.
Following the Nyquist–Shannon sampling theorem, a pixel needs to emit a minimum of two spikes to ensure sufficient information sampling for the L0-area. Hence, Eq. (4) can be rewritten as,
F/H0 = D/(v · 2∆t). (5)
Substituting Eq. (2) into Eq. (5) yields,
v ≤ ηĪD/F, (η = λH0/(2θ)), (6)
where η is a constant determined by the sensor parameters λ, H0, and θ. Eq. (6) states that in a given scene with parameters Ī, D, and F, if a spike camera is to capture moving objects clearly, there exists an upper bound on the moving speed. We denote this speed upper bound as,
V* = ηĪD/F. (7)
When the object speed exceeds V*, it causes temporal undersampling of the brightness intensity, leading to spatial misalignment between the object and its spike signals. Additionally, we find that V* is influenced by scene parameters, especially the brightness intensity Ī, when D and F are fixed. A higher Ī results in a larger V*, suggesting a correlation between spike camera performance and scene brightness intensity. In summary, given a camera and a scene, if the object speed exceeds the upper bound or the scene brightness is too weak, motion blur may occur in the recorded spiking data.
Figure 3: The overall architecture of the proposed TSC method. The MTDC module and RDC module learn the temporal and spatial context of spike streams, respectively. The generated features are fused by the CAFA module.
Proposed Method
Given a spike stream s ∈ R^(T×H×W), our goal is to accurately recognize objects from s, which can be formulated as,
Ω* = argmin_Ω L(F(S, Ω), Y), (8)
where Ω represents the parameters of the recognition model F, L denotes the loss function of F, and S = {s0, s1, ..., sN} denotes the spiking dataset with labels Y. Based on the spiking motion blur analysis, we propose a learning-based spiking representation R to tackle the issue of spiking motion blur by considering both the Temporal and Spatial Context (TSC), as illustrated in Fig. 3.
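The speed bound of Eqs. (6) and (7) can be sketched as a small calculator. All numeric values in the test below are illustrative placeholders in consistent units, not actual sensor specifications:

```python
def speed_upper_bound(I_bar, D, F, lam, H0, theta):
    """Upper bound V* = eta * I_bar * D / F (Eq. (7)) on the speed a
    spike camera can record clearly, with eta = lam * H0 / (2 * theta)
    as in Eq. (6). Units must be chosen consistently by the caller."""
    eta = lam * H0 / (2.0 * theta)
    return eta * I_bar * D / F

def will_blur(v, I_bar, D, F, lam, H0, theta):
    """An object moving faster than V* undersamples the scene in time,
    so its spike stream blurs (spiking motion blur)."""
    return v > speed_upper_bound(I_bar, D, F, lam, H0, theta)
```

Note how the bound scales linearly with the scene brightness Ī: halving the illumination halves V*, which mirrors the paper's observation that dimmer scenes blur at lower speeds.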
Therefore, the model F can be represented as F = B(R(·)), where B denotes the backbone, such as ResNet (He et al. 2016) or VGGNet (Simonyan and Zisserman 2014). In the temporal domain, considering the continuity of the motion process, errors in the recorded brightness intensity at time ti can be compensated by incorporating brightness information from neighbouring time steps. However, determining an appropriate length for the neighbouring time window is challenging due to the various motion speeds of objects. To address this, we introduce the Multi-span Temporal-Dilated Convolution (MTDC) module, which extracts temporal features at multiple scales and performs learnable re-weighted fusion of these features.
Figure 4: Structure of the MTDC module.
In the spatial domain, the high speed of moving objects leads to spatial misalignment of spikes, causing the brightness information of moving objects to be encoded in a neighbourhood of pixels rather than a single pixel. Determining the appropriate spatial neighbourhood is difficult due to diverse motion patterns. To tackle this, we introduce a spatial feature extractor based on Residual Deformable Convolution (RDC) blocks, and employ a cascaded architecture to enhance the feature extraction capability of this module. To model the feature-level correlation between the temporal and spatial domains, we employ a Cross-Attention Feature Aggregation (CAFA) module to integrate temporal-spatial features, generating a robust spiking representation that improves recognition accuracy. The structures of the MTDC, RDC, and CAFA modules are detailed in the following sub-sections. We optimize the weights of the representation learning modules in an end-to-end manner, jointly training the backbone and representation modules using cross-entropy loss.
Multi-span Temporal-Dilated Convolution
We estimate brightness intensity from spike streams at each time step t using Eq. (3).
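A simplified per-pixel version of this estimate, averaging the inter-spike intervals of one pixel rather than producing a value at every time step, might look like the following sketch (the zero fallback for pixels with fewer than two spikes is an assumption for illustration):

```python
import numpy as np

def isi_intensity(spikes, theta_over_lam=1.0):
    """Estimate per-pixel brightness from a 1-D binary spike train via
    Eq. (3): I_bar = theta / (lam * dt_s), where dt_s is the interval
    between adjacent spikes (averaged here for simplicity)."""
    t = np.flatnonzero(spikes)      # indices of emitted spikes
    if len(t) < 2:
        return 0.0                  # no interval observable: toy fallback
    dt_s = np.diff(t).mean()        # mean inter-spike interval
    return theta_over_lam / dt_s
```

With θ/λ = 1, a pixel spiking every two steps is estimated at intensity 0.5, consistent with the integrate-and-fire relation of Eq. (1).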
The intensity maps Is are fed into multiple dilated convolution blocks, shown in Fig. 4, where each block conducts dilated convolutions with a different dilation rate d to capture features at multiple temporal scales. Notably, the dilation is only applied along the temporal axis. In contrast to 3D convolution kernels, our algorithm mitigates information overlap in the temporal domain and reduces the parameter size, which facilitates network training. The generated feature map G is calculated as,
Gd(t, h, w) = Σ_τ Σ_(i,j) κd(ε̂+τ, ε̂+i, ε̂+j) ⊙ Is(t+dτ, h+i, w+j), (9)
where κd ∈ R^(ε×ε×ε) are the weights of a dilated convolution kernel with dilation rate d, ε̂ = ⌊ε/2⌋, −ε̂ ≤ τ, i, j ≤ ε̂, and t = 1, 2, ..., T, with T the time length of an input spike stream. Experimental results illustrated in Fig. 9 show that the temporal correlation in spike streams weakens as the temporal span increases. Therefore, the feature maps Gd, corresponding to different dilation rates d, are combined using element-wise multiplication ⊙ with learnable masks Md. This enables the aggregation of features across various temporal spans with adaptive weightings. In this way, the temporal feature map T can be obtained by,
T = fa(Σ_d Md ⊙ Gd), (10)
where fa(·) represents the operation pipeline consisting of 3×3 convolution, Batch Normalization (BN), and ReLU.
Figure 5: Structure of the RDC module. (a) RDC module. (b) RDC block. (c) Deformable convolution.
Residual Deformable Convolution
The RDC module takes the brightness intensity maps Is as input and extracts spatial context from neighbouring pixels. The input Is is passed through a 3×3 convolutional layer and then processed by several cascaded RDC blocks, as illustrated in Fig. 5 (a). The cascaded structure is adopted to enhance the capability of local feature extraction. Each RDC block, as shown in Fig. 5 (b), introduces deformable convolution (Zhu et al. 2019) to achieve a flexible perception range for modelling various motion cues.
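Returning to the temporal branch, the multi-span dilated convolution and mask-weighted aggregation of Eqs. (9) and (10) can be illustrated on a single pixel's 1-D intensity sequence. This is a deliberately reduced sketch: the masks Md are plain scalars here rather than learned per-position weights, and the final fa pipeline (convolution, BN, ReLU) is omitted:

```python
import numpy as np

def mtdc_1d(x, kernels, masks):
    """Toy multi-span temporal-dilated convolution in the spirit of
    Eqs. (9)-(10). `x` is a 1-D intensity sequence, `kernels` maps a
    dilation rate d to a 1-D kernel, and `masks` maps d to a scalar
    weight (stand-in for the learnable mask M_d). Borders are
    zero-padded implicitly by skipping out-of-range offsets."""
    T = len(x)
    out = np.zeros(T)
    for d, k in kernels.items():
        G_d = np.zeros(T)
        half = len(k) // 2
        for t in range(T):
            for j, w in enumerate(k):
                tau = (j - half) * d             # dilated temporal offset
                if 0 <= t + tau < T:
                    G_d[t] += w * x[t + tau]     # Eq. (9), temporal axis only
        out += masks[d] * G_d                    # mask-weighted fusion, Eq. (10)
    return out
```

An identity kernel [0, 1, 0] with mask 1.0 reproduces the input, which is a convenient sanity check before experimenting with wider spans and dilation rates.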
For a deformable convolution kernel with K sampling locations, as depicted in Fig. 5 (c), the weight and offset for the k-th location are denoted as ωk and zk, respectively. The output feature map U at each position z = (h, w) can be computed as,
U(t, z) = Σ_k ωk · Is(t, z + zk + ∆zk) · ∆mk, (11)
where ∆zk and ∆mk are the learnable offset and modulation scalar for the k-th location, respectively. Accordingly, the feature map V produced by the i-th RDC block can be obtained by,
Vi = ReLU(Vi−1 ⊕ fb(Ui)), (12)
where fb(·) represents the operation pipeline consisting of BN, ReLU, 3×3 convolution, and another BN. The feature map Vi−1 generated by the previous RDC block is added (⊕) via a residual connection.
Cross-Attention Feature Aggregation
We employ cross-attention to model the relation between the temporal and spatial domains. As shown in Fig. 3, the robust spiking representation R is generated by aggregating the temporal features T and spatial features V. The computation process can be formulated as,
R = Softmax(fc(T) ⊗ fc(V)^T) ⊗ fc(T) ⊕ fd(V), (13)
where fc(·) represents the operation pipeline consisting of 3×3 convolution, BN, and 1×1 convolution, and fd(·) represents the operation pipeline consisting of 3×3 convolution and BN.
Figure 6: (a) High-speed motion platform for constructing the UHSR. (b)-(c) Visualization of spikes in 3D and 2D.
Figure 7: Visualization of the brightness intensity maps under each experimental setting recorded in Table 2.
Experiment
Dataset
(i) The SpiReco (Zhao et al. 2023b) is a collection of high-speed moving object recognition datasets captured using a spike camera. It consists of various motion patterns with different speeds. The sub-datasets of SpiReco include S-CIFAR (S-CIF) with 10 classes of 10,000 samples and S-CALTECH (S-CAL) with 101 classes of 8,710 samples. The test sets for each sub-dataset contain 1,500 samples.
(ii) The UHSR proposed in this study is a pioneering dataset for ultra-high-speed spiking recognition, as there is currently no dedicated dataset in this field. The related dataset, i.e., SpiReco, only covers motion speeds lower than 500 km/h, making it challenging to evaluate methods for ultra-high-speed motions. To collect UHSR, we employ a data collection platform similar to that of (Zhao et al. 2023b), as shown in Fig. 6 (a), which simulates objects moving at speeds equivalent to 500∼700 km/h. The speed calculation follows the approach described in (Zhao et al. 2023b). We randomly select 6,000 images from CIFAR-10 and 6,000 images from CALTECH-101 for annotation, creating the Ultra-high-speed CIFAR (U-CIF) and Ultra-high-speed CALTECH (U-CAL) datasets, respectively. U-CIF contains 10 classes, and U-CAL consists of 101 classes. Training and testing sets are split at a ratio of 5:1. UHSR facilitates the evaluation of methods designed for ultra-high-speed spiking recognition.
Table 2: Experimental settings and evaluation results.
No. | Scene | Ī (lx) | v (km/h) | TDE ↑ | BIQI ↓
1-1 | Plane | ∼5000 | 7.9 (0.6V*) | 11.26 | 52.01
1-2 | Plane | ∼5000 | 13.0 (1.0V*) | 11.19 | 53.94
1-3 | Plane | ∼5000 | 18.0 (1.4V*) | 10.53 | 67.40
1-4 | Plane | ∼2000 | 3.3 (0.6V*) | 10.34 | 59.96
1-5 | Plane | ∼2000 | 5.4 (1.0V*) | 10.31 | 61.70
1-6 | Plane | ∼2000 | 7.6 (1.4V*) | 9.84 | 76.35
2-1 | Car | ∼5000 | 7.9 (0.6V*) | 11.14 | 56.33
2-2 | Car | ∼5000 | 13.0 (1.0V*) | 11.09 | 58.82
2-3 | Car | ∼5000 | 18.0 (1.4V*) | 10.23 | 74.36
2-4 | Car | ∼2000 | 3.3 (0.6V*) | 10.17 | 61.47
2-5 | Car | ∼2000 | 5.4 (1.0V*) | 10.03 | 63.95
2-6 | Car | ∼2000 | 7.6 (1.4V*) | 9.35 | 79.84
Figure 8: Evaluation results of spiking motion blur.
Implementation
We adopt the ImageNet pre-trained ResNet-18 as the backbone. Model parameters are trained using the SGD optimizer without data augmentation. The initial learning rate is set to 2e-4 with the LambdaLR scheduler. The training process runs for 30 epochs for each dataset with a batch size of 16. Input samples are downsampled to 124 × 124 using average pooling. θ/λ is set to 1, constraining the values of the brightness intensity maps within [0, 1].
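The input preparation described above (average-pool downsampling to 124 × 124, with intensity values bounded in [0, 1]) can be sketched as follows. The divisibility assumption and the explicit clamp are simplifications for illustration:

```python
import numpy as np

def preprocess(intensity_map, out_hw=(124, 124)):
    """Average-pool a 2-D brightness intensity map down to `out_hw`
    and clamp values to [0, 1] (with theta/lam = 1, the ISI estimate
    already lies in this range; the clip guards against noise spikes).
    Assumes each input side is an integer multiple of the output side."""
    H, W = intensity_map.shape
    h, w = out_hw
    fh, fw = H // h, W // w
    # Block-average pooling via reshape: (h, fh, w, fw) -> mean over blocks.
    pooled = intensity_map.reshape(h, fh, w, fw).mean(axis=(1, 3))
    return np.clip(pooled, 0.0, 1.0)
```

The reshape trick avoids an explicit pooling loop: each (fh × fw) block collapses to its mean in one vectorized call.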
The framework is implemented in PyTorch and trained on NVIDIA RTX 4090 GPUs.
Validation of Spiking Motion Blur Analysis
We experimentally validate the analysis of spiking motion blur. High-speed motions are provided by the motion platform shown in Fig. 6 (a), driving the monitor at speeds ranging from 0 to 18 km/h. The camera-monitor distance is D = 0.35 m, and the focal length is F = 8 mm. The constant η ≈ 1.67e-5 in Eq. (7) is calculated based on the spiking sensor parameters from (Huang et al. 2022). Experiments are conducted under two luminance conditions with two moving objects, i.e., an airplane and a racing car. The upper bound of moving speed V* is calculated for each setup according to Eq. (7). We set the monitor motion speeds from 0.6V* to 1.4V* and record the spike streams. Brightness intensity maps are generated using Eq. (3), as depicted in Fig. 7. This approach estimates brightness intensity without optimization or reference information, and the quality of the reconstructed maps reflects the quality of the spike streams, assuming the impact of spike noise remains stable.
Figure 9: Spatial and temporal correlation analysis of spiking data in the UHSR and SpiReco datasets.
We quantitatively evaluate the reconstructed maps using Two-Dimensional Entropy (TDE) (Xi, Guosui, and Ni 1999) and the Blind Image Quality Index (BIQI) (Moorthy and Bovik 2009) as metrics. TDE indicates better image quality with higher values, while BIQI indicates better image quality with lower values. The evaluation results shown in Fig. 8 and summarized in Table 2 indicate that brightness intensity maps generated from scenes with v ≤ V* exhibit higher quality than those from scenes with v > V*, and the quality decreases with increasing v, because spike signals become distorted when the motion speed exceeds V*.
Additionally, scenes with lower luminance, such as 2000 lx, manifest more prominent motion blur, as the spike camera is sensitive to scene brightness. Fig. 7 visualizes the reconstructed maps, illustrating that scenes with motion speed v > V* present motion blur. These experimental results validate the effectiveness of the spiking motion blur analysis.
Ablation Study of Proposed Method
We experimentally investigate the temporal-spatial correlation of spike streams by randomly selecting 300 samples from UHSR and SpiReco, respectively. For each sample, we compute the similarity of inter-spike intervals within local pixel regions in the spatial domain, and within a certain time window for the same pixel in the temporal domain.
Table 3: Recognition accuracy of ablation experiments on MTDC (M1) and RDC (M2) modules.
Model | M1 | M2 | S-CIF | S-CAL | U-CIF | U-CAL
Baseline | – | – | 55.1% | 69.3% | 65.7% | 59.2%
Ours (a) | ✓ | – | 57.6% | 71.7% | 68.3% | 63.4%
Ours (b) | – | ✓ | 58.9% | 73.6% | 70.5% | 65.1%
Ours (c) | ✓ | ✓ | 60.8% | 74.6% | 73.2% | 67.5%
Figure 10: Visualization of ablation experimental results on the UHSR dataset.
Results are shown in Fig. 9, where each sub-figure displays the original data, the original data curve, and the curve fitted using the Laplace distribution. The Probability Density Function (PDF) of the Laplace distribution can be expressed as,
f(x|µ, σ) = 1/(2σ) · exp(−|x−µ|/σ), (14)
where x denotes the difference in inter-spike intervals, µ is the mean, and σ is the scale parameter. When σ is small, the PDF presents a steeper peak, leading to a more concentrated distribution and a smaller deviation of variables around µ. Experimental results in Fig. 9 exhibit significant correlations in spike interval distributions within certain spatial (e.g., r ≤ 3) and temporal (e.g., t ≤ 3) ranges, with r indicating the spatial region radius and t denoting the temporal span. Specifically, the correlation decreases with increasing r and t. For instance, as shown in Fig.
9 (a), when r = 1, σ = 4.9, and when r = 3, σ = 5.4. Similarly, in Fig. 9 (c), when t = 1, σ = 2.0, and when t = 3, σ = 2.3. In our model design, the RDC module employs a 3-level cascaded structure, and the MTDC module adopts a dilation rate of d = 3. Moreover, the results also demonstrate that the spatial and temporal correlation on U-CAL is weaker than on S-CAL, due to objects in U-CAL moving at higher speeds. We then validate the effectiveness of the MTDC and RDC modules, and summarize the results in Table 3. Experimental setups are described in the Implementation Section. The baseline model is ResNet-18, and input streams are uniformly 3 ms. Fig. 10 visualizes the ablation experimental results on the UHSR dataset, averaged over 3 runs. The results demonstrate that both the MTDC and RDC modules effectively improve recognition accuracy. Notably, our method achieves a more substantial performance improvement on the UHSR, with an 8.3% increase on U-CAL and a 5.3% increase on S-CAL compared to the baseline. These results demonstrate that temporal-spatial context learning effectively improves the accuracy of recognizing ultra-high-speed objects.
Table 4: Comparison with SoTA methods on SpiReco (high-speed) and UHSR (ultra-high-speed) datasets.
Type | Method | Reference | Backbone Structure | S-CIF | S-CAL | U-CIF | U-CAL
SNN | TDBN (Zheng et al. 2021) | AAAI'21 | LIF Res-19 | 52.8% | 62.2% | 60.3% | 53.1%
SNN | SEWR (Fang et al. 2021) | NeurIPS'21 | SEW Res-18 | 53.6% | 63.7% | 61.5% | 53.9%
SNN | DSR (Meng et al. 2022) | CVPR'22 | LIF Res-18 | 53.2% | 64.1% | 61.2% | 54.5%
Event | EtoF (Ahmad et al. 2022) | WACV'22 | ResNet-18 | 53.9% | 66.3% | 61.8% | 55.7%
Event | BEI (Cohen et al. 2018) | T-NNLS'18 | ResNet-18 | 50.7% | 61.8% | 56.6% | 50.3%
Event | SBNE (Zhang et al. 2022b) | CVPR'22 | ResNet-18 | 54.2% | 67.4% | 62.3% | 56.5%
Spike | TSR (Zhao et al. 2022) | NeurIPS'22 | ResNet-18 | 56.3% | 70.4% | 65.9% | 60.4%
Spike | TFP (Zheng et al. 2023c) | T-PAMI'23 | ResNet-18 | 55.6% | 69.8% | 63.1% | 56.7%
Spike | ISI (Zheng et al. 2023c) | T-PAMI'23 | ResNet-18 | 55.1% | 69.3% | 65.7% | 59.2%
Spike | DMER (Zhao et al. 2023b) | T-CSVT'23 | ResNet-18 | 57.9% | 71.2% | 67.8% | 62.4%
Spike | TSC (Ours) | AAAI'24 | ResNet-18 | 60.8% | 74.6% | 73.2% | 67.5%
Table 5: Evaluation of the robustness of different methods.
Spike Methods | U-CIF 1 ms | U-CIF 3 ms | U-CIF 5 ms | U-CAL 1 ms | U-CAL 3 ms | U-CAL 5 ms
TSR | 51.9% | 65.9% | 69.3% | 43.2% | 60.4% | 64.5%
TFP | 40.5% | 63.1% | 64.2% | 30.4% | 56.7% | 61.9%
ISI | 29.2% | 65.7% | 68.7% | 17.6% | 59.2% | 63.3%
DMER | 53.7% | 67.8% | 70.5% | 46.3% | 62.4% | 64.7%
Ours | 62.4% | 73.2% | 73.9% | 51.6% | 67.5% | 68.4%
Comparison with State-of-the-Art Methods
As summarized in Table 4, we compare with SoTA spike-based methods including TSR, TFP, ISI, and DMER. We also compare with SNNs and event-based methods. The SNNs include TDBN, SEWR, and DSR. For SNNs, spike streams are processed as sequential inputs with T time steps. Many event-based methods require precise timestamps for processing (Kim et al. 2021), which are not available in spiking data. Hence, we compare with those methods not requiring precise timestamps, including EtoF, BEI, and SBNE. To ensure a fair comparison, the input spike streams have a uniform time length of 3 ms and a spatial size of 124×124. Training is done for 30 epochs using ResNet-18 as the backbone for each method. Learning rates and optimizers follow the recommendations from the official code of each method. Training SNNs on spiking data poses optimization challenges, limiting their performance. Additionally, spike and event streams have different data representations, which may degrade the performance of event-based methods on spiking datasets even under an identical experimental setup. Compared to SoTA spike-based processing methods such as TSR, TFP, ISI, and DMER, our method achieves substantial improvements in recognition accuracy, with a 5.4% and 5.1% increase on UHSR, and a 3.4% and 2.9% increase on SpiReco, respectively.
These results demonstrate the effectiveness of our proposed method in addressing spiking motion blur, leading to enhanced recognition accuracy for ultra-high-speed moving objects. We conduct robustness testing on spike-based methods to evaluate their performance under different input lengths of spike streams, as the length of captured data varies due to diverse motion speeds and limited camera view angles in real scenarios. We compare with the other methods using 1 ms, 3 ms, and 5 ms spike streams as input, while keeping the other experimental settings consistent with those in Table 4. The results in Table 5 demonstrate that our method consistently outperforms the other methods, showcasing its robustness.
Figure 11: Visual inspection of TSR, DMER, and Ours using Grad-CAM (Selvaraju et al. 2017). Best viewed in color.
We further apply Grad-CAM to spiking feature maps to show the regions of interest of the models. We compare with learning-based spike processing methods, i.e., TSR and DMER. The visualization in Fig. 11 shows that our method pays more attention to distinctive regions of moving objects compared with the other methods. Additionally, the results also show that models trained on UHSR focus on the features of the moving objects rather than the moving screen.
Conclusion
In this paper, we present the first theoretical analysis of spiking motion blur caused by ultra-high-speed motions, and validate the analysis through extensive experiments. Based on the analysis, we propose a robust spiking representation that learns the temporal-spatial context of spike streams, effectively improving recognition accuracy for ultra-high-speed objects. Experimental results demonstrate the superior accuracy and enhanced robustness of our method. Additionally, we construct an original spiking dataset for ultra-high-speed object recognition to facilitate further research.
Acknowledgments
This work is supported in part by the National Natural Science Foundation of China under Grant No. 62088102, U20B2052, 61936011, and in part by the Okawa Foundation Research Award.
References
Ahmad, S.; Scarpellini, G.; Morerio, P.; and Del Bue, A. 2022. Event-driven Re-Id: A New Benchmark and Method Towards Privacy-Preserving Person Re-Identification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 459–468. Cohen, G.; Afshar, S.; Orchard, G.; Tapson, J.; Benosman, R.; and van Schaik, A. 2018. Spatial and temporal downsampling in event-based visual classification. IEEE Transactions on Neural Networks and Learning Systems, 29(10): 5030–5044. El-Desouki, M.; Jamal Deen, M.; Fang, Q.; Liu, L.; Tse, F.; and Armstrong, D. 2009. CMOS image sensors for high speed applications. Sensors, 9(1): 430–444. Fang, W.; Yu, Z.; Chen, Y.; Huang, T.; Masquelier, T.; and Tian, Y. 2021. Deep residual learning in spiking neural networks. In Advances in Neural Information Processing Systems, volume 34, 21056–21069. Gallego, G.; Delbrück, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A. J.; Conradt, J.; Daniilidis, K.; et al. 2020. Event-based vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1): 154–180. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778. Hu, L.; Zhao, R.; Ding, Z.; Ma, L.; Shi, B.; Xiong, R.; and Huang, T. 2022. Optical Flow Estimation for Spiking Camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17844–17853. Huang, T.; Zheng, Y.; Yu, Z.; Chen, R.; Li, Y.; Xiong, R.; Ma, L.; Zhao, J.; Dong, S.; Zhu, L.; et al. 2022. 1000× faster camera and machine vision with ordinary devices. Engineering.
Rethinking Two-Stage Referring Expression Comprehension: A Novel Grounding and Segmentation Method Modulated by Point

Peizhi Zhao1, Shiyi Zheng1, Wenye Zhao1, Dongsheng Xu1, Pijian Li1, Yi Cai3,4, Qingbao Huang1,2*
1School of Electrical Engineering, Guangxi University, Nanning, China
2Guangxi Key Laboratory of Multimedia Communications and Network Technology
3School of Software Engineering, South China University of Technology, Guangzhou, China
4Key Laboratory of Big Data and Intelligent Robot (SCUT), MOE of China
2212391086@st.gxu.edu.cn, qbhuang@gxu.edu.cn

Abstract

As a fundamental and challenging task in the vision and language domain, Referring Expression Comprehension (REC) has shown impressive improvements recently. However, for a complex task that couples the comprehension of abstract concepts with the localization of concrete instances, one-stage approaches are bottlenecked by computing and data resources. To obtain a low-cost solution, the prevailing two-stage approaches decouple REC into localization (region proposal) and comprehension (region-expression matching) at region-level, but a solution based on isolated regions cannot sufficiently utilize the context and is usually limited by the quality of the proposals. Therefore, it is necessary to rebuild an efficient two-stage solution. In this paper, we propose a point-based two-stage framework for REC, in which the two stages are redefined as point-based cross-modal comprehension and point-based instance localization. Specifically, we reconstruct the raw bounding box and segmentation mask into center and mass scores as soft ground-truth for measuring point-level cross-modal correlations. With the soft ground-truth, REC can be approximated as a binary classification problem, which fundamentally avoids the impact of isolated regions on the optimization process.
Remarkably, the consistent metrics between the center and mass scores allow our system to directly optimize grounding and segmentation with the same architecture. Experiments on multiple benchmarks show the feasibility and potential of our point-based paradigm. Our code is available at https://github.com/VILANLab/PBREC-MT.

Introduction

Referring Expression Comprehension (REC) aims to predict a referred target in an image according to a corresponding expression, which can be regarded as a coupling of the comprehension of visual and linguistic abstract concepts with the localization of a concrete visual instance. Depending on the prediction paradigm, the comprehension of a referring expression has two manifestations: 1) Referring Expression Grounding (REG) (Yu et al. 2016; Mao et al. 2016), where the referred instance is localized by a bounding box; and 2) Referring Expression Segmentation (RES) (Hu, Rohrbach, and Darrell 2016; Liu et al. 2017), which separates the foreground and background of the image based on the referring expression.

*Corresponding author: Qingbao Huang. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

As a fundamental cross-modal task, REC focuses on mining fine-grained visual and linguistic information, which facilitates numerous downstream studies, such as autonomous driving (Kim et al. 2019), image captioning (Chen et al. 2020), and visual question answering (Wang et al. 2020b). Depending on the solution process, existing REC methods can be broadly divided into one-stage and two-stage frameworks, as shown in Fig. 1. The one-stage approaches (Sun, Xiao, and Lim 2021; Deng et al. 2021) treat REC as object detection with online classification, i.e., using referring expressions to define categories instead of a predefined set of categories. By extending object detectors, the conventional one-stage approaches (Sun, Xiao, and Lim 2021; Huang et al.
2021) utilize multi-head networks to model the comprehension (cross-modal confidence) and localization (instance detection) processes, cf. Fig. 1 (a). Inspired by DETR (Carion et al. 2020), transformer-based approaches (Deng et al. 2021) have recently received widespread attention as a flexible and effective framework. Leveraging the attention mechanism, these methods achieve deep cross-modal alignment and query-based localization, cf. Fig. 1 (b). One-stage methods, regardless of grounding or segmentation, are implicitly coupled multi-objective processes, i.e., the ability to comprehend abstract concepts is measured indirectly by performance in the physical visual space (bounding box or segmentation mask). Although the one-stage framework achieves significant improvement by sufficiently exploiting the visual and linguistic context, these methods are limited by computation and data resources due to their complex optimization. Two-stage approaches (Yu et al. 2018; Chen et al. 2021) attempt to build a matching and ranking process, which is a more natural scheme. As shown in Fig. 1 (c), conventional two-stage approaches usually merge the results of a pre-trained detector and a cross-modal matching module at region-level, and search for the most relevant region proposal via a ranking process. Unfortunately, the conventional framework suffers from two inherent defects: 1) Sparse region proposals destroy the complete spatial context.
2) The ground-truth used during training and the proposals predicted by the detector form a gap, which leads to a sub-optimal generalization of the model.

Figure 1: A comparison of (a) the conventional one-stage framework, where Conf. means a confidence branch, (b) the transformer-based one-stage framework, (c) the conventional two-stage framework, and (d) our proposed point-based two-stage framework. Our method can leverage whole context information and merge the comprehension and localization processes in the feature space.

In addition, the previous two-stage segmentation methods are a compromise implementation: the indirect solution for segmentation via the bounding box limits their performance. In this paper, we propose a point-based two-stage framework for REC to address the aforementioned problems. As shown in Fig. 1 (d), instead of using a search space composed of loose and unordered regions, we propose to apply a regular set of points to support the ranking process. Specifically, relying on cross-modal fusion representations and point-based detectors (Tian et al. 2019), we reformulate the comprehension and localization processes of REC.
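To make the regular point search space concrete, here is a short sketch (our own illustration, not code from the paper; the grid-center coordinate convention and the helper name are assumptions) of how such a point set can be built:

```python
# Build the regular point set that replaces sparse region proposals.
# Assumption: each feature point stands for a g x g image grid and we take
# its image-space coordinate at the grid center (this convention is ours).

def make_point_grid(H, W, g):
    """Return (k, x, y) triples with flat index k = j * (W // g) + i
    for grid column i and row j."""
    n_cols, n_rows = W // g, H // g
    points = []
    for j in range(n_rows):          # row index (y direction)
        for i in range(n_cols):      # column index (x direction)
            k = j * n_cols + i       # flat point index
            x, y = i * g + g / 2, j * g + g / 2  # grid center
            points.append((k, x, y))
    return points
```

For a 640 × 640 input with g = 32 (the setting used in the implementation details), this yields a 20 × 20 grid of 400 candidate points.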
To measure the correlation between visual points and the referring expression, we construct soft ground-truth, i.e., center-ness and mass-ness matrices, based on the bounding boxes and segmentation masks. Then we establish a shape-independent classification process as the comprehension stage, which is an end-to-end trainable module optimized by the soft ground-truth. To convert the points from the comprehension stage into bounding boxes or segmentation masks, we introduce an IoU-based non-maximum suppression, which enables concise and efficient post-processing of the predictions from the detector. Importantly, the shape-independent comprehension allows consistent modeling of the grounding and segmentation tasks, so our framework supports multi-task learning without attaching any additional head network. Our contributions can be summarized as follows:

1) We propose a point-based two-stage framework for REC. By approximating REC as a binary classification task, our framework can leverage the complete visual and linguistic context at a lower training cost.

2) We introduce soft ground-truth as the optimization objective of the cross-modal comprehension. Relying on the consistency of the soft ground-truth between grounding and segmentation, our framework can naturally support multi-task learning, i.e., REG and RES.

3) Extensive experimental results on widely used benchmarks demonstrate the feasibility of the point-based paradigm. Our framework yields significant improvements over conventional two-stage methods on both referring expression grounding and segmentation.

Related Work

Referring expression comprehension (REC) was originally described as retrieving the visual instance referred to by a sentence from a set of region annotations. Thus, early works (Yu et al. 2016) usually formulate the task as a ranking problem. Two-stage inference frameworks replace the high-quality ground-truth for ranking with the region proposals of pre-trained object detectors, e.g., Faster R-CNN (Ren et al.
2015), to realize an automatic localization. However, most of the two-stage methods are motivated to reconstruct the context between regions, because sparse proposals destroy the visual information. Module-based methods (Yu et al. 2018; Hu et al. 2017) decompose the alignment of multi-modal representations into several components. Yu et al. implicitly model the subject, relationship, and location by introducing different heuristic priors in different modules. Considering the multi-hop relationships between visual and linguistic instances, graph-based methods (Wang et al. 2019; Yang, Li, and Yu 2020; Sun et al. 2023) propose to construct regions and expressions as a scene graph or tree, which allows cross-modal representations to be aligned under the same structure. Two-stage frameworks reduce the training cost of REC by decoupling the tasks. However, the incomplete visual semantics, especially the discarded spatial context, hinder the alignment and fusion of multi-modal representations. Furthermore, the predicted proposals have a region shift compared to the high-quality annotations. This data discrepancy makes the ranking model struggle to generalize. One-stage methods (Sadhu, Chen, and Nevatia 2019; Hu, Rohrbach, and Darrell 2016) recommend using an end-to-end process to solve REC, i.e., directly predicting the referred instance from the entire image and expression, which eliminates the noise introduced by the region proposals into the reasoning system.

Figure 2: The overall framework of our model. It consists of two stages in inference: (1) a trainable cross-modal comprehension stage, which is optimized by the center-ness or mass-ness score, and (2) a frozen vision localization stage.

The conventional one-stage methods (Liao et al. 2020; Jing et al. 2021) generally implement grounding or segmentation by attaching a natural language component to YOLOv3 (Redmon and Farhadi 2018) or FCN (Long, Shelhamer, and Darrell 2015), respectively. For these cross-domain extended inference systems, effective fine-grained modeling is the key to localization. Yang et al. establish an iterative reasoning process for complex long expressions via sub-queries. With the efficient cross-modal representation ability of the attention mechanism, Deng et al. proposed a transformer-based solution. Most recent works (Yang et al. 2022; Du et al. 2022; Zhu et al. 2022; Ye et al. 2022) have followed this new paradigm. Despite the significant performance improvement, these methods suffer from a long optimization process, usually requiring around 100 epochs. Therefore, the cost of data and computation is one of the most obvious limitations of transformer-based methods.

Approach

We present a point-based two-stage method for REC, a unified framework for the grounding and segmentation tasks based on cross-modal comprehension and vision detection. As shown in Fig. 2, given an image $I \in \mathbb{R}^{H \times W \times 3}$ and a referring expression $Q \in \mathbb{R}^{L}$, the task of REG is to predict the bounding box $\hat{b} \in \mathbb{R}^{4}$ of the referred instance, and the task of RES is to predict the segmentation mask $\hat{s} \in \mathbb{R}^{H \times W}$.

Problem Reformulation

The central idea of our point-based two-stage framework is to reformulate REC as an approximate binary classification problem. In conventional two-stage methods, anchor-based detectors, e.g., Faster R-CNN (Ren et al. 2015), cause a series of critical defects. The survey by Chen et al. (2021) shows that the recall of the region proposals obtained by the prevailing methods in inference is only 80.77%.
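As an aside, the recall statistic above can be made concrete with a hypothetical sketch (our illustration; we assume the common criterion that a proposal set "contains a correct prediction" when some proposal has IoU ≥ 0.5 with the ground-truth box — the exact protocol of Chen et al. (2021) may differ):

```python
# Hedged sketch: recall of a proposal set, with boxes as
# (x_left, y_top, x_right, y_bottom). The IoU >= 0.5 hit criterion is an
# assumption borrowed from the standard REC accuracy metric.

def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # horizontal overlap
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # vertical overlap
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def proposal_recall(gt_boxes, proposals_per_image, thr=0.5):
    """Fraction of ground-truth boxes hit by at least one proposal."""
    hits = sum(any(iou(gt, p) >= thr for p in props)
               for gt, props in zip(gt_boxes, proposals_per_image))
    return hits / len(gt_boxes)
```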
In addition, previous methods usually crudely combine localization and comprehension at region-level, which makes it difficult for the comprehension module to generalize from the sub-optimal predictions. Inspired by FCOS (Tian et al. 2019), an object detector that projects points to bounding boxes in a one-to-one manner, we propose direct point-level metrics of referring expression comprehension as soft ground-truth, i.e., expression-aware center-ness and mass-ness scores.

Expression-aware center-ness matrix is constructed from the ground-truth bounding box $b = (x_l, y_t, x_r, y_b)$, where $(x_l, y_t)$ and $(x_r, y_b)$ are the coordinates of the left-top and right-bottom corners. Concretely, we denote the feature maps extracted from the input image by the visual backbone as $F_v \in \mathbb{R}^{\frac{H}{g} \times \frac{W}{g} \times C}$, where $g$ is the width of a visual grid. The $k$th feature point in $F_v$ represents the visual information collected from the grid $(i, j)$, where $k = j \cdot \frac{W}{g} + i$. Similar to semantic segmentation, we assign a category to each point to indicate whether it is related to the referred instance. Since the bounding box belongs to low-precision localization, we define the points falling in the central area of the target box as positive samples to prevent noise from the irrelevant periphery. The central area is defined as a square box centered at $(\frac{x_l + x_r}{2}, \frac{y_t + y_b}{2})$ with width $3g$. According to the coordinates $(x_k, y_k)$ of the $k$th point projected on the raw image, the relative positional relationship between a positive point and the target bounding box can be defined as:

$$l = x_k - x_l, \quad r = x_r - x_k, \quad t = y_k - y_t, \quad b = y_b - y_k, \tag{1}$$

where $(l, r, t, b)$ are the left, right, top, and bottom distances from the point to the ground-truth bounding box. We compute the center-ness score by:

$$c_k = \sqrt{\frac{\min(l, r)}{\max(l, r)} \cdot \frac{\min(t, b)}{\max(t, b)}}. \tag{2}$$

The center-ness score measures how much the point deviates from the center of the target box, which allows the model to focus more on the grids near the center.

Expression-aware mass-ness matrix is roughly analogous to center-ness, but is converted from the ground-truth segmentation mask $s \in \mathbb{R}^{H \times W}$, a boolean matrix used to segment the referred instance. Similarly, we define a score for each point on the feature map to measure its importance for prediction. As the segmentation mask is the best approximation of the instance shape at pixel-level, we assign positive samples to wider regions. Concretely, we first compute the centroid $(x_m, y_m)$ of the target from the mask:

$$x_m = \frac{1}{N}\sum_{i=1}^{N} x_i, \quad y_m = \frac{1}{N}\sum_{i=1}^{N} y_i, \tag{3}$$

where $N$ is the total number of foreground pixels in the segmentation mask, and $(x_i, y_i)$ is the coordinate of a pixel belonging to the foreground. Taking the centroid of the foreground as the center, all points falling in the area with a radius of $2g$ are considered positive samples. Since the segmentation mask defines a binary class label for each pixel, we calculate the mass-ness as:

$$m_k = \frac{1}{g^2}\sum_{x=1}^{g}\sum_{y=1}^{g} s_k(x, y), \tag{4}$$

where $s_k$ is the $k$th grid of the segmentation mask. We use $m_k$ to quantify the foreground enrichment of each grid. Both the center-ness and mass-ness of all negative points are set to 0. With the above formulations, grounding and segmentation are approximated as a binary classification problem at the same scale, which makes the learning process simpler and enables the same framework to solve multiple tasks.

Network Architecture

An overview of our model is shown in Fig. 2. The core design of the point-based two-stage framework is that two parallel reasoning stages are implemented by constructing soft ground-truth, i.e., point-based cross-modal comprehension and point-based localization.
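The soft ground-truth scores of Eqs. (1)-(4) can be sketched in a few lines (our own illustration; returning 0 for points outside the box and the plain-Python mask representation are assumptions, not details from the paper):

```python
import math

def center_ness(point, box):
    """Eqs. (1)-(2): center-ness of point (x_k, y_k) w.r.t. box (xl, yt, xr, yb)."""
    xk, yk = point
    xl, yt, xr, yb = box
    l, r, t, b = xk - xl, xr - xk, yk - yt, yb - yk
    if min(l, r, t, b) <= 0:   # outside the box: treated as a negative point
        return 0.0
    return math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))

def mass_ness(grid_mask):
    """Eq. (4): foreground fraction of one g x g cell of the segmentation mask."""
    g = len(grid_mask)
    return sum(sum(row) for row in grid_mask) / (g * g)
```

A point at the exact box center scores 1.0 and the score decays toward 0 at the border, which is what lets the comprehension head focus on central grids.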
For the feature encoding, given an image and a referring expression, we first extract the visual features $F_v$ with a convolutional network (e.g., ResNet-101) and a sequence of textual tokens $F_q \in \mathbb{R}^{L \times C}$ with BERT (Devlin et al. 2019). Then we utilize the point-based cross-modal comprehension module to align and fuse the multi-modal representations. As the key component of our model, the architecture of the comprehension stage is concise and elegant. Since the uni-modal representations $F_v$ and $F_q$ are usually inconsistent in the channel dimension, we apply two linear layers to project them into the same embedding space. We denote the initial embeddings as $F^0_v$ and $F^0_q$, and flatten and concatenate them as $F^0_{vq} = \{F^0_v; F^0_q\}$. To perform efficient intra- and inter-modal context interactions, we propose a visual-language transformer encoder that stacks a set of multi-head self-attention layers and feed-forward networks. The procedure in the encoder is formulated as:

$$F'_v = F_{trans}(F^0_{vq}, e_v)\big|_{0:\frac{HW}{g^2}}, \tag{5}$$

where $e_v \in \mathbb{R}^{\frac{HW}{g^2} \times C}$ is a flattened 2D-aware position embedding that compensates for the absolute position information of the visual representation, which is corrupted by convolutional translation invariance. We exploit the deep interactive visual state $F'_v$ for classification prediction. Two shared MLP heads are utilized to obtain the center-ness prediction $\hat{c}$ and the mass-ness prediction $\hat{m}$. Finally, the output of each prediction head is normalized by a Sigmoid.

Figure 3: Our proposed IoU-based Non-Maximum Suppression (NMS), which selects the final prediction by computing the IoU between a set of proposals.

Consistent with conventional methods, our localization stage includes a generic object detection or segmentation model. However, anchor-based methods do not have a one-to-one mapping between predictions and feature points, which does not match our central idea. Therefore, we replace the box head and mask head with point-based models, e.g., FCOS (Tian et al. 2019) and SOLO (Wang et al. 2020a). We compare the performance ceilings provided by the different proposal methods in the experiments.

Optimization and Inference

For the backward pass, we use the expression-aware center-ness and mass-ness scores as the objectives to optimize our comprehension stage. As a multi-label binary classification task, our loss function is:

$$\mathcal{L}(\hat{c}, \hat{m}) = \frac{1}{n^2}\sum_{i=1}^{n^2}\big(\lambda_c L_{BCE}(c_i, \hat{c}_i) + \lambda_m L_{BCE}(m_i, \hat{m}_i)\big), \tag{6}$$

where $n^2 = \frac{HW}{g^2}$ is the number of grids, $L_{BCE}$ is the binary cross-entropy loss, and $\lambda_c$ and $\lambda_m$ are boolean values used to adjust the task type. Relying on sparse proposals, conventional methods can take the top-1 region as the final prediction. However, our search space composed of points is a dense proposal set with a large amount of overlap. To take advantage of this overlapping characteristic, we propose an IoU-based non-maximum suppression (NMS) as the post-process. As shown in Fig. 3, although expression-related regions tend to score higher, it is still possible that some high-confidence proposals are incorrect predictions. Therefore, we recommend finding the maximally overlapping proposal as a reliable prediction. Specifically, for a set of proposals $\hat{P} = \{\hat{p}_1, \hat{p}_2, \ldots, \hat{p}_k\}$, which are the top-k bounding boxes or segmentation masks ranked by center-ness or mass-ness, we compute the overlap score as:

$$r_i = \sum_{j=1}^{k} IoU(\hat{p}_i, \hat{p}_j). \tag{7}$$

Finally, we take the proposal with the highest overlap score as the final output. The detailed settings of the IoU-based NMS are given in the ablation experiments.

Table 1: Comparison of the recall (%) of different object detectors on RefCOCO, RefCOCO+, and RefCOCOg. † denotes the real case used in the state-of-the-art two-stage REC methods.

Detectors     | RefCOCO (val / testA / testB) | RefCOCO+ (val / testA / testB) | RefCOCOg (val / test)
Faster R-CNN  | 98.25 / 99.40 / 98.45 | 98.38 / 99.41 / 98.77 | 97.39 / 97.22
Mask R-CNN    | 97.60 / 97.81 / 96.58 | 97.79 / 97.78 / 96.99 | 97.18 / 96.91
FCOS-P5       | 97.90 / 98.90 / 97.43 | 98.04 / 98.85 / 97.59 | 96.47 / 96.75
Mask R-CNN †  | 88.86 / 93.94 / 80.77 | 89.33 / 93.96 / 81.45 | 85.97 / 86.10

Table 2: Comparison with the state-of-the-art REG approaches on RefCOCO, RefCOCO+, and RefCOCOg in terms of top-1 accuracy (%). The best and second best performances are in bold and underline, respectively.

Models (Venue, Backbone, Epochs) | RefCOCO (val / testA / testB) | RefCOCO+ (val / testA / testB) | RefCOCOg (val / test)
One-stage:
FAOA (ICCV'2019, DarkNet-53, –)         | 72.54 / 74.35 / 68.50 | 56.81 / 60.23 / 49.60 | 61.33 / 60.36
MCN (CVPR'2020, DarkNet-53, 45)         | 80.08 / 82.29 / 74.98 | 67.16 / 72.86 / 57.31 | 66.46 / 66.01
ReSC-Large (ECCV'2020, DarkNet-53, 100) | 77.63 / 80.45 / 72.30 | 63.59 / 68.36 / 56.81 | 67.30 / 67.20
TransVG (ICCV'2021, ResNet-101, 180)    | 81.02 / 82.72 / 78.35 | 64.82 / 70.70 / 56.94 | 68.67 / 67.73
RefTR (NeurIPS'2021, ResNet-101, –)     | 82.23 / 85.59 / 76.57 | 71.58 / 75.96 / 62.16 | 69.41 / 69.40
RED (AAAI'2022, DarkNet-53, 100)        | 80.97 / 83.20 / 77.66 | 69.48 / 73.80 / 62.20 | 71.11 / 70.67
Word2Pix (TNNLS'2022, ResNet-101, 180)  | 81.20 / 84.39 / 78.12 | 69.74 / 76.11 / 61.24 | 70.81 / 71.34
Two-stage:
MAttNet (CVPR'2018, ResNet-101, 5)      | 76.65 / 81.14 / 69.99 | 65.33 / 71.62 / 56.02 | 66.58 / 67.27
CM-Att (CVPR'2019b, ResNet-101, 5)      | 78.35 / 83.14 / 71.32 | 68.09 / 73.65 / 58.03 | 67.99 / 68.67
Ref-NMS (AAAI'2021, ResNet-101, 5)      | 80.70 / 84.00 / 76.04 | 68.25 / 73.68 / 59.42 | 70.55 / 70.62
RvG-Tree (TPAMI'2022, ResNet-101, –)    | 75.06 / 78.61 / 69.85 | 63.51 / 67.45 / 56.66 | 66.95 / 66.51
A-ATT (TPAMI'2022, VGG16, 60)           | 80.87 / 71.55 / 65.13 / 55.01 / 63.84 (remaining entries not given)
PBREC (ours, ResNet-101, 15)            | 82.20 / 85.26 / 79.21 | 72.63 / 78.96 / 64.74 | 73.92 / 73.18
PBREC-MT (ours, ResNet-101, 15)         | 82.94 / 86.31 / 80.81 | 74.85 / 79.53 / 65.60 | 73.86 / 74.13

Experiments

Datasets and Evaluation Metrics

Following Chen et al. (2021), we verify the effectiveness of our method on RefCOCO (Yu et al. 2016), RefCOCO+ (Yu et al. 2016), and RefCOCOg (Mao et al. 2016). The images of these datasets are collected from MSCOCO (Lin et al. 2014). The three datasets have different challenges.
The average sentence lengths of RefCOCO and RefCOCO+ are 3.50 and 3.53, respectively, but RefCOCO+ prohibits descriptions of absolute positional relationships. RefCOCOg provides more realistic and complex expressions, with an average sentence length of 8.46. RefCOCOg has two types of splits; we use the umd split, which contains a val and a test set. Following Deng et al. (2021), we evaluate REG by accuracy: when the IoU between the predicted bounding box and the ground truth is greater than 0.5, the prediction is deemed accurate. For RES, we choose overall IoU as the metric, which is obtained by computing the average IoU between the predicted mask and the ground-truth over all cases.

Implementation Details

We resize and pad all images to 640 × 640, and follow Deng et al. (2021) to augment the raw data. We use ResNet-101 (He et al. 2016) as the vision backbone and take the output of the C5 block as the visual feature map, i.e., g = 32. For the tokenization, we set the maximum token length to 30 (RefCOCO, RefCOCO+, ReferItGame) and 40 (RefCOCOg). For the comprehension stage, we use a 6-layer transformer encoder as our neck network. During training, we set the batch size to 64, the initial learning rate to $1 \times 10^{-4}$ for the comprehension module, and a lower initial learning rate of $1 \times 10^{-6}$ for ResNet and BERT. The model is optimized for 15 epochs by AdamW (Loshchilov and Hutter 2019) with a CosineAnnealing schedule (Loshchilov and Hutter 2017). During inference, we take the P5 block of FCOS (Tian et al. 2019) and the P4 block of SOLO (Wang et al. 2020a) as the grounding and segmentation proposal sources, respectively, and set $k$ to 12 in the IoU-based NMS. We provide two versions of the model, i.e., PBREC optimized for a single task and PBREC-MT optimized for multi-task learning.
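The IoU-based NMS of Eq. (7), with the top-k setting above (k = 12), can be sketched for the bounding-box case as follows (our own illustration; the paper applies the same ranking to segmentation masks with a mask IoU):

```python
# Sketch of the IoU-based NMS post-process: among the top-k proposals, keep
# the one with the largest summed overlap r_i = sum_j IoU(p_i, p_j).
# Box-only version; boxes are (x_left, y_top, x_right, y_bottom).

def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def iou_based_nms(proposals):
    """Return the 'majority' proposal, i.e., the one most consistent with
    the rest of the top-k set."""
    overlap = [sum(iou(p, q) for q in proposals) for p in proposals]
    best = max(range(len(proposals)), key=overlap.__getitem__)
    return proposals[best]
```

Because a stray high-confidence box overlaps little with the rest of the set, its summed IoU stays low and the clustered prediction wins, which matches the "majority filter" behavior discussed in the qualitative results.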
Table 3: Comparison with the state-of-the-art RES approaches on RefCOCO, RefCOCO+, and RefCOCOg in terms of overall IoU (%). The best and second best performances are in bold and underline, respectively.

Models (Venue, Backbone) | RefCOCO (val / testA / testB) | RefCOCO+ (val / testA / testB) | RefCOCOg (val / test)
One-stage:
LTS (CVPR'2021, DarkNet-53)      | 65.43 / 67.76 / 63.08 | 54.21 / 58.32 / 48.02 | 54.40 / 54.25
VLT (ICCV'2021, DarkNet-53)      | 65.65 / 68.29 / 62.73 | 55.50 / 59.20 / 49.36 | 52.99 / 56.65
RefTR (NeurIPS'2021, ResNet-101) | 70.56 / 73.49 / 66.57 | 61.08 / 64.69 / 52.73 | 58.73 / 58.51
ResTR (CVPR'2022, ViT-16)        | 67.22 / 69.30 / 64.45 | 55.78 / 60.44 / 48.27 | 54.48 / –
CRIS (CVPR'2022, ResNet-101)     | 70.47 / 73.18 / 66.10 | 62.27 / 68.08 / 53.68 | 59.87 / 60.36
SeqTR (ECCV'2022, DarkNet-53)    | 67.26 / 69.79 / 64.12 | 54.14 / 58.93 / 48.19 | 55.67 / 55.64
Two-stage:
MAttNet (CVPR'2018, ResNet-101)  | 56.51 / 62.37 / 51.70 | 46.67 / 52.39 / 40.08 | 47.64 / 48.61
NMTree (ICCV'2019a, ResNet-101)  | 56.59 / 63.02 / 52.06 | 47.40 / 53.01 / 41.56 | 46.59 / 47.88
CM-Att (CVPR'2019b, ResNet-101)  | 58.23 / 64.60 / 53.14 | 49.65 / 53.90 / 41.77 | 49.10 / 50.72
Ref-NMS (AAAI'2021, ResNet-101)  | 61.46 / 65.55 / 57.41 | 49.76 / 53.84 / 42.66 | 51.21 / 51.90
PBREC (ours, ResNet-101)         | 71.11 / 72.89 / 70.12 | 62.99 / 66.67 / 56.64 | 62.14 / 61.56
PBREC-MT (ours, ResNet-101)      | 71.44 / 73.21 / 70.11 | 63.76 / 67.10 / 56.63 | 62.93 / 62.61

Table 4: Ablation study of center-ness and mass-ness.

Setting                             | Grounding | Segmentation
Single task, hard                   | 71.93 | 61.20
Single task, soft                   | 72.63 | 62.99
Multi task, hard                    | 72.17 | 62.25
Multi task, soft (mass-ness only)   | 71.96 | 62.45
Multi task, soft (center-ness only) | 72.75 | 62.04
Multi task, soft (both)             | 74.85 | 63.76

Comparison with State-of-the-art Models

Compared with the prevailing two-stage methods, one notable difference is that we use a different pre-trained detector. To make the performance comparison more convincing, we follow the statistics of Chen et al. (2021) to compare the performance ceilings that different detectors can provide. As shown in Table 1, we compute the recall of region proposals in several different scenarios, i.e., the proportion of cases in which the proposals contain a correct prediction.
We have the following observations: 1) When using top-100 GreedyNMS (cf. Rows 1 and 2), which is the usual practice for most downstream tasks, the recall of anchor-based detectors reaches about 97%. 2) The predictions at the P5 level of FCOS raise the performance ceiling by less than 1%. This is reasonable, since we provide more proposals (typically 400 boxes). 3) To reduce the gap between training and inference, prevailing two-stage methods usually use sparse proposals (e.g., fewer than 10) in the real case. This is an obvious performance bottleneck; e.g., it is impossible for these methods to exceed 80.77% on RefCOCO testB.

To evaluate our method, we compare it with other state-of-the-art methods on the grounding and segmentation tasks. The REG performance is shown in Table 2. Compared with the recent conventional one-stage method RED (Huang et al. 2022), our model obtains absolute improvements of 1.97%-3.15%, 3.40%-5.73%, and 2.81%-3.46% on RefCOCO, RefCOCO+, and RefCOCOg, respectively.

Table 5: Ablation study of IoU-based NMS.

Top-k | REG (center) | REG (center + conf) | RES (mass)
1     | 73.55 | 73.24 | 63.34
4     | 74.14 | 74.13 | 64.07
8     | 74.33 | 74.46 | 64.08
12    | 74.51 | 74.85 | 63.76
16    | 74.25 | 74.64 | 62.23

Compared with TransVG (Deng et al. 2021), the transformer-based method most similar to our neck architecture, our method requires far fewer training epochs (15 vs. 180) to achieve better performance, with gains of 3.59% / 10.03% / 6.40% on RefCOCO (testA), RefCOCO+ (val), and RefCOCOg (test), respectively. This means our approximate classification greatly reduces the learning difficulty of the task and achieves significant performance improvement by relying on task decoupling. Our model also achieves obvious improvements over all two-stage methods. Specifically, our model outperforms the recent state-of-the-art method Ref-NMS (Chen et al. 2021) by 4.77%, 6.60%, and 3.51% on the three datasets. Notably, on RefCOCO (testB), the limit performance of the conventional method is 80.77% (cf.
Row 4 of Table 1), while our method can reach 80.81%. For the RES task, we summarize the performance comparison in Table 3. Compared with two-stage methods, our model has an absolute advantage, with gains of 9.98% / 7.66% / 12.70%, 14.00% / 13.26% / 13.98%, and 11.72% / 10.71% on RefCOCO, RefCOCO+, and RefCOCOg, respectively. The significant performance gap shows that previous segmentation methods are limited by the bounding box, while our mass-ness metric is a reasonable solution. Our method is also competitive with one-stage methods. Compared with CRIS (Wang et al. 2022), which is a CLIP-based knowledge transfer model, our performance improves by at most 4.02%.

Figure 4: Visualization of cases: (a) visualization of the solution process, (b) the case of IoU-based NMS, and (c) comparison with region proposals. The red boxes are the top-k proposals, the blue box is the top-1 prediction, and the green box is the correct prediction by our model.

Ablation Study

We use PBREC-MT to conduct the ablation studies, and all performance changes are verified on RefCOCO+ (val).

Soft ground-truth: To verify the rationality of our designed center-ness and mass-ness scores, we try a number of different combinations of metrics. Table 4 shows the performance changes of the model under single-task training or multi-task joint training, where hard means that the classification task is directly modeled with 0-1 labels, and soft means that the category is represented by our designed scores. Compared with hard classification, our metrics improve grounding and segmentation performance by +0.70% / +1.79% and +2.68% / +1.51% in the single-task and multi-task settings, respectively. Furthermore, we observe a noteworthy phenomenon in multi-task optimization.
Concretely, for the baseline performance of 72.17%/62.25%, using only center-ness brings a change of +0.58%/-0.21%, and mass-ness brings a change of -0.21%/+0.20%. In this case, a soft ground-truth not only fails to bring a significant improvement but can also penalize the other task's performance. This means the right combination is even more important for multi-task learning.

IoU-based NMS: To find an appropriate post-processing setting, we compare the performance of several different schemes. As shown in Table 5, for the REG task, since FCOS also uses the degree of center deviation to measure the prediction confidence, we try two ranking schemes, i.e., using center-ness alone and using the product of center-ness and the FCOS confidence. The SOLO model uses the quality of the prediction mask as confidence, which does not fit our motivation, so we only use mass-ness as the ranking basis. Similar to conventional methods, direct top-1 ranking already provides considerable performance, and there is a further increase when performing IoU-based NMS on the top-k predictions. Taking REG as an example, the improvement peaks at top-12 (+1.61%), while further enlarging the group leads to performance degradation. This improvement is less obvious on RES because the overall IoU is a pixel-level metric and is not easily affected by noise. Finally, without loss of generality, we use top-12 post-processing as the unified setting.

Qualitative Results

We illustrate the qualitative results with case visualizations. As shown in Fig. 4 (a), we visualize the inference process. The attention map is taken from the scores between the textual [CLS] token and all visual grids in the last layer of the comprehension stage. It can be observed that our model comprehends the abstract concept of "sandwich" and focuses more on the nearest sandwich in the image. Both the center-ness and mass-ness score predictions are successfully focused on the referred instance. The visualization in Fig.
4 (b) shows the effect of our IoU-based NMS in four cases: 1) the top-left one shows that the noise generated by occlusion causes the top-1 prediction area to become larger, while our NMS can refine the prediction; 2) the bottom-left one shows another refinement process, i.e., the prediction box is extended outward; 3) the top-right one shows that the top-1 box perceives only 'white' but not the 'girl', which leads to an incomplete cross-modal comprehension; our NMS, however, acts as a majority filter and can correct this error; 4) the bottom-right one demonstrates the ability of our NMS to mine samples that are difficult to locate. The case in Fig. 4 (c) verifies a key conclusion. Specifically, this case is to localize a 'meter' that is behind another one. For conventional methods (top), the first stage can only provide three expression-independent boxes as proposals when using 0.65 as the confidence threshold; in fact, the target is only included in proposals at even lower confidence thresholds. Following the idea of doing comprehension first and then localizing, our method (bottom) uses the expression-aware metrics as the basis for ranking, leading to more accurate localization.

Conclusions

In this paper, we propose a novel two-stage REC paradigm, which achieves point-based modulated localization by approximating grounding or segmentation as a classification problem. With the parallel inference framework and point-level metrics, i.e., center-ness and mass-ness, we overcome the inherent defects of prevailing two-stage methods, thereby breaking through performance bottlenecks. Extensive experiments demonstrate the feasibility of our method. In the future, we plan to extend our point-based two-stage paradigm to the open-domain and zero-shot settings.
Acknowledgments

This work was supported by National Natural Science Foundation of China (62276072), the Guangxi Natural Science Foundation (No. 2022GXNSFAA035627), National Natural Science Foundation of China (62076100), Guangxi Scientific and Technological Bases and Talents Special Projects (guikeAD23026230 and guikeAD23026213), Guangxi Natural Science Foundation Key Project (Application No. 2023JJD170015), the Open Research Fund of Guangxi Key Laboratory of Multimedia Communications and Network Technology, the Fundamental Research Funds for the Central Universities, SCUT (D2230080), Innovation Project of Guangxi Graduate Education, CAAI-Huawei MindSpore Open Fund and the Science and Technology Planning Project of Guangdong Province (2020B0101100002).

References

Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-End Object Detection with Transformers. In Computer Vision - ECCV 2020 - 16th European Conference, volume 12346 of Lecture Notes in Computer Science, 213–229.
Chen, L.; Ma, W.; Xiao, J.; Zhang, H.; and Chang, S. 2021. Ref-NMS: Breaking Proposal Bottlenecks in Two-Stage Referring Expression Grounding. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, 1036–1044.
Chen, S.; Jin, Q.; Wang, P.; and Wu, Q. 2020. Say As You Wish: Fine-Grained Control of Image Caption Generation With Abstract Scene Graphs. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, 9959–9968.
Deng, C.; Wu, Q.; Wu, Q.; Hu, F.; Lyu, F.; and Tan, M. 2022. Visual Grounding Via Accumulated Attention. IEEE Trans. Pattern Anal. Mach. Intell., 44(3): 1670–1684.
Deng, J.; Yang, Z.; Chen, T.; Zhou, W.; and Li, H. 2021. TransVG: End-to-End Visual Grounding with Transformers. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, 1749–1759.
Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, 4171–4186.
Ding, H.; Liu, C.; Wang, S.; and Jiang, X. 2021. Vision-Language Transformer and Query Generation for Referring Segmentation. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, 16301–16310.
Du, Y.; Fu, Z.; Liu, Q.; and Wang, Y. 2022. Visual Grounding with Transformers. In IEEE International Conference on Multimedia and Expo, ICME 2022, 1–6.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, 770–778.
Hong, R.; Liu, D.; Mo, X.; He, X.; and Zhang, H. 2022. Learning to Compose and Reason with Language Tree Structures for Visual Grounding. IEEE Trans. Pattern Anal. Mach. Intell., 44(2): 684–696.
Hu, R.; Rohrbach, M.; Andreas, J.; Darrell, T.; and Saenko, K. 2017. Modeling Relationships in Referential Expressions with Compositional Modular Networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 4418–4427.
Hu, R.; Rohrbach, M.; and Darrell, T. 2016. Segmentation from Natural Language Expressions. In Computer Vision - ECCV 2016 - 14th European Conference, volume 9905 of Lecture Notes in Computer Science, 108–124.
Huang, B.; Lian, D.; Luo, W.; and Gao, S. 2021. Look Before You Leap: Learning Landmark Features for One-Stage Visual Grounding. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, 16888–16897.
Huang, J.; Qin, Y.; Qi, J.; Sun, Q.; and Zhang, H. 2022. Deconfounded Visual Grounding. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, 998–1006.
Jing, Y.; Kong, T.; Wang, W.; Wang, L.; Li, L.; and Tan, T. 2021. Locate Then Segment: A Strong Pipeline for Referring Image Segmentation.
In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, 9858–9867.
Kim, J.; Misu, T.; Chen, Y.; Tawari, A.; and Canny, J. F. 2019. Grounding Human-To-Vehicle Advice for Self-Driving Vehicles. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, 10591–10599.
Kim, N.; Kim, D.; Kwak, S.; Lan, C.; and Zeng, W. 2022. ReSTR: Convolution-free Referring Image Segmentation Using Transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 18124–18133. IEEE.
Li, M.; and Sigal, L. 2021. Referring Transformer: A One-step Approach to Multi-task Visual Grounding. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, 19652–19664.
Liao, Y.; Liu, S.; Li, G.; Wang, F.; Chen, Y.; Qian, C.; and Li, B. 2020. A Real-Time Cross-Modality Correlation Filtering Method for Referring Expression Comprehension. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, 10877–10886.
Lin, T.; Maire, M.; Belongie, S. J.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. In Computer Vision - ECCV 2014 - 13th European Conference, volume 8693 of Lecture Notes in Computer Science, 740–755.
Liu, C.; Lin, Z.; Shen, X.; Yang, J.; Lu, X.; and Yuille, A. L. 2017. Recurrent Multimodal Interaction for Referring Image Segmentation. In IEEE International Conference on Computer Vision, ICCV 2017, 1280–1289.
Liu, D.; Zhang, H.; Zha, Z.; and Wu, F. 2019a. Learning to Assemble Neural Module Tree Networks for Visual Grounding. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, 4672–4681.
Liu, X.; Wang, Z.; Shao, J.; Wang, X.; and Li, H. 2019b. Improving Referring Expression Grounding With Cross-Modal Attention-Guided Erasing.
In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, 1950–1959.
Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, 3431–3440.
Loshchilov, I.; and Hutter, F. 2017. SGDR: Stochastic Gradient Descent with Warm Restarts. In 5th International Conference on Learning Representations, ICLR 2017.
Loshchilov, I.; and Hutter, F. 2019. Decoupled Weight Decay Regularization. In 7th International Conference on Learning Representations, ICLR 2019.
Luo, G.; Zhou, Y.; Sun, X.; Cao, L.; Wu, C.; Deng, C.; and Ji, R. 2020. Multi-Task Collaborative Network for Joint Referring Expression Comprehension and Segmentation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, 10031–10040.
Mao, J.; Huang, J.; Toshev, A.; Camburu, O.; Yuille, A. L.; and Murphy, K. 2016. Generation and Comprehension of Unambiguous Object Descriptions. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, 11–20.
Redmon, J.; and Farhadi, A. 2018. YOLOv3: An Incremental Improvement. CoRR, abs/1804.02767.
Ren, S.; He, K.; Girshick, R. B.; and Sun, J. 2015. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, 91–99.
Sadhu, A.; Chen, K.; and Nevatia, R. 2019. Zero-Shot Grounding of Objects From Natural Language Queries. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, 4693–4702.
Sun, M.; Xiao, J.; and Lim, E. G. 2021. Iterative Shrinking for Referring Expression Grounding Using Deep Reinforcement Learning. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, 14060–14069.
Sun, M.; Xiao, J.; Lim, E. G.; and Zhao, Y. 2023. Cycle-Free Weakly Referring Expression Grounding With Self-Paced Learning. IEEE Trans.
Multim., 25: 1611–1621.
Tian, Z.; Shen, C.; Chen, H.; and He, T. 2019. FCOS: Fully Convolutional One-Stage Object Detection. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, 9626–9635. IEEE.
Wang, P.; Wu, Q.; Cao, J.; Shen, C.; Gao, L.; and van den Hengel, A. 2019. Neighbourhood Watch: Referring Expression Comprehension via Language-Guided Graph Attention Networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, 1960–1968.
Wang, X.; Kong, T.; Shen, C.; Jiang, Y.; and Li, L. 2020a. SOLO: Segmenting Objects by Locations. In Vedaldi, A.; Bischof, H.; Brox, T.; and Frahm, J., eds., Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVIII, volume 12363 of Lecture Notes in Computer Science, 649–665. Springer.
Wang, X.; Liu, Y.; Shen, C.; Ng, C. C.; Luo, C.; Jin, L.; Chan, C. S.; van den Hengel, A.; and Wang, L. 2020b. On the General Value of Evidence, and Bilingual Scene-Text Visual Question Answering. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, 10123–10132.
Wang, Z.; Lu, Y.; Li, Q.; Tao, X.; Guo, Y.; Gong, M.; and Liu, T. 2022. CRIS: CLIP-Driven Referring Image Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 11676–11685. IEEE.
Yang, L.; Xu, Y.; Yuan, C.; Liu, W.; Li, B.; and Hu, W. 2022. Improving Visual Grounding with Visual-Linguistic Verification and Iterative Reasoning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, 9489–9498.
Yang, S.; Li, G.; and Yu, Y. 2020. Graph-Structured Referring Expression Reasoning in the Wild. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, 9949–9958.
Yang, Z.; Chen, T.; Wang, L.; and Luo, J. 2020. Improving One-Stage Visual Grounding by Recursive Sub-query Construction.
In Computer Vision - ECCV 2020 - 16th European Conference, volume 12359, 387–404.
Yang, Z.; Gong, B.; Wang, L.; Huang, W.; Yu, D.; and Luo, J. 2019. A Fast and Accurate One-Stage Approach to Visual Grounding. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, 4682–4692.
Ye, J.; Tian, J.; Yan, M.; Yang, X.; Wang, X.; Zhang, J.; He, L.; and Lin, X. 2022. Shifting More Attention to Visual Backbone: Query-modulated Refinement Networks for End-to-End Visual Grounding. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, 15481–15491.
Yu, L.; Lin, Z.; Shen, X.; Yang, J.; Lu, X.; Bansal, M.; and Berg, T. L. 2018. MAttNet: Modular Attention Network for Referring Expression Comprehension. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, 1307–1315.
Yu, L.; Poirson, P.; Yang, S.; Berg, A. C.; and Berg, T. L. 2016. Modeling Context in Referring Expressions. In Computer Vision - ECCV 2016 - 14th European Conference, volume 9906 of Lecture Notes in Computer Science, 69–85.
Zhao, H.; Zhou, J. T.; and Ong, Y.-S. 2022. Word2Pix: Word to Pixel Cross-Attention Transformer in Visual Grounding. IEEE Transactions on Neural Networks and Learning Systems, 1–11.
Zhu, C.; Zhou, Y.; Shen, Y.; Luo, G.; Pan, X.; Lin, M.; Chen, C.; Cao, L.; Sun, X.; and Ji, R. 2022. SeqTR: A Simple Yet Universal Network for Visual Grounding. In Computer Vision - ECCV 2022 - 17th European Conference, volume 13695 of Lecture Notes in Computer Science, 598–615.
Optical Flow for Spike Camera with Hierarchical Spatial-Temporal Spike Fusion

Rui Zhao1,2, Ruiqin Xiong1,2*, Jian Zhang3, Xinfeng Zhang4, Zhaofei Yu1,2,5, Tiejun Huang1,2,5
1National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
2National Engineering Research Center of Visual Technology, School of Computer Science, Peking University
3School of Electronic and Computer Engineering, Peking University
4School of Computer Science and Technology, University of Chinese Academy of Sciences
5Institute for Artificial Intelligence, Peking University
ruizhao@stu.pku.edu.cn, {rqxiong, zhangjian.sz, yuzf12, tjhuang}@pku.edu.cn, xfzhang@ucas.ac.cn

Abstract

As an emerging neuromorphic camera with an asynchronous working mechanism, the spike camera shows good potential for high-speed vision tasks. Each pixel in a spike camera accumulates photons persistently and fires a spike whenever the accumulation exceeds a threshold. Such high-frequency, fine-granularity photon recording facilitates the analysis and recovery of dynamic scenes with high-speed motion. This paper considers the optical flow estimation problem for spike cameras. Due to the Poisson nature of incoming photons, the occurrence of spikes is random and fluctuating, making conventional image matching inefficient. We propose a Hierarchical Spatial-Temporal (HiST) fusion module for spike representation to pursue reliable feature matching and develop a robust optical flow network, dubbed HiST-SFlow. The HiST extracts features at multiple moments and hierarchically fuses the spatial-temporal information. We also propose an intra-moment filtering module to further refine the features and suppress the influence of randomness in spikes. A scene loss is proposed to ensure that this hierarchical representation recovers the essential visual information in the scene.
Experimental results demonstrate that the proposed method achieves state-of-the-art performance compared with the existing methods. The source codes are available at https://github.com/ruizhao26/HiST-SFlow.

Introduction

With the development of computer vision, high-speed vision applications attract increasing attention in areas such as autonomous driving and unmanned aerial vehicles. Neuromorphic cameras (NeuCams) are a kind of emerging camera that can handle vision tasks in high-speed scenarios. NeuCams can be roughly divided into event cameras (Lichtsteiner, Posch, and Delbruck 2008; Moeys et al. 2017; Huang, Guo, and Chen 2017) and spike cameras (Dong, Huang, and Tian 2017; Huang et al. 2022a). Both kinds of cameras work asynchronously at the pixel level and enjoy the advantages of high speed and low latency.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Illustration of spike-based optical flow. The scene captures a high-speed train moving to the right past another train moving in the opposite direction. On the top-left is a binary spike stream in a spatial-temporal coordinate. On the top-right are spike sub-streams in the spatial plane, where a black point means a spike. The input of spike-based optical flow is two sub-streams around the source and target time, respectively. As shown at the bottom, our method can better preserve the edges of the motion (the compared networks are SCFlow, CRAFT, FlowFormer, and our HiST-SFlow). All the methods use spikes as inputs and are trained in the same setting.

Event cameras are inspired by the retinal periphery and are equipped with a differential sampling model. They detect light intensity change at each pixel in the logarithmic
domain, and an event will be fired whenever the change exceeds the threshold. Different from event cameras, spike cameras are inspired by the retinal fovea and work with an integral sampling model. Each pixel of the spike camera continuously accumulates photons, and a spike will be fired whenever the accumulation exceeds the threshold. Compared with event cameras, spike cameras can better recover the scene, especially in regions with less texture and motion. Many pixel-level tasks have been studied for spike cameras, such as reconstruction (Zhao et al. 2021b; Zheng et al. 2021; Zhao et al. 2021c), optical flow (Hu et al. 2022), and depth estimation (Wang et al. 2022; Zhang et al. 2022). Optical flow is the pixel-level correspondence among frames (Horn and Schunck 1981), which has been a critical task in computer vision. Hu et al. (2022) propose the first deep learning approach for optical flow estimation for spike cameras, i.e., spike-based optical flow: a lightweight pyramidal network, SCFlow, with binary spikes as the input. The differences in the data format introduce challenges to spike-based optical flow. There are multiple sources of noise in the imaging of the spike camera: the arrival of photons follows a Poisson process; the circuits have thermal noise; and the synchronous spike readout introduces quantization noise. Thus, obtaining brightness with high reliability from spikes is difficult. The factors mentioned above introduce fluctuations into the spike streams: the length of the spike accumulation period may differ even if the light intensity is the same. The randomness of the spike streams causes ambiguities in the correlation between features at different moments, making the feature matching inaccurate.
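The fluctuation described above is easy to reproduce with a toy integrate-and-fire simulation: even under perfectly constant light, Poisson photon arrival makes the spike pattern and the inter-spike intervals irregular. A stdlib-only sketch (our own illustration, not the authors' sensor model; threshold and rate values are arbitrary):

```python
import math
import random

def poisson(rng, lam):
    """Sample a Poisson-distributed photon count (Knuth's algorithm)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def spike_train(mean_photons, threshold, n_steps, seed):
    """One spike-camera pixel: accumulate a random photon count at each
    read-out step and fire a binary spike when the accumulation crosses
    the threshold, subtracting the threshold on firing (toy model)."""
    rng = random.Random(seed)
    acc, out = 0.0, []
    for _ in range(n_steps):
        acc += poisson(rng, mean_photons)
        if acc >= threshold:
            out.append(1)
            acc -= threshold
        else:
            out.append(0)
    return out

# Constant light intensity, yet the spike pattern and the gaps between
# spikes typically differ from run to run:
print(spike_train(mean_photons=2.0, threshold=5.0, n_steps=20, seed=0))
print(spike_train(mean_photons=2.0, threshold=5.0, n_steps=20, seed=1))
```

The irregular gaps are exactly the fluctuation that makes naive frame-to-frame matching of raw spikes unreliable.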
The previous spike-based optical flow methods often have difficulty preserving the edges of the objects and the spatial consistency of the motion, especially on real-captured data. In this paper, we propose a Hierarchical Spatial-Temporal (HiST) fusion module for spike representation in the spike-based optical flow network HiST-SFlow. The motivation of HiST is to suppress the fluctuations in order to extract stable features from the binary spike streams. In spike-based vision tasks, we usually use a series of spike frames over a period to represent the brightness information of the scene. Previous works fuse all the temporal information with a single operation before embedding it into high-dimensional features. Unlike previous works, we first fuse local temporal information at multiple moments and then extend the scope of the fusion. This hierarchical fusion module can adaptively use the correlated information for spike representation. The contributions of this paper can be summarized as follows. (1) HiST-SFlow is proposed for spike-based optical flow. In HiST-SFlow, the spikes are represented by the HiST module and extracted into features for correlation; the optical flow is estimated by a recurrent optimizer. (2) An inter-moment hierarchical fusion (InterF) module and an intra-moment filtering (IntraF) module are proposed to suppress the randomness in the spikes. A scene loss is proposed to constrain the high-fidelity representation to contain the brightness information of the scene. (3) Experimental results demonstrate that our method achieves state-of-the-art performance on both the PHM dataset (Hu et al. 2022) and real-captured data.

Related Work

Optical Flow Estimation. FlowNet (Dosovitskiy et al. 2015) is the first end-to-end deep neural network for optical flow estimation. Subsequent works introduce the knowledge of traditional methods to the network (Ranjan and Black 2017; Sun et al.
2018; Hui, Tang, and Loy 2018; Hur and Roth 2019; Hui, Tang, and Loy 2020), such as pyramids and warping. RAFT (Teed and Deng 2020) combines the advantages of the above methods: it constructs an all-pairs correlation and recurrently optimizes the optical flow. Many works are proposed based on RAFT. GMA (Jiang et al. 2021a), Separable Flow (Zhang et al. 2021), and KPAFlow (Luo et al. 2022) focus on the matching of the features to improve accuracy. SCV (Jiang et al. 2021b), Flow1D (Xu et al. 2021b), and DIP (Zheng et al. 2022) reduce the computational complexity with a sparse cost volume, orthogonal attention, and inverse patch match, respectively. Recently, transformers have been used in optical flow networks (Xu et al. 2022; Huang et al. 2022b; Zhao et al. 2022; Sui et al. 2022).

Spike Camera. Many works around spike cameras have sprung up recently. Image reconstruction is a popular topic among these works. Zhu et al. (2019) reconstruct images with the count of spikes and spike intervals, respectively. Subsequent methods reconstruct images from spikes with filtering (Zhao, Xiong, and Huang 2020; Dong et al. 2022), neuron models (Zhu et al. 2020, 2022a; Zheng et al. 2021), optimization (Zhao et al. 2021c), and deep neural networks (Zhao et al. 2021b; Zhu et al. 2021). MGSR (Zhao et al. 2021a), SpikeSRNet (Zhao et al. 2023), and Xiang et al. (2021) estimate super-resolved images from spikes based on the fusion of multiple frames. Han et al. (2020) and Zhou et al. (2020) use spike cameras to realize high-dynamic-range imaging. Xia et al. (2023) use spikes to assist video frame interpolation. Beyond image reconstruction, various other tasks have been developed. Zhu et al. (2022b) and Li et al. (2022) propose object detection methods based on spike streams. SCFlow (Hu et al. 2022) estimates optical flow directly from the binary spike streams based on a pyramidal network. SSDEFormer (Wang et al.
2022) and Spike Transformer (Zhang et al. 2022) estimate binocular and monocular depth for spike cameras with transformers, respectively.

Preliminary of the Spike Camera

The spike camera mimics the retinal fovea in an integrate-and-fire (IF) manner, as shown in Fig. 2. Each pixel of the spike camera has three key components: a photon-receptor, an integrator, and a comparator. The photon-receptor receives photons from the scene and converts them to photoelectrons. The integrator accumulates the photoelectrons, and a spike is fired whenever the accumulation exceeds the threshold; at the same time, the accumulation in the integrator is reset. Notably, each pixel implements the IF cycle independently, i.e., each pixel of the spike camera works asynchronously. The readout of the spikes is synchronous, at an ultra-high rate of up to 40 kHz. Thus, the spike camera generates an H×W binary spike frame at each reading moment, where the spike density corresponds to the light intensity.

Figure 2: The “integrate-and-fire” working mechanism of a single pixel in the spike camera.

Figure 3: The overall architecture of the HiST-SFlow.
Two spike sub-streams represent the scene’s brightness at the source and target time, respectively. The hierarchical spatial-temporal (HiST) fusion is applied to the two spike sub-streams as representation before they are extracted into matching features and a context feature. The two matching features construct an all-pairs cost volume. The recurrent optimizer estimates the flow based on the context feature and the cost volume.

If we denote the incoming light at pixel (x, y) and time t for the spike camera as I = I(x, y, t), the accumulation in the integrator A = A(x, y, t) can be formulated as:

A(x, y, t) = (∫₀ᵗ I(x, y, τ) dτ) mod θ,   (1)

where θ is the firing threshold, adapted so that no more than one spike is fired in each reading interval.

Approaches

Overall Architecture

Problem definition. Suppose the spike stream output by the spike camera is S(x, t) ∈ B^(H×W×T), where x = (x, y) and B is the binary domain. Spike-based optical flow estimation is to predict the pixel-level correspondence w(x; t_i, t_j) of the scene captured between moments t_i and t_j based on S. The correspondence can be formulated as:

I^(S)(x, t_i) ← I^(S)(x + w(x; t_i, t_j), t_j),   (2)

where I^(S) is the scene behind the spike stream S, and ← means pixel-level registration.

Network architecture. As shown in Fig. 3, to estimate the flow field from the source time t_i to the target time t_j, we clip two spike sub-streams H_i and H_j to represent the scene at times t_i and t_j, respectively. H_t is a set of continuous spike frames with t as the central time, which can be formulated as:

H_t = { S(t + t_o) | t_o ∈ [−T_s^half, T_s^half], t_o ∈ Z },   (3)

where we omit the spatial coordinate x, Z is the integer domain, and T_s^half is the half length of the spike sub-stream. The network first embeds the sub-streams H_i and H_j into representations R_i and R_j with our proposed HiST fusion module. The rest of the network has an architecture similar to RAFT (Teed and Deng 2020).
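Since the network follows RAFT's matching pipeline, the core of the all-pairs cost volume is simply a dot product between every source feature vector and every target feature vector. A minimal stdlib-only sketch (illustrative only; RAFT additionally scales by the feature dimension and pools the volume into a pyramid, and the variant used here adds multi-projection aggregation, none of which is shown):

```python
def all_pairs_correlation(f_src, f_tgt):
    """RAFT-style all-pairs matching core: the dot product between every
    source feature vector and every target feature vector. Features are
    flat lists of C-dim vectors (one per pixel); returns an
    (H*W) x (H*W) similarity matrix."""
    return [[sum(a * b for a, b in zip(fs, ft)) for ft in f_tgt]
            for fs in f_src]

# Two tiny 2-pixel "frames" with 3-dim features:
src = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
tgt = [[1.0, 0.0, 0.0], [0.0, 0.5, 0.0]]
print(all_pairs_correlation(src, tgt))  # -> [[1.0, 0.0], [0.0, 0.5]]
```

Each row of the matrix scores one source pixel against every target pixel; the recurrent optimizer then looks up a local window of this volume around the currently estimated flow.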
First, matching features F_i and F_j are extracted from the spike representations through a feature encoder, and the context feature F_i^C is extracted from the source representation R_i. We construct the 4D all-pairs correlation volume in a way similar to CRAFT (Sui et al. 2022). The target feature F_j is first filtered using a semantic smoothing transformer. The all-pairs correlation is based on multiple query and key projections (Li et al. 2021) with K modes, and the multi-projected correlations are aggregated using a softmax along the K modes. The recurrent optimizer estimates the residual of the flow based on a local cost volume that is looked up according to the currently estimated flow.

Hierarchical Spatial-Temporal Fusion Module

As shown in Fig. 4, the structure of the HiST module can be divided into three parts: inter-moment hierarchical fusion (InterF), intra-moment filtering (IntraF), and global temporal aggregation (GTA). Multiple kinds of noise are introduced in the imaging procedure of spike cameras: the arrival of photons follows a Poisson process, introducing Poisson noise; the dark currents in the circuits introduce thermal noise; and the reading mechanism of the spikes introduces quantization noise. Due to these noises, the binary spikes exhibit fluctuations and randomness, and extracting effective features from binary spikes is a new challenge. For effective pixel-level dense matching based on binary spikes, we propose an InterF module to concentrate the spatial-temporal information in spike streams in a hierarchical way. In the concentration procedure, we design an IntraF module to reduce the randomness of features at each moment. The InterF and IntraF are implemented alternately. We also propose a scene loss to constrain the representation with the scene brightness.

Inter-Moment Hierarchical Fusion

As shown in the top of Fig. 4, different from previous works that fuse the temporal information of spikes in a single operation (Hu et al. 2022; Wang et al.
2022), we retain the time information in the feature extraction procedure.

Figure 4: Illustration of the hierarchical spatial-temporal (HiST) fusion for spike representation. A spike sub-stream is extracted into time-series features by passing through the inter-moment hierarchical fusion (InterF) and intra-moment filtering (IntraF) modules alternately. The features from all levels of the IntraF module are aggregated temporally to represent the central time of the input spike sub-stream. The aggregated features are fused to be the final spike representation.

The hierarchical fusion strategy is inspired by video restoration tasks (Maggioni et al. 2021; Isobe et al. 2020; Xu et al. 2021a; Chan et al. 2022; Liu et al.
2022) that process a video frame with adjacent frames as references. In the InterF, we construct a pyramid of time-series features using 3D convolutional layers with activation. We denote the time-series features output by the InterF and IntraF at level $m$ as $J_m(x, t)$ and $K_m(x, t)$, respectively. The InterF can be formulated as follows:

$$J_m(x, t) = \mathcal{J}_m\big(\{K_{m-1}(x, \tau) \mid \tau \in \mathcal{T}_{m-1}\}\big), \tag{4}$$

$$\mathcal{T}_{m-1} = \big\{T_c - T^{\mathrm{half}}_{m-1}, \ldots, T_c, \ldots, T_c + T^{\mathrm{half}}_{m-1}\big\}, \tag{5}$$

where $\mathcal{J}_m$ is the $m$-th level InterF operation. It is a 3D convolution with activation whose kernel size and stride are $(k^t_m, k^h_m, k^w_m)$ and $(s^t_m, s^h_m, s^w_m)$, respectively. $\mathcal{T}_{m-1}$ is the temporal domain of definition of $J_{m-1}$ and $K_{m-1}$. $T_c$ is the central time of the spike sub-stream and time-series features. $T^{\mathrm{half}}_{m-1}$ is the half window length of $J_{m-1}$ and $K_{m-1}$. We do not pad along the temporal axis, since padded spike frames would carry no physical meaning. We use the raw spike sub-stream as the input of the InterF at the first level, i.e., $K_0(x, t) = H_\tau(x, t)\big|_{\tau = T_c}$ and $T^{\mathrm{half}}_0 = T^{\mathrm{half}}_s$. In the pyramid, the spatial and temporal information is concentrated through the hierarchical fusion scheme. In the InterF, we simultaneously extract features at different moments $t \in \mathcal{T}_m$ around the central moment $T_c$. In both the spatial and temporal domains, the information in spikes is fused in multiple steps. The fusion procedure in each hierarchy aims to extract the spatial-temporal information structure in a small local neighborhood. As the hierarchy level increases, the spatial-temporal information is concentrated. After each hierarchy of fusion, the number of features is reduced in both the spatial and temporal domains.

Intra-Moment Filtering

We aim to reduce the influence of the spikes' fluctuations through pixels with similar distributions. However, due to motion and occlusions, features at other moments cannot always offer effective references through the InterF. Thus, we propose the IntraF to model the spatial self-similarity of the features at each moment.
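As a quick sanity check on the pyramid above: with the settings reported later in the implementation details (25 input spike frames, temporal kernel sizes {5, 5, 5}, temporal strides {1, 2, 2}) and no temporal padding, the per-level temporal lengths {21, 9, 3} follow from the standard unpadded convolution output-size formula. A minimal sketch, where the helper name is ours and not from the paper's code:

```python
# Temporal length of each InterF level under unpadded, strided 3D convolution.
# The settings follow the paper's implementation details; the function itself
# is an illustrative helper, not part of any released code.

def temporal_lengths(t_in, kernels, strides):
    """Return the temporal length of the time-series features at each level."""
    lengths = []
    t = t_in
    for k, s in zip(kernels, strides):
        t = (t - k) // s + 1  # output size of an unpadded strided convolution
        lengths.append(t)
    return lengths

print(temporal_lengths(25, [5, 5, 5], [1, 2, 2]))  # -> [21, 9, 3]
```

This reproduces the {T1, T2, T3} = {21, 9, 3} quoted in the experiments section.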
In the $m$-th level time-series features $J_m$, the feature $J_m(t_i)$ corresponds to the scene at moment $t_i$. For $J_m(t)$ at each moment, we propose to filter the features using themselves with weight-shared layers, which can be formulated as:

$$K_m(x, t) = \mathcal{K}_m\big[J_m(x, t)\big], \quad t \in \mathcal{T}_m, \tag{6}$$

where $K_m$ and $\mathcal{K}_m$ are the output and the operation of the IntraF module at the $m$-th level, respectively. $\mathcal{K}_m$ is a residual block for filtering the features at each moment. Note that $K_m(x, t_i)$ is filtered from $J_m(x, t_i)$ rather than from the whole $\{J_m(x, t) \mid t \in \mathcal{T}_m\}$. The InterF and IntraF are applied alternately to enhance the time-series features in each hierarchy of the HiST. The collaboration of the InterF and IntraF modules enables us to use context at diverse scales of scope to better restore the scene's brightness. This strategy has been proven to be efficient in video processing tasks such as restoration (Maggioni et al. 2021; Isobe et al. 2020; Xu et al. 2021a; Chan et al. 2022; Liu et al. 2022) and compression (Sullivan et al. 2012; Lainema et al. 2012).

Global Temporal Aggregation

In our network, the goal of the representation is to describe the scene's brightness at the source and target times. In the modules mentioned above, we obtain features at moments $t \in \mathcal{T}_m$ at different levels. To represent the scene at the central time $T_c$ of the input spike sub-stream, we aggregate the information of the features $\{K_m(t) \mid t \in \mathcal{T}_m,\ m \in \{1, 2, 3\}\}$. At each level $m$, we concatenate $\{K_m(t) \mid t \in \mathcal{T}_m\}$ at all the moments in a channel-wise manner and fuse them:

$$A_m(x) = \mathcal{A}_m\big[\mathrm{Cat}\{K_m(x, \tau) \mid \tau \in \mathcal{T}_m\}\big], \tag{7}$$

where $A_m$ and $\mathcal{A}_m$ are the output and the operation of the temporal aggregation, respectively. $\mathcal{A}_m$ denotes convolutional layers and $\mathrm{Cat}$ denotes channel-wise concatenation along different moments. As shown in the bottom of Fig.
4, $A_m$ is upsampled through a deconvolutional layer $\mathcal{U}_m$ to obtain $U_m$:

$$U_m = \mathcal{U}_m\big[\mathrm{Cat}\{A_m, U_{m-1}\}\big], \tag{8}$$

The representation $R$ is obtained based on $U_1$.

Table 1: Comparison on average end-point error (AEPE) and percent of outliers (PO%) with comparable methods on the PHM dataset in the ∆t = 10 and ∆t = 20 tracks (AEPE / PO%). All the methods use spike streams as input and are retrained in the same setting on SPIFT. The best results for each scene and the best average results are marked in bold.

∆t = 10
Architecture | Ball | Cook | Dice | Doll | Fan | Hand | Jump | Poker | Top | Average
SCFlow | 0.51 / 20.3 | 1.34 / 38.6 | 1.10 / 30.7 | 0.22 / 5.6 | 0.24 / 10.7 | 1.30 / 57.3 | 0.11 / 3.0 | 0.80 / 41.1 | 2.14 / 17.7 | 0.863 / 25.00
RAFT | 0.46 / 12.5 | 1.32 / 43.7 | 0.95 / 29.3 | 0.24 / 6.7 | 0.28 / 12.7 | 1.11 / 45.1 | 0.11 / 3.0 | 0.67 / 37.1 | 2.19 / 19.7 | 0.813 / 23.30
GMA | 0.61 / 21.7 | 1.84 / 74.7 | 1.13 / 34.2 | 0.39 / 9.4 | 0.36 / 12.1 | 2.13 / 80.6 | 0.17 / 2.8 | 0.88 / 43.5 | 2.29 / 23.6 | 1.087 / 33.63
Flow1D | 0.79 / 51.4 | 1.28 / 50.8 | 1.15 / 47.9 | 0.27 / 6.3 | 0.28 / 11.0 | 1.86 / 83.1 | 0.13 / 3.4 | 0.85 / 50.1 | 2.19 / 17.7 | 0.979 / 35.76
KPA-Flow | 0.47 / 14.9 | 1.41 / 45.9 | 0.87 / 29.9 | 0.27 / 7.1 | 0.29 / 12.7 | 1.19 / 47.7 | 0.12 / 3.0 | 0.65 / 36.6 | 2.19 / 19.4 | 0.827 / 24.12
GMFlow | 0.76 / 42.4 | 1.29 / 61.0 | 1.54 / 81.7 | 0.31 / 8.4 | 0.43 / 14.1 | 1.83 / 65.0 | 0.30 / 3.7 | 0.95 / 54.2 | 2.29 / 23.3 | 1.077 / 39.33
GMFlowNet | 0.45 / 12.1 | 1.22 / 43.8 | 1.02 / 32.9 | 0.35 / 7.8 | 0.25 / 10.7 | 1.53 / 65.3 | 0.12 / 3.2 | 0.65 / 31.5 | 2.18 / 17.5 | 0.863 / 24.98
CRAFT | 0.61 / 15.0 | 1.28 / 43.5 | 0.93 / 27.6 | 0.19 / 5.0 | 0.25 / 10.2 | 1.67 / 73.3 | 0.10 / 2.6 | 0.56 / 23.1 | 2.15 / 15.1 | 0.860 / 23.94
FlowFormer | 0.52 / 13.5 | 1.48 / 58.7 | 0.98 / 31.0 | 0.25 / 6.7 | 0.29 / 11.5 | 1.82 / 84.5 | 0.14 / 3.6 | 0.94 / 54.9 | 2.22 / 19.5 | 0.959 / 31.54
HiST-SFlow | 0.28 / 7.8 | 0.80 / 27.4 | 0.85 / 23.3 | 0.20 / 5.6 | 0.27 / 12.8 | 0.64 / 21.7 | 0.08 / 2.5 | 0.53 / 23.9 | 2.11 / 14.8 | 0.640 / 15.54

∆t = 20
Architecture | Ball | Cook | Dice | Doll | Fan | Hand | Jump | Poker | Top | Average
SCFlow | 0.94 / 27.1 | 3.00 / 50.6 | 1.72 / 33.2 | 0.41 / 8.1 | 0.46 / 13.6 | 3.71 / 71.3 | 0.19 / 5.9 | 1.57 / 53.7 | 4.25 / 18.9 | 1.804 / 31.37
RAFT | 0.78 / 18.6 | 2.75 / 54.4 | 1.57 / 30.1 | 0.43 / 9.3 | 0.50 / 14.6 | 2.81 / 59.9 | 0.21 / 5.8 | 1.31 / 46.7 | 4.30 / 21.2 | 1.628 / 28.94
GMA | 1.01 / 22.1 | 4.95 / 96.4 | 1.52 / 35.9 | 1.00 / 59.6 | 1.19 / 98.4 | 6.66 / 99.5 | 0.81 / 84.4 | 1.39 / 45.2 | 4.64 / 64.9 | 2.575 / 67.38
Flow1D | 1.19 / 51.6 | 4.52 / 96.3 | 1.58 / 50.7 | 0.78 / 53.3 | 1.01 / 82.1 | 6.65 / 99.2 | 0.72 / 73.1 | 1.39 / 52.3 | 4.75 / 79.7 | 2.510 / 70.90
KPA-Flow | 0.80 / 20.9 | 2.93 / 55.6 | 1.48 / 31.4 | 0.45 / 9.6 | 0.52 / 14.5 | 2.86 / 62.5 | 0.22 / 5.6 | 1.31 / 48.4 | 4.28 / 19.7 | 1.649 / 29.81
GMFlow | 1.49 / 80.3 | 2.64 / 80.1 | 2.72 / 91.8 | 0.54 / 15.3 | 0.77 / 22.0 | 3.79 / 81.5 | 0.55 / 27.8 | 1.78 / 75.3 | 4.45 / 32.5 | 2.080 / 56.28
GMFlowNet | 0.92 / 31.4 | 2.61 / 70.4 | 2.17 / 42.7 | 0.61 / 27.5 | 0.56 / 13.9 | 3.30 / 93.2 | 0.21 / 4.5 | 1.33 / 53.4 | 4.33 / 25.3 | 1.782 / 40.25
CRAFT | 1.16 / 85.5 | 2.68 / 61.0 | 1.99 / 46.8 | 0.39 / 7.8 | 0.48 / 12.5 | 3.53 / 87.1 | 0.20 / 3.6 | 1.23 / 38.9 | 4.31 / 22.0 | 1.775 / 40.57
FlowFormer | 0.91 / 13.8 | 4.41 / 96.3 | 1.40 / 32.6 | 0.80 / 54.8 | 1.03 / 90.0 | 6.54 / 99.3 | 0.74 / 75.8 | 1.47 / 57.4 | 4.59 / 61.9 | 2.432 / 64.67
HiST-SFlow | 0.55 / 8.8 | 2.04 / 33.6 | 1.64 / 26.3 | 0.38 / 7.2 | 0.51 / 13.9 | 2.00 / 34.7 | 0.17 / 5.0 | 1.28 / 33.1 | 4.18 / 15.1 | 1.417 / 19.73

Scene Loss

To ensure that the HiST spike representation contains the scene's brightness information with high fidelity at moment $T_c$, we propose a scene loss $\mathcal{L}_{scene}$. The SPIFT dataset (Hu et al. 2022) offers the brightness ground truth of the scenes based on a graphics simulator. We propose to use a series of simple 3-layer convolutional predictors $\{P_m\}_{m=0}^{3}$ on the representation $R_{T_c}$ and the aggregated feature $A_m$ at each level to predict the brightness $I_{scene}(x, T_c)$ at moment $T_c$. The scene loss can be formulated as:

$$\mathcal{L}_{scene} = \big\|I_{scene}(x, T_c) - P_0(R_{T_c}(x))\big\|_1 + \sum_{m=1}^{3} \lambda_m \big\|\sigma_m(I_{scene}(x, T_c)) - P_m(A_m(x))\big\|_1, \tag{9}$$

where $\sigma_m$ is the resize operator that interpolates $I_{scene}$ to the resolution of $A_m$, and $\lambda_m$ is the weight of each level.
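Returning to the temporal aggregation of Eq. (7): at each level, the features from all moments are concatenated channel-wise and fused into a single map. A minimal numpy sketch, where a single matrix multiplication stands in for the convolutional layers $\mathcal{A}_m$ (an illustrative simplification, not the paper's implementation):

```python
import numpy as np

# Channel-wise concatenation over moments followed by a 1x1-conv-like fusion.
# `fuse_weight` is a stand-in for the learned convolutional layers A_m.

def global_temporal_aggregation(feats, fuse_weight):
    """feats: list of T_m arrays of shape (C, H, W); fuse_weight: (C_out, T_m*C)."""
    cat = np.concatenate(feats, axis=0)            # (T_m * C, H, W)
    tc, h, w = cat.shape
    fused = fuse_weight @ cat.reshape(tc, h * w)   # mix all moments per pixel
    return fused.reshape(-1, h, w)                 # (C_out, H, W)

rng = np.random.default_rng(0)
feats = [rng.normal(size=(4, 8, 8)) for _ in range(3)]  # T_m = 3, C = 4
w = rng.normal(size=(16, 12))                           # C_out = 16
print(global_temporal_aggregation(feats, w).shape)      # (16, 8, 8)
```

The point is only the shape bookkeeping: $T_m$ feature maps of $C$ channels become one $(T_m \cdot C)$-channel map before fusion, matching the $(T_m \times C_m) \times H_m \times W_m$ tensors in Fig. 4.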
Based on the scene loss, $R_{T_c}$ can better focus on the brightness information at moment $T_c$. Note that all the $P_m$ are used only during training and not for inference.

Loss Function

The loss function for the proposed network is composed of two parts: the flow loss and the scene loss. Suppose the recurrent optimizer of the network has $N$ iterations, and the estimated flow fields of the iterations are $\{w_1, \ldots, w_N\}$. The flow loss can be formulated as follows:

$$\mathcal{L}_{flow} = \sum_{i=1}^{N} \gamma^{N-i} \big\|w_i(x) - w_{gt}(x)\big\|_1, \tag{10}$$

where $\gamma$ is the decay factor, which we set as 0.8 following RAFT, and $w_{gt}$ is the ground-truth optical flow. We construct the scene loss for the representations at both the source and target times. Both $\mathcal{L}_{flow}$ and $\mathcal{L}_{scene}$ are spatially averaged for training based on Eq. (10) and Eq. (9), respectively. The total loss can be formulated as follows:

$$\mathcal{L} = \mathcal{L}_{flow} + \lambda(\mathcal{L}^{src}_{scene} + \mathcal{L}^{tgt}_{scene}), \tag{11}$$

where $\lambda$ is set as 0.5.

Experiments

Implementation Details

Model details. In the experiments, we set the input spike frame number as 25 following SCFlow (Hu et al. 2022), i.e., $T^{\mathrm{half}}_s = 12$. The temporal kernel sizes and strides of $\{\mathcal{J}_1, \mathcal{J}_2, \mathcal{J}_3\}$ are $\{5, 5, 5\}$ and $\{1, 2, 2\}$, respectively. Thus, the temporal lengths of $\{J_1, J_2, J_3\}$ are $\{T_1, T_2, T_3\} = \{21, 9, 3\}$. The weights $\{\lambda_1, \lambda_2, \lambda_3\}$ are set as $\{0.5, 0.25, 0.125\}$, respectively. In the correlation volume computation, we set the number of embedding modes as 2. The iteration number of the recurrent optimizer is 12.

Datasets. SPIFT (Hu et al. 2022) is a dataset designed for the training of spike-based optical flow. The scenes of SPIFT are generated with random contents using a graphics-based simulator. PHM (Hu et al. 2022) is a dataset designed for the evaluation of spike-based optical flow. It is also generated through the graphics-based simulator, and its scenes are specially designed with photo-realistic contents and diversified motion. For each of these two datasets, there are two tracks: ∆t = 10 and ∆t = 20.
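The losses of Eqs. (10) and (11) above can be sketched compactly. The values gamma = 0.8 and lambda = 0.5 follow the text; the function names and the scalar stand-ins for the two scene-loss terms are ours:

```python
import numpy as np

# Sketch of the training objective: a decayed sequence loss over the N
# recurrent flow iterations (Eq. 10) plus the weighted scene losses at the
# source and target times (Eq. 11). Scene losses are passed in as scalars
# here purely for illustration.

def flow_loss(flows, flow_gt, gamma=0.8):
    """flows: list of (2, H, W) estimates w_1..w_N; later iterations weigh more."""
    n = len(flows)
    return sum(gamma ** (n - i - 1) * np.abs(f - flow_gt).mean()
               for i, f in enumerate(flows))

def total_loss(flows, flow_gt, scene_src, scene_tgt, lam=0.5):
    return flow_loss(flows, flow_gt) + lam * (scene_src + scene_tgt)
```

With two iterations where only the first prediction is wrong by one pixel everywhere, the flow loss is 0.8 * 1 + 1.0 * 0 = 0.8, illustrating how earlier iterations are down-weighted.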
The ∆t = 10 track means the distance between the central frames of the two moments for optical flow estimation is 10 spike frames; the ∆t = 20 track is defined similarly.

Figure 5: Visual results on the PHM dataset in the ∆t = 10 track. The meaning of each column is given at the top. The performance of each sample is reported below each color-coded flow. The scenes are the gray versions of the ideal scenes in PHM.

Figure 6: Visual results on real-captured data captured by spike cameras (∆t = 10). The scenes are the temporal average of the spikes with a gamma transform. The spikes are the spike frames at the source time of the flow, where a black point means a spike.

Training details. Similar to SCFlow (Hu et al. 2022), we use SPIFT as the training set and PHM as the evaluation set. In the training procedure, we randomly crop the spike stream and the flow ground truth to 320 × 448 in the spatial domain. To balance the motion in the training set, we randomly flip the data horizontally and vertically. Different from previous methods, we mix the "∆t = 10" and "∆t = 20" tracks of data during training. The batch size is set as 6. We use an Adam optimizer (Kingma and Ba 2015) with β1 = 0.9 and β2 = 0.999. The model is trained for 50 epochs. The learning rate is initialized as 1e-4 and scaled by 0.8 every 10 epochs.

Comparable Experiments

We compare the proposed HiST-SFlow with comparable methods on the PHM dataset (Hu et al. 2022) and real-captured data.
The comparable methods can be divided into two parts: (a) a method designed for spike-based optical flow, and (b) methods straightforwardly adapted from optical flow networks for traditional images. Part (a) includes SCFlow (Hu et al. 2022). Part (b) includes adapted RAFT (Teed and Deng 2020), GMA (Jiang et al. 2021a), Flow1D (Xu et al. 2021b), KPA-Flow (Luo et al. 2022), GMFlow (Xu et al. 2022), GMFlowNet (Zhao et al. 2022), CRAFT (Sui et al. 2022), and FlowFormer (Huang et al. 2022b). The adaptation is inherited from the comparable experiments in SCFlow (Hu et al. 2022), i.e., regarding the binary spike sub-stream as a multi-channel image. All the methods use spikes as input and are retrained in the same setting as our method. Note that we do not use the event-based methods (Zhu and Yuan 2018; Lee et al. 2020), since it has been shown in Table 2 of the literature (Hu et al. 2022) that these methods are not appropriate for straightforward adaptation to spike-based optical flow. It is noticeable that we only use the architectures of the image-based optical flow methods: the straightforwardly adapted methods are optical flow networks for spike streams rather than images. Similarly, we do not use image-based datasets, since the input of spike-based optical flow is spike streams. We use the average end-point error (AEPE) and percent of outliers (PO%) as the metrics for quantitative comparison. The AEPE is the spatial mean of the Euclidean distance between the predicted flow $w_{pred}$ and its ground truth $w_{gt}$. We define pixels whose end-point error is simultaneously larger than 0.5 and larger than 5% of the ground-truth magnitude as outlier pixels. The PO% is the percentage of such outlier pixels. For the PHM dataset, we evaluate all the methods on both the ∆t = 10 and ∆t = 20 tracks, where ∆t is the index difference between the target and source times, i.e., ∆t = 10 corresponds to 0.25 ms when the spike camera works at 40 kHz.
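The two metrics above can be sketched directly. The outlier rule (end-point error larger than both 0.5 pixels and 5% of the ground-truth magnitude) mirrors the definition in the text; the function names are ours:

```python
import numpy as np

# AEPE: mean Euclidean distance between predicted and ground-truth flow.
# PO%: percentage of pixels whose end-point error exceeds BOTH 0.5 pixels
# and 5% of the ground-truth flow magnitude.

def aepe(pred, gt):
    """pred, gt: (2, H, W) flow fields."""
    return np.sqrt(((pred - gt) ** 2).sum(axis=0)).mean()

def percent_outliers(pred, gt):
    epe = np.sqrt(((pred - gt) ** 2).sum(axis=0))
    mag = np.sqrt((gt ** 2).sum(axis=0))
    return 100.0 * ((epe > 0.5) & (epe > 0.05 * mag)).mean()
```

For example, if exactly half of the pixels are off by one pixel against a zero-flow ground truth, AEPE is 0.5 and PO% is 50.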
The quantitative results of these two tracks are shown in Table 1. The average in the last column is the arithmetic mean of the metric over all the scenes, which is different from the weighted mean based on the frame counts of the scenes in SCFlow (Hu et al. 2022). As shown in Table 1, the proposed HiST-SFlow outperforms all the comparable methods in most instances. The visualization results on the PHM dataset and real-captured data are shown in Fig. 5 and Fig. 6, respectively. There are two scenes in the real-captured data. The "Mask" includes a dropping board with a mask painting. The "Viaduct" includes a fast-moving view on a viaduct. Note that we use the color-coding scheme of the Middlebury dataset (Baker et al. 2011), which differs from SCFlow (Hu et al. 2022). As shown in these two figures, our HiST-SFlow can better preserve the objects' edges and the motion's consistency compared with the other methods.

Table 2: Ablations for the proposed network modules on the PHM dataset. All the values are the arithmetic mean over the scenes. The best results are marked in bold.

Index | InterF | IntraF | Lscene | ∆t=10 AEPE | ∆t=10 PO% | ∆t=20 AEPE | ∆t=20 PO%
(A) | ✗ | ✗ | ✗ | 0.986 | 33.17 | 2.095 | 56.56
(B) | ✓ | ✗ | ✗ | 0.694 | 18.18 | 1.449 | 21.99
(C) | ✓ | ✓ | ✗ | 0.676 | 17.34 | 1.433 | 22.79
(D) | ✓ | ✗ | ✓ | 0.675 | 16.63 | 1.448 | 21.40
(E) | ✓ | ✓ | ✓ | 0.640 | 15.54 | 1.417 | 19.73

Table 3: Ablations for different representations on the PHM dataset. All the values are the arithmetic mean over the scenes. The best results are marked in bold.

Representation | ∆t=10 AEPE | ∆t=10 PO% | ∆t=20 AEPE | ∆t=20 PO%
Window-Based | 0.868 | 25.72 | 1.757 | 34.19
Interval-Based | 0.880 | 29.77 | 1.824 | 37.91
Multi-Window | 0.799 | 21.10 | 1.703 | 34.58
Flow-Guided Window | 0.696 | 16.99 | 1.533 | 23.36
HiST (Ours) | 0.640 | 15.54 | 1.417 | 19.73

Ablation Studies

Ablations for modules. We implement a series of ablations to examine the proposed modules' effectiveness. The quantitative results are shown in Table 2.
The modules that can be optionally disabled include the InterF, the IntraF, and the scene loss. The existence of the IntraF depends on the InterF, since the IntraF operates on the output of the InterF, i.e., the time-series features. Thus, there are five combinations based on the three options. The comparison between experiments {(A), (B)} shows the effectiveness of the InterF module. Experiments {(B), (C)} and {(D), (E)} show the effectiveness of the IntraF module. Experiments {(B), (D)} and {(C), (E)} demonstrate the effectiveness of the scene loss. In summary, Table 2 shows that all the proposed modules contribute to the final model.

Ablations for Different Representations. Besides the ablations for components, we replace our network's HiST with other spike-based representation schemes. The representations we use are as follows. (1) Window-based representation: Zhu et al. (Zhu et al. 2019) propose using the average along the temporal axis of spike streams to recover the scene's texture. (2) Interval-based representation: Zhu et al. (Zhu et al. 2019) propose to use the interval of the spike firing ∆t(x) as the basis of image reconstruction. (3) Multi-window representation: SSDEFormer (Wang et al. 2022) uses the multi-window temporal average of the spike stream for representation. The window size varies from 1 to T. (4) Flow-guided window: SCFlow (Hu et al. 2022) uses an initialized optical flow to guide the direction of convolution over spike streams. In the training procedure, we use the same recurrent strategy as SCFlow. The quantitative results are shown in Table 3. The HiST outperforms all the comparable schemes on all the metrics.

Table 4: Comparison between the network with and without HiST on different baselines.

Architecture | HiST | ∆t=10 AEPE | ∆t=10 PO% | ∆t=20 AEPE | ∆t=20 PO%
GMA | No | 1.087 | 33.63 | 2.575 | 67.38
GMA | Yes | 0.666 | 16.91 | 1.391 | 21.20
KPA-Flow | No | 0.827 | 24.12 | 1.649 | 29.81
KPA-Flow | Yes | 0.659 | 16.99 | 1.363 | 22.27
GMFlowNet | No | 0.863 | 24.98 | 1.782 | 40.25
GMFlowNet | Yes | 0.730 | 21.22 | 1.452 | 24.93
The flow-guided window also performs well, but its computational procedure is complex, especially during training.

Using HiST for Other Baselines

The main contribution of our HiST-SFlow is a representation module that obtains high-fidelity features. We use a CRAFT-like network as our baseline, but other architectures can also serve as the baseline. We apply our HiST as the spike representation for three other advanced optical flow network architectures, i.e., GMA (Jiang et al. 2021a), KPA-Flow (Luo et al. 2022), and GMFlowNet (Zhao et al. 2022). As shown in Table 4, the HiST improves the performance of all three baselines. Almost all the metrics in the table show a 20%–30% error reduction with the HiST.

Conclusion

We propose a hierarchical spatial-temporal fusion module for spike representation and construct a robust network for spike-based optical flow. We propose an inter-moment progressive fusion module and an intra-moment filtering module to suppress the influence caused by the fluctuations in the spikes. We also design a scene loss to constrain the representation to contain the brightness information of the scene. Experiments demonstrate that our method achieves state-of-the-art performance on spike-based optical flow and can well preserve the edges and motion consistency of the objects.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants 62072009, 22127807, 62071449, and in part by the National Key R&D Program of China under Grant 2021YFF0900501.

References

Baker, S.; Scharstein, D.; Lewis, J.; Roth, S.; Black, M. J.; and Szeliski, R. 2011. A database and evaluation methodology for optical flow. IJCV, 92(1): 1–31. Chan, K. C.; Zhou, S.; Xu, X.; and Loy, C. C. 2022. BasicVSR++: Improving video super-resolution with enhanced propagation and alignment. In CVPR, 5972–5981. Dong, S.; Huang, T.; and Tian, Y. 2017.
Spike Camera and Its Coding Methods. In DCC, 437–437. Dong, Y.; Zhao, J.; Xiong, R.; and Huang, T. 2022. 3D Residual Interpolation for Spike Camera Demosaicing. In ICIP, 1461–1465. Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; and Brox, T. 2015. Flownet: Learning optical flow with convolutional networks. In ICCV, 2758–2766. Han, J.; Zhou, C.; Duan, P.; Tang, Y.; Xu, C.; Xu, C.; Huang, T.; and Shi, B. 2020. Neuromorphic camera guided high dynamic range imaging. In CVPR, 1730–1739. Horn, B. K.; and Schunck, B. G. 1981. Determining optical flow. AI, 17(1-3): 185–203. Hu, L.; Zhao, R.; Ding, Z.; Ma, L.; Shi, B.; Xiong, R.; and Huang, T. 2022. Optical Flow Estimation for Spiking Camera. In CVPR. Huang, J.; Guo, M.; and Chen, S. 2017. A dynamic vision sensor with direct logarithmic output and full-frame pictureon-demand. In ISCAS, 1–4. Huang, T.; Zheng, Y.; Yu, Z.; Chen, R.; Li, Y.; Xiong, R.; Ma, L.; Zhao, J.; Dong, S.; Zhu, L.; et al. 2022a. 1000x Faster Camera and Machine Vision with Ordinary Devices. Engineering. Huang, Z.; Shi, X.; Zhang, C.; Wang, Q.; Cheung, K. C.; Qin, H.; Dai, J.; and Li, H. 2022b. FlowFormer: A Transformer Architecture for Optical Flow. In ECCV. Hui, T.-W.; Tang, X.; and Loy, C. C. 2018. Liteflownet: A lightweight convolutional neural network for optical flow estimation. In CVPR, 8981–8989. Hui, T.-W.; Tang, X.; and Loy, C. C. 2020. A lightweight optical flow CNN—Revisiting data fidelity and regularization. IEEE TPAMI, 43(8): 2555–2569. Hur, J.; and Roth, S. 2019. Iterative residual refinement for joint optical flow and occlusion estimation. In CVPR, 5754– 5763. Isobe, T.; Li, S.; Jia, X.; Yuan, S.; Slabaugh, G.; Xu, C.; Li, Y.-L.; Wang, S.; and Tian, Q. 2020. Video super-resolution with temporal group attention. In CVPR, 8008–8017. Jiang, S.; Campbell, D.; Lu, Y.; Li, H.; and Hartley, R. 2021a. Learning to estimate hidden motions with global motion aggregation. In ICCV, 9772–9781. 
Jiang, S.; Lu, Y.; Li, H.; and Hartley, R. 2021b. Learning optical flow from a few matches. In CVPR, 16592–16600. Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In ICLR. Lainema, J.; Bossen, F.; Han, W.-J.; Min, J.; and Ugur, K. 2012. Intra coding of the HEVC standard. IEEE TCSVT, 22(12): 1792–1801. Lee, C.; Kosta, A. K.; Zhu, A. Z.; Chaney, K.; Daniilidis, K.; and Roy, K. 2020. Spike-FlowNet: event-based optical flow estimation with energy-efficient hybrid neural networks. In ECCV, 366–382. Li, J.; Wang, X.; Zhu, L.; Li, J.; Huang, T.; and Tian, Y. 2022. Retinomorphic Object Detection in Asynchronous Visual Streams. In AAAI, 1332–1340. Li, S.; Sui, X.; Luo, X.; Xu, X.; Liu, Y.; and Goh, R. 2021. Medical image segmentation using squeeze-and-expansion transformers. In IJCAI. Lichtsteiner, P.; Posch, C.; and Delbruck, T. 2008. A 128×128 120 dB 15µ s latency asynchronous temporal contrast vision sensor. IEEE JSSC, 43(2): 566–576. Liu, C.; Yang, H.; Fu, J.; and Qian, X. 2022. Learning Trajectory-Aware Transformer for Video Super-Resolution. In CVPR, 5687–5696. Luo, A.; Yang, F.; Li, X.; and Liu, S. 2022. Learning Optical Flow With Kernel Patch Attention. In CVPR, 8906–8915. Maggioni, M.; Huang, Y.; Li, C.; Xiao, S.; Fu, Z.; and Song, F. 2021. Efficient multi-stage video denoising with recurrent spatio-temporal fusion. In CVPR, 3466–3475. Moeys, D. P.; Corradi, F.; Li, C.; Bamford, S. A.; Longinotti, L.; Voigt, F. F.; Berry, S.; Taverni, G.; Helmchen, F.; and Delbruck, T. 2017. A sensitive dynamic and active pixel vision sensor for color or neural imaging applications. IEEE TBCS, 12(1): 123–136. Ranjan, A.; and Black, M. J. 2017. Optical flow estimation using a spatial pyramid network. In CVPR, 4161–4170. Sui, X.; Li, S.; Geng, X.; Wu, Y.; Xu, X.; Liu, Y.; Goh, R.; and Zhu, H. 2022. CRAFT: Cross-Attentional Flow Transformer for Robust Optical Flow. In CVPR, 17602–17611. Sullivan, G. J.; Ohm, J.-R.; Han, W.-J.; and Wiegand, T. 2012. 
Overview of the high efficiency video coding (HEVC) standard. IEEE TCSVT, 22(12): 1649–1668. Sun, D.; Yang, X.; Liu, M.-Y.; and Kautz, J. 2018. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In CVPR, 8934–8943. Teed, Z.; and Deng, J. 2020. RAFT: Recurrent all-pairs field transforms for optical flow. In ECCV, 402–419. Wang, Y.; Li, J.; Zhu, L.; Xiang, X.; Huang, T.; and Tian, Y. 2022. Learning stereo depth estimation with bio-inspired spike cameras. In ICME, 1–6. Xia, L.; Zhao, J.; Xiong, R.; and Huang, T. 2023. SVFI: spiking-based video frame interpolation for high-speed motion. In AAAI, 2910–2918. Xiang, X.; Zhu, L.; Li, J.; Wang, Y.; Huang, T.; and Tian, Y. 2021. Learning Super-Resolution Reconstruction for High Temporal Resolution Spike Stream. IEEE TCSVT. Xu, G.; Xu, J.; Li, Z.; Wang, L.; Sun, X.; and Cheng, M.-M. 2021a. Temporal modulation network for controllable space-time video super-resolution. In CVPR, 6388–6397. Xu, H.; Yang, J.; Cai, J.; Zhang, J.; and Tong, X. 2021b. High-Resolution Optical Flow from 1D Attention and Correlation. In ICCV, 10498–10507. Xu, H.; Zhang, J.; Cai, J.; Rezatofighi, H.; and Tao, D. 2022. GMFlow: Learning Optical Flow via Global Matching. In CVPR, 8121–8130. Zhang, F.; Woodford, O. J.; Prisacariu, V. A.; and Torr, P. H. 2021. Separable Flow: Learning Motion Cost Volumes for Optical Flow Estimation. In ICCV, 10807–10817. Zhang, J.; Tang, L.; Yu, Z.; Lu, J.; and Huang, T. 2022. Spike Transformer: Monocular Depth Estimation for Spiking Camera. In ECCV. Zhao, J.; Xie, J.; Xiong, R.; Zhang, J.; Yu, Z.; and Huang, T. 2021a. Super Resolve Dynamic Scene from Continuous Spike Streams. In ICCV, 2533–2542. Zhao, J.; Xiong, R.; and Huang, T. 2020. High-speed motion scene reconstruction for spike camera via motion aligned filtering. In ISCAS, 1–5. Zhao, J.; Xiong, R.; Liu, H.; Zhang, J.; and Huang, T. 2021b.
Spk2ImgNet: Learning to Reconstruct Dynamic Scene from Continuous Spike Stream. In CVPR, 11996–12005. Zhao, J.; Xiong, R.; Xie, J.; Shi, B.; Yu, Z.; Gao, W.; and Huang, T. 2021c. Reconstructing Clear Image for HighSpeed Motion Scene With a Retina-Inspired Spike Camera. IEEE TCI, 8: 12–27. Zhao, J.; Xiong, R.; Zhang, J.; Zhao, R.; Liu, H.; and Huang, T. 2023. Learning to super-resolve dynamic scenes for neuromorphic spike camera. In AAAI, 3579–3587. Zhao, S.; Zhao, L.; Zhang, Z.; Zhou, E.; and Metaxas, D. 2022. Global Matching with Overlapping Attention for Optical Flow Estimation. In CVPR, 17592–17601. Zheng, Y.; Zheng, L.; Yu, Z.; Shi, B.; Tian, Y.; and Huang, T. 2021. High-speed Image Reconstruction through Short-term Plasticity for Spiking Cameras. In CVPR, 6358–6367. Zheng, Z.; Nie, N.; Ling, Z.; Xiong, P.; Liu, J.; Wang, H.; and Li, J. 2022. DIP: Deep Inverse Patchmatch for HighResolution Optical Flow. In CVPR, 8925–8934. Zhou, C.; Zhao, H.; Han, J.; Xu, C.; Xu, C.; Huang, T.; and Shi, B. 2020. Unmodnet: Learning to unwrap a modulo image for high dynamic range imaging. In NeurIPS, 1559– 1570. Zhu, A. Z.; and Yuan, L. 2018. EV-FlowNet: SelfSupervised Optical Flow Estimation for Event-based Cameras. In RSS. Zhu, L.; Dong, S.; Huang, T.; and Tian, Y. 2019. A retinainspired sampling method for visual texture reconstruction. In ICME, 1432–1437. Zhu, L.; Dong, S.; Li, J.; Huang, T.; and Tian, Y. 2020. Retina-like visual image reconstruction via spiking neural model. In CVPR, 1438–1446. Zhu, L.; Dong, S.; Li, J.; Huang, T.; and Tian, Y. 2022a. Ultra-high Temporal Resolution Visual Reconstruction from a Fovea-like Spike Camera via Spiking Neuron Model. IEEE TPAMI. Zhu, L.; Li, J.; Wang, X.; Huang, T.; and Tian, Y. 2021. NeuSpike-Net: High Speed Video Reconstruction via BioInspired Neuromorphic Cameras. In ICCV, 2400–2409. Zhu, Y.; Zhang, Y.; Xie, X.; and Huang, T. 2022b. An FPGA Accelerator for High-Speed Moving Objects Detection and Tracking With a Spike Camera. 
Neural Computation, 34(8): 1812–1839.
Towards Fine-Grained HBOE with Rendered Orientation Set and Laplace Smoothing Ruisi Zhao1,2, Mingming Li1, Zheng Yang2, Binbin Lin3, Xiaohui Zhong4, Xiaobo Ren4, Deng Cai1,2, Boxi Wu3 * 1State Key Lab of CAD&CG, Zhejiang University 2FABU Inc 3School of Software Technology, Zhejiang University 4Ningbo Zhoushan Port Group Co.,Ltd., Ningbo, China zhaors00@zju.edu.cn Abstract Human body orientation estimation (HBOE) aims to estimate the orientation of a human body relative to the camera’s frontal view. Despite recent advancements in this field, there still exist limitations in achieving fine-grained results. We identify certain defects and propose corresponding approaches as follows: 1). Existing datasets suffer from nonuniform angle distributions, resulting in sparse image data for certain angles. To provide comprehensive and high-quality data, we introduce RMOS (Rendered Model Orientation Set), a rendered dataset comprising 150K accurately labeled human instances with a wide range of orientations. 2). Directly using one-hot vector as labels may overlook the similarity between angle labels, leading to poor supervision. And converting the predictions from radians to degrees enlarges the regression error. To enhance supervision, we employ Laplace smoothing to vectorize the label, which contains more information. For fine-grained predictions, we adopt weighted Smooth-L1-loss to align predictions with the smoothed-label, thus providing robust supervision. 3). Previous works ignore body-part-specific information, resulting in coarse predictions. By employing local-window self-attention, our model could utilize different body part information for more precise orientation estimations. We validate the effectiveness of our method in the benchmarks with extensive experiments and show that our method outperforms state-of-the-art. Project is available at: https://github.com/Whalesong-zrs/TowardsFine-grained-HBOE. 
Introduction

Human body orientation estimation (HBOE) involves estimating the orientation of a person's skeleton relative to the camera frontal view. It has been applied in various industrial applications, e.g., pedestrian trajectory prediction in autonomous driving and human-robot interactions. For some downstream tasks, body orientation is easier to obtain, provides sufficient information, and demonstrates greater robustness to illumination and occlusions than 3D pose estimation for understanding human behaviors. Though many prior methods performed well in coarse-grained estimation, they encountered challenges in accurately predicting angles. These difficulties can be attributed to three key factors.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Illustration of HBOE. We present real and synthetic data as examples, where ϕ is the angle we estimate. We visualize the angles in the bottom lines.

The primary bottleneck is the quality of datasets. The widely used TUD dataset (Andriluka, Roth, and Schiele 2010), which originally only provided 8-class labels, has a small scale that limits model capability. Additionally, the high-quality MEBOW (Wu et al. 2020) dataset based on COCO (Lin et al. 2014) suffers from a nonuniform angle distribution and scarce image data for certain angles, causing inaccurate predictions. Secondly, many methods treated HBOE as a 6/8-class classification problem (Liu, Liu, and Ma 2017; Yu et al. 2019; Choi, Lee, and Zhang 2016), further contributing to coarse results and ignoring the similarity between classes. Directly regressing values in radians struggles to obtain accurate predictions, as converting radians to degrees enlarges the regression error (Zhou et al.
2022; Burgermeister and Curio 2022). Moreover, people usually judge fine-grained HBOE by observing the shape of core body parts. Unfortunately, previous works tended to overlook this prior of human perception in fine-grained HBOE and adopted shallow model architectures (Raza et al. 2018; Choi, Lee, and Zhang 2016; Zhou et al. 2022; Burgermeister and Curio 2022), resulting in underfitting and reduced performance. To address the limitations of existing datasets, we present RMOS (Rendered Model Orientation Set), a synthetic dataset containing 150K images with detailed annotations. To achieve high-precision labels, we divide 360° into 72 bins, while ensuring that multiple images are rendered for each bin to guarantee uniform angle coverage. To promote diversity, our dataset captures 10 differently dressed models in 48 daily poses, e.g., running, standing with arms out, and yoga poses, from five different viewpoints. We highlight the advantages of incorporating synthetic data during training. Considering the correlation between angle labels, we employ a Laplace smoothing strategy inspired by previous works (Müller, Kornblith, and Hinton 2019; Wu et al. 2020). We assign the highest probability to the label's corresponding bin while ensuring certain probabilities for adjacent bins using a Laplacian kernel. During the training process, we also adopt the weighted Smooth-L1-loss, aligning the predictions with the smoothed labels. Compared to Gaussian smoothing, our method's peak probability is higher, which enhances the distinction of class information. By incorporating this strategy, our method significantly outperforms existing techniques, achieving more precise orientation estimations. Based on observations of human perception for HBOE, we adopt local-window self-attention, which partitions the feature map and individually applies multi-head self-attention to each segment.
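The Laplace label smoothing described above can be sketched as follows. The 72-bin setting comes from the RMOS description; the scale b and the circular-distance handling are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

# Replace the one-hot orientation label over 72 bins (5 degrees per bin) with
# a circular Laplace kernel centered on the ground-truth bin. The scale b is
# an illustrative choice, not a hyperparameter taken from the paper.

def laplace_smooth_label(gt_bin, num_bins=72, b=2.0):
    bins = np.arange(num_bins)
    d = np.abs(bins - gt_bin)
    d = np.minimum(d, num_bins - d)   # circular distance: bin 71 is next to bin 0
    label = np.exp(-d / b)            # Laplacian kernel, peak at the true bin
    return label / label.sum()        # normalize to a probability distribution

lbl = laplace_smooth_label(0)
# the peak sits on the ground-truth bin, and the two adjacent bins (1 and 71)
# receive equal, smaller probability
```

Compared with a Gaussian kernel of similar spread, the Laplacian's sharper peak keeps more mass on the true bin, matching the motivation stated above.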
Furthermore, to tackle the challenges in HBOE, we propose the Orientation Estimating Former (OEFormer) based on HRFormer (Yuan et al. 2021). Compared to the vanilla HRFormer, we employ a deeper network architecture with multiple early-stage branches for more comprehensive feature extraction. Considering the varying resolutions of human images in the original data, effective feature extraction becomes crucial. In the final stage, we integrate feature maps generated from different stages to achieve a more accurate final prediction. Our main contributions in this work are:

1. We introduce a novel rendered dataset that provides high-precision and comprehensive orientation annotations, effectively addressing the gaps in existing datasets.

2. Employing a Laplace smoothing strategy and a weighted Smooth-L1-loss, we enhance the alignment between ground truth and predictions, resulting in more effective training and markedly improved accuracy in orientation estimation.

3. This is the first application of a transformer-based model to HBOE. We present a powerful model, OEFormer, which involves local-window self-attention to focus on body-part-specific information and outperforms the state-of-the-art on existing benchmarks.

Related Works

Body orientation estimation methods. Classical studies in HBOE primarily relied on feature engineering and classifiers like HOG/linSVM (Flohr et al. 2014), limited by dataset quality and scale. Enzweiler et al. (Enzweiler and Gavrila 2010) classified the pedestrian and used a Gaussian mixture model to estimate orientation. Previous deep learning methods also treated this task as a classification problem, with various approaches like 4-layer neural networks (Choi, Lee, and Zhang 2016) and 14-layer convolutional networks (Raza et al. 2018). To obtain fine-grained predictions, the method in (Yu et al. 2019) leveraged keypoint detection results from another 2D pose estimation model as an auxiliary condition.
The TUD multiview pedestrians dataset (Andriluka, Roth, and Schiele 2010) has long served as a key benchmark in HBOE. It was further improved by Hara et al. (Hara and Chellappa 2017), who relabeled it with continuous annotations, facilitating extensive use in early research. With the appearance of MEBOW (Wu et al. 2020), TUD evolved into a test benchmark to evaluate model generalizability. MEBOW, the largest benchmark in this field, offers high-precision annotations and varied backgrounds in real-world settings, containing 130K human instances. PedRecNet (Burgermeister and Curio 2022) utilized this benchmark to address orientation estimation challenges, whereas JOINT-Net (Zhou et al. 2022) first detected human instances and then estimated their orientation.

Synthetic Data for Image Recognition. In computer vision tasks, utilizing synthetic data for augmentation is a widely adopted strategy. Previous works often employed rendered 2D instances or 3D model scene data with graphics engines (Dosovitskiy et al. 2015; Peng et al. 2017; Richter et al. 2016). In HBOE, PedRecNet (Burgermeister and Curio 2022) also utilized synthetic data to increase data diversity; however, it faced certain performance limitations. Generative models have recently gained popularity (Ho, Jain, and Abbeel 2020; Besnier et al. 2020). These methods leverage generated data to solve various vision tasks, including classification (Azizi et al. 2023), semantic segmentation (Zhang et al. 2021), and contrastive learning (Jahanian et al. 2021). However, these models may struggle with accurately capturing human orientation information.

Attention Mechanism. The success of self-attention (Vaswani et al. 2017) has opened new avenues for exploring attention strategies in deep learning. Recently, attention mechanisms have been applied to various visual tasks, e.g., image classification (Hu, Shen, and Sun 2018), object detection (Dai et al. 2017), semantic segmentation (Fu et al.
2019) and pose estimation (Chu et al. 2017). As a pivotal type of attention mechanism, local attention has played a significant role in numerous works. SASA (Ramachandran et al. 2019) suggests that using self-attention to gather global information can be computationally intensive. The authors demonstrate that local attention not only improves computational efficiency but also elevates the quality of results. Concurrent with SASA, LR-Net (Hu et al. 2019) employed local attention for image classification. Similarly, HRFormer (Yuan et al. 2021) utilized local self-attention for various vision tasks, showing its versatility. In this work, we adopt an attention mechanism akin to these previous studies, employing local-window self-attention to specifically focus on different body part information for more accurate estimations.

Figure 2: Distributions of datasets. The x-axis represents the orientation labels and the y-axis represents the corresponding percentage. Our RMOS has a uniform data distribution.

Methodology

In this section, we first introduce the definition of body orientation. Next, we present our approach to creating RMOS. Additionally, we describe the OEFormer design. Finally, we present the Laplace smoothing and weighted Smooth-L1-loss adopted for HBOE.

Definition of Body Orientation

As shown in Fig. 1, we consider the camera’s shooting direction as the positive y-axis direction and the image plane as the z-x plane. The orientation ϕ we aim to determine is the angle between the projection of the chest-facing direction onto the x-y plane and the y-axis.
Given a human pose, the chest-facing direction $\vec{C}$ can be denoted as $\vec{C} = \vec{S} \times \vec{H}$, where $\vec{S}$ is the direction from the left shoulder to the right shoulder, and $\vec{H}$ is the direction from hip to neck. Given the projection $\vec{C}_{xy}$ of $\vec{C}$ and the y-axis direction $\vec{O}_y$, the angle ϕ can be computed as:

$$\cos \phi = \frac{\vec{C}_{xy} \cdot \vec{O}_y}{\|\vec{C}_{xy}\| \, \|\vec{O}_y\|}. \quad (1)$$

The output probability distribution p is obtained by passing image x through model f, followed by a softmax function. The label y is a 72-dimensional one-hot vector, where the element at index i is set to 1. The index i is:

$$i = \mathrm{round}(\phi / 5), \quad \phi \in [0^{\circ}, 360^{\circ}). \quad (2)$$

Dataset Creation

In this work, we propose augmenting training data with synthetic data to enhance the model training process. To facilitate this, we choose Blender¹ for modeling and rendering, for the following reasons: 1) Blender supports Python scripting to control camera movements through commands, significantly simplifying the image capturing process. 2) It allows easy modifications to the model’s shape and appearance, providing diverse transformations to diversify the dataset.

¹https://www.blender.org/

During rendering, we keep the model’s position fixed while rotating the camera around it, capturing images every 5° of rotation to obtain fine-grained labels. For modeling, we manipulate the model’s skeleton to achieve pose variations. To maintain diversity, RMOS includes 10 differently dressed models, each capable of 48 common daily poses. We capture data from five shooting views, e.g., downward shot and overhead shot, which correspond to different shooting perspectives encountered in real-world scenarios. Fig. 2 illustrates the angle distribution of existing datasets. Evidently, RMOS encompasses all angles while containing substantial image data for each one. Furthermore, RMOS benefits from non-overlapping human instances, reducing potential model misinterpretations.

Model Architecture

People often judge orientation based on the appearance of different body parts.
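As a concrete illustration of Eqs. (1)–(2), the sketch below computes the chest-facing direction, the orientation angle, and the 72-bin index from the two skeleton vectors. This is a minimal sketch, not the authors' code; in particular, using `atan2` to recover the full [0°, 360°) range (rather than the arccos of Eq. (1) alone, which only covers [0°, 180°]) is our assumption.

```python
import math

def orientation_bin(S, H, num_bins=72):
    """Compute the body orientation angle phi and its bin index.

    S: left-shoulder -> right-shoulder vector (x, y, z)
    H: hip -> neck vector (x, y, z)
    The camera looks along +y; the image plane is the z-x plane.
    """
    # Chest-facing direction C = S x H (cross product)
    cx = S[1] * H[2] - S[2] * H[1]
    cy = S[2] * H[0] - S[0] * H[2]
    # Project C onto the x-y plane and measure the angle to the y-axis.
    # atan2 recovers the sign, extending Eq. (1) to the full circle.
    phi = math.degrees(math.atan2(cx, cy)) % 360.0
    # Eq. (2): 5-degree bins; angles near 360 wrap back to bin 0.
    return phi, round(phi / 5) % num_bins

phi, idx = orientation_bin((1, 0, 0), (0, 0, 1))  # phi = 180.0, idx = 36
```

For a shoulder vector along +x and a hip-to-neck vector along +z, the chest faces away from the camera, giving 180° and bin 36, which matches the 5°-per-bin layout of the 72 labels.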
Therefore, we enhance attention to different body regions using local-window self-attention. In real-world scenarios, human body instance clarity varies, making precise estimation challenging. To address this, we employ network branches with different resolutions to gather sufficient information, which is then consolidated to obtain a comprehensive result. Following the multi-resolution parallel design (Wang et al. 2020; Yuan et al. 2021), we present our OEFormer architecture in Fig. 3. As (Dai et al. 2021; Xiao et al. 2021) suggested, we utilize convolutional layers in both the stem and the first stage to extract features. Subsequently, transformer blocks with local-window self-attention are employed in later stages. Our architecture comprises various branches with different resolutions in each stage, ranging from high to low. Upsampling and downsampling operations enable information exchange between stages for effective feature extraction and fusion. In contrast to HRFormer, we employ more modules and additional branches in the first stage to enable earlier and more comprehensive feature extraction, aiming to improve overall performance. Following these stages, outputs from different stages are concatenated into residual blocks (He et al. 2016) to obtain the final result.

In local-window self-attention, we divide the feature map $X \in \mathbb{R}^{N \times D}$ into M partitions, where each partition has size K × K. We perform multi-head self-attention in each partition m. In this work, K is set to 7. The local-window self-attention is illustrated in Fig. 4.

Loss Function

In fine-grained HBOE tasks, directly mapping labels to one-hot vectors overlooks similarities between adjacent labels. For example, 355° and 0° are highly similar but have low similarity as one-hot vectors. When the predicted category of the model is close but not equal to the ground truth, the supervision effect of the cross-entropy is poor.
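The window partition just described can be sketched as follows: the feature map is split into non-overlapping K × K windows and attention is computed independently inside each window. This is our illustrative pure-Python version with a single head and identity Q/K/V projections (both simplifying assumptions made for clarity); OEFormer uses learned multi-head projections.

```python
import math

def local_window_attention(x, K):
    """Self-attention applied independently within each K x K window.

    x: H x W grid of D-dim feature vectors (nested lists); K must divide H and W.
    Single head, identity Q/K/V projections -- a simplification for clarity.
    """
    H, W, D = len(x), len(x[0]), len(x[0][0])
    assert H % K == 0 and W % K == 0
    out = [[None] * W for _ in range(H)]
    for wi in range(0, H, K):
        for wj in range(0, W, K):
            # Gather the K*K tokens of this window.
            toks = [x[i][j] for i in range(wi, wi + K)
                            for j in range(wj, wj + K)]
            scale = math.sqrt(D)
            mixed = []
            for q in toks:
                # Scaled dot-product scores against every token in the window.
                scores = [sum(a * b for a, b in zip(q, k)) / scale for k in toks]
                m = max(scores)
                w = [math.exp(s - m) for s in scores]
                z = sum(w)
                attn = [v / z for v in w]  # softmax over the window
                mixed.append([sum(a * k[d] for a, k in zip(attn, toks))
                              for d in range(D)])
            # Scatter the attended tokens back to their positions.
            t = 0
            for i in range(wi, wi + K):
                for j in range(wj, wj + K):
                    out[i][j] = mixed[t]
                    t += 1
    return out
```

Because the attention weights inside each window sum to one, every output token is a convex combination of the tokens in its own window; tokens in different windows never interact, which is what makes this cheaper than global self-attention.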
While regression provides some supervision, it still falls short of high precision. For instance, in cosine regression, as the difference approaches 0.1, the angle error nears 11.5°, leading to coarse predictions.

Figure 3: Illustrating the OEFormer architecture. We train on combined real and synthetic data. Our focused improvements (in yellow) utilize more modules and branches with increased depth to enhance early feature extraction. Predictions are generated by fusing features from different stages and aligned with smoothed labels.

Figure 4: Local-window self-attention for HBOE.

Considering these supervision limitations and inspired by (Müller, Kornblith, and Hinton 2019; Wu et al. 2020), we perform Laplace smoothing to maximize the label peak probability while maintaining certainty for adjacent categories, aligning with human intuition. Here y is a one-hot vector of length 72. The Laplace kernel $K_l$ on $y_i$ is:

$$K_l(y_i) = \frac{1}{2\sigma} e^{-\frac{|y_i|}{\sigma}}. \quad (3)$$

The window size of the Laplace kernel w is set to 9. The smoothed label $\hat{y}$ after Laplace smoothing can be obtained as follows:

$$\hat{y}_i = (y * K_l)(i) = \sum_{j \in S_i} y_j K_l(y_j), \quad (4)$$

$$S_i = \left\{ (i + k) \bmod 72 \;\middle|\; k = \lfloor -w/2 \rfloor, \ldots, \lfloor w/2 + 1 \rfloor \right\}. \quad (5)$$

Given the model output p and the smoothed label $\hat{y}$, our loss function is:

$$L(p, \hat{y}) = \begin{cases} 0.5 \times (p - \hat{y})^2 / \beta & \text{if } |p - \hat{y}| < \beta, \\ |p - \hat{y}| - 0.5 \times \beta & \text{otherwise.} \end{cases} \quad (6)$$

We use this weighted Smooth-L1-loss to align p with $\hat{y}$, where β is the weight. This approach allows for better capturing of angle errors present in the real world and more accurately expresses the model’s confidence in different orientation bins.
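A small sketch of the label smoothing and loss of Eqs. (3)–(6), using the paper's reported hyperparameters (σ = 2, w = 9, β = 0.2). This is our illustrative reading of the equations, not the authors' implementation: we assume a symmetric window of w offsets with circular wrap-around over the 72 bins (so 355° and 0° share mass), and we normalize the smoothed label to sum to 1.

```python
import math

NUM_BINS, SIGMA, WINDOW, BETA = 72, 2.0, 9, 0.2

def laplace_smooth(index):
    """Turn a one-hot bin index into a smoothed 72-dim label (Eqs. 3-5)."""
    y = [0.0] * NUM_BINS
    half = WINDOW // 2
    for k in range(-half, half + 1):
        # Laplace kernel, Eq. (3): (1 / 2*sigma) * exp(-|k| / sigma),
        # placed circularly so adjacent angles across 0/360 share mass.
        y[(index + k) % NUM_BINS] = math.exp(-abs(k) / SIGMA) / (2 * SIGMA)
    total = sum(y)
    return [v / total for v in y]  # normalize to a probability distribution

def smooth_l1(p, y_hat):
    """Weighted Smooth-L1 loss of Eq. (6), summed over bins."""
    loss = 0.0
    for pi, yi in zip(p, y_hat):
        d = abs(pi - yi)
        loss += 0.5 * d * d / BETA if d < BETA else d - 0.5 * BETA
    return loss

label = laplace_smooth(0)  # peak at bin 0; bins 1 and 71 get equal mass
```

The peak stays on the labeled bin while the neighbors receive exponentially decaying probability, which is the property the paper contrasts with Gaussian smoothing.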
Experiments

In this section, we compare the performance of different backbones for HBOE and demonstrate OEFormer’s superiority over other models. Next, we compare various supervised methods, including our weighted Smooth-L1-loss, Wu et al.’s Gaussian mapping loss, cross-entropy, and cosine regression. Additionally, we try to upsample data of rare orientations instead of adding RMOS; however, the results are not as expected. Ablation experiments investigate the impact of different RMOS proportions and of σ in the Laplace kernel on performance. Furthermore, we compare with existing methods and find that incorporating RMOS leads to better results on fine-grained metrics and MAE. Through these experiments, we comprehensively evaluate the performance and stability of our proposed HBOE method.

Backbone | Training | Train set | Test set | Acc.(5°)↑ | Acc.(15°)↑ | Acc.(22.5°)↑ | Acc.(30°)↑ | Acc.(45°)↑ | MAE(°)↓
ResNet50 | From-scratch | MEBOW | MEBOW | 61.1 | 83.9 | 88.5 | 92.3 | 94.5 | 13.11
HRNet | From-scratch | MEBOW | MEBOW | 63.0 | 85.2 | 89.3 | 92.8 | 95.2 | 12.892
HRFormer | From-scratch | MEBOW | MEBOW | 62.4 | 85.5 | 89.2 | 92.7 | 94.9 | 12.473
OEFormer | From-scratch | MEBOW | MEBOW | 63.1 | 85.4 | 89.4 | 93.1 | 95.2 | 12.458
HRNet | From-scratch | RMOS + MEBOW | MEBOW | 63.4 | 85.7 | 89.3 | 92.9 | 95.1 | 12.831
OEFormer | From-scratch | RMOS + MEBOW | MEBOW | 64.4 | 85.8 | 89.3 | 93.2 | 95.1 | 12.135
ResNet50 | Fine-tune | MEBOW | MEBOW | 67.3 | 89.1 | 92.9 | 95.9 | 97.5 | 9.200
HRNet | Fine-tune | MEBOW | MEBOW | 68.4 | 90.8 | 93.6 | 96.5 | 97.9 | 8.479
HRFormer | Fine-tune | MEBOW | MEBOW | 69.9 | 90.6 | 93.9 | 96.6 | 97.7 | 9.344
OEFormer | Fine-tune | MEBOW | MEBOW | 70.6 | 90.5 | 93.6 | 96.5 | 97.8 | 8.400
HRNet | Fine-tune | RMOS + MEBOW | MEBOW | 70.8 | 90.8 | 93.6 | 96.7 | 97.9 | 8.384
OEFormer | Fine-tune | RMOS + MEBOW | MEBOW | 72.1 | 91.0 | 94.0 | 96.6 | 97.9 | 8.129

Table 1: Performance comparison of different backbones in the HBOE task, including different choices of models, training schedules, and training data. ↑ indicates that higher is better; ↓ indicates that lower is better.

Experimental Setup

Datasets. As the largest and most valuable real-scene dataset, the MEBOW dataset contains around 130K training samples and has rich background environments. It will be
used for both training and testing. Additionally, we incorporate the RMOS dataset as supplementary training data and evaluate its value on the MEBOW test set. We do not use any special training techniques and are able to achieve good results by simply mixing RMOS and MEBOW together for training. The data in the TUD dataset has clear and complete human body shapes and provides continuous labels. Due to the relatively small scale of the TUD dataset, following previous methods, we train on MEBOW and test on TUD to assess generality.

Evaluation metrics. As in previous works, we adopt Accuracy-22.5°, Accuracy-45°, and mean absolute error (MAE) as evaluation metrics. Accuracy-X° represents the percentage of predictions within X° of the ground truth, while MAE evaluates overall performance. Following previous work (Wu et al. 2020) and leveraging the precise labels provided by MEBOW and RMOS, we also include Accuracy-5°, Accuracy-15°, and Accuracy-30° in our evaluation analysis.

Training Protocol. Input instances are cropped and resized to 256 × 192 while applying data augmentation techniques including flipping and scaling. For OEFormer training, we use 80 epochs with a batch size of 256 and the AdamW optimizer with an initial learning rate of 1 × 10⁻⁵. We set β to 0.2 and σ to 2.0 for the loss function. For the experiments in Tab. 1, we implement these backbones based on mmpose (Contributors 2020), and all these experiments used the same settings.

Fine-Grained HBOE Performance

We conducted a performance comparison of various commonly used backbones for HBOE tasks, including ResNet and the HRNet used by Wu et al., which achieved promising results. Additionally, we also compared our OEFormer with HRFormer to demonstrate its excellent performance on fine-grained results. As mentioned in Wu et al., using pretrained models based on 2D pose estimation can effectively improve model performance.
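The Accuracy-X° and MAE metrics described above can be computed with a circular angle difference, as sketched below. This is our illustrative implementation; the wrap-around handling at 0°/360° is an assumption consistent with how the orientation labels are defined.

```python
def angular_error(pred, gt):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(pred - gt) % 360.0
    return min(d, 360.0 - d)

def evaluate(preds, gts, thresholds=(5, 15, 22.5, 30, 45)):
    """Return MAE and Accuracy-X for each threshold X (degrees)."""
    errs = [angular_error(p, g) for p, g in zip(preds, gts)]
    mae = sum(errs) / len(errs)
    # Accuracy-X: fraction of predictions within X degrees of ground truth.
    acc = {t: sum(e <= t for e in errs) / len(errs) for t in thresholds}
    return mae, acc

mae, acc = evaluate([0, 350], [10, 10])  # errors are 10 and 20 degrees
```

In this toy call, the 350° prediction is 20° from the 10° ground truth (not 340°), so MAE is 15° and Accuracy-15° is 0.5.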
Therefore, we conducted two groups of experiments: one trained from scratch, and one fine-tuned from a model pretrained for 2D pose estimation. As shown in Tab. 1, using pretrained models yields better results than training from scratch. Regardless of the training approach, our model outperforms the others in effectiveness. Although the HRFormer architecture achieves good accuracy, it falls short in MAE. By utilizing attention mechanisms, our model surpasses Wu et al.’s previous best method, achieving improved fine-grained accuracy and lower MAE. Compared to other transformer models, we also achieve superior results by extracting more features earlier.

We implemented and compared five supervised methods: our weighted Smooth-L1-loss, the Gaussian-mapping loss (Wu et al. 2020), cross-entropy loss, cosine regression loss, and a re-weighted loss. As shown in Tab. 3, direct regression performs reasonably on coarse-grained evaluation metrics. Treating HBOE as a 72-class classification task neglects the similarity between labels. When the model assigns higher probabilities to categories close to the label, it indicates that the model has some capability in assessing angle information. However, using standard cross-entropy loss can still result in large losses, thus introducing training bias. Our loss function takes this into consideration and achieves good overall results. Compared to Wu’s smoothing method, our method ensures higher peak probability values on label classes and achieves better results on fine-grained tasks. We also ran an experiment that assigns higher weights to rare sample categories in the loss function (the re-weighted loss); however, it faces performance limitations.

Ablation Studies

Analysis of RMOS. In this part, we investigate the impact of different proportions of the RMOS dataset on model performance. Specifically, we utilize MEBOW-Net and conduct experiments incorporating the MEBOW dataset for training.
We progressively introduce 20%, 50%, 70%, and 100% of the RMOS data, while keeping the MEBOW data constant. The experimental results are presented in Tab. 4. The ablation studies demonstrate that incorporating RMOS leads to consistent improvements in model performance across all metrics. As we increase the percentage of RMOS data from 20% to 100%, both fine-grained and coarse-grained accuracy steadily improve. This indicates that our synthesized RMOS data complements the real-world data distribution in MEBOW, providing valuable additional training examples that enhance the model’s estimation capabilities. The optimal performance is achieved when utilizing the full RMOS dataset, suggesting it provides comprehensive coverage of body orientations.

Method | Train set | Test set | Acc.(5°)↑ | Acc.(15°)↑ | Acc.(22.5°)↑ | Acc.(30°)↑ | Acc.(45°)↑ | MAE(°)↓
MEBOW-Net (2020) | MEBOW | MEBOW | 68.6 | 90.7 | 93.9 | 96.9 | 98.2 | 8.393
Joint-Net (2022) | MEBOW | MEBOW | 48.3 | 85.2 | 91.0 | 93.2 | 96.5 | 10.526
PedRecNet (2022) | MEBOW | MEBOW | 52.0 | 86.2 | 92.3 | 95.1 | 97.0 | 9.702
ours | MEBOW | MEBOW | 71.1 | 90.5 | 93.6 | 96.5 | 97.8 | 8.356
MEBOW-Net (2020) | RMOS + MEBOW | MEBOW | 70.8 | 91.0 | 93.6 | 96.7 | 97.9 | 8.384
ours | RMOS + MEBOW | MEBOW | 72.1 | 91.0 | 94.0 | 96.6 | 97.9 | 8.129
Hara (2017) | TUD | TUD | – | – | 70.6 | – | 86.1 | 26.6
AKRF-VW (2018) | TUD | TUD | – | – | 68.6 | – | 78 | 34.7
Yu (2019) | TUD | TUD | – | – | 75.7 | – | 96.8 | 15.3
MEBOW-Net (2020) | MEBOW | TUD | 39.5 | 66.7 | 77.3 | 92.2 | 99.0 | 14.191
PedRecNet (2022) | MEBOW | TUD | – | – | 79.6 | – | 99.0 | 13.702
ours | MEBOW | TUD | 41.7 | 72.5 | 83.2 | 95.5 | 99.7 | 12.298

Table 2: Performance comparison of existing methods in the HBOE task. The column Train set specifies the training dataset(s) used to train the models; Test set specifies on which test sets the results are reported. "–" marks values not reported in the original layout.

Supervised method | MAE | Acc.(5°) | Acc.(30°) | Acc.(45°)
Smooth-L1-loss | 8.129 | 72.1 | 96.6 | 97.9
Wu et al. | 8.333 | 71.0 | 96.4 | 97.8
cross-entropy | 21.508 | 44.5 | 83.8 | 88.4
cosine-regression | 16.772 | 31.0 | 87.0 | 92.5
Re-weight loss | 9.332 | 65.3 | 96.1 | 97.4

Table 3: Comparison between different supervised methods.
Our experiments provide insights into the benefits of supplementing high-quality synthetic data for advancing HBOE. We also validated the model’s generalization performance on the real-world dataset TUD. As shown in Tab. 5, the domain gap between synthetic data and real-world data has not affected the model’s performance, and the supplementation of the data distribution has improved the model’s generalization.

Proportion | Acc.(5°) | Acc.(30°) | Acc.(45°) | MAE
0 | 68.4 | 96.5 | 97.9 | 8.479
20% | 69.9 | 96.7 | 97.9 | 8.483
50% | 70.2 | 96.5 | 97.8 | 8.447
70% | 70.4 | 96.5 | 97.8 | 8.407
100% | 70.8 | 96.7 | 97.7 | 8.384

Table 4: Ablation study on the addition of RMOS. The experiment is done on MEBOW-Net.

Dataset | Acc.5° | Acc.15° | Acc.22.5°
w/o RMOS | 39.5 | 66.7 | 77.3
w/ RMOS | 36.6 | 67.7 | 78.0

Table 5: Evaluation on the TUD dataset.

Analysis of σ in the Laplace kernel. The σ value in label smoothing affects the shape of the predicted probability distributions. Smaller σ values concentrate more probability mass on the ground-truth class, resulting in sharper peaks in the distributions. This compels the model to make highly confident predictions focused on the true label. In our experiments, we evaluate models trained with various σ values using fine-grained accuracy metrics that reward correct classification, and coarse-grained metrics that measure generalization capability.

σ | Acc.(5°) | Acc.(22.5°) | Acc.(45°) | MAE
1.0 | 71.0 | 93.6 | 97.2 | 8.308
2.0 | 72.1 | 94.0 | 97.9 | 8.129
3.0 | 71.2 | 93.8 | 97.8 | 8.446
4.0 | 69.7 | 94.0 | 98.0 | 8.408

Table 6: Ablation study on σ.

We find that small σ values like 1.0 yield superior fine-grained performance. Larger σ values of 3.0 and 4.0 enhance coarse-grained performance by diffusing probability, although at the cost of lower fine-grained accuracy. By balancing these factors, we select σ = 2.0, which provides confident predictions while retaining generalizability.
Comparison of Different Methods

In order to comprehensively evaluate the performance of our proposed method, we conducted extensive comparisons against previous state-of-the-art techniques for HBOE. As shown in Tab. 2, we compare our method with MEBOW-Net (Wu et al. 2020), Joint-Net (Zhou et al. 2022), and PedRecNet (Burgermeister and Curio 2022). Without any additional synthetic data, our method was able to surpass all existing state-of-the-art methods on MEBOW in terms of fine-grained accuracy metrics and mean absolute error. This demonstrates the strengths of our approach even when trained solely on real-world data.

Figure 5: HBOE results generated by OEFormer (trained on MEBOW and RMOS, σ = 2); colors in the figure denote the ground truth, our prediction, and Wu et al.'s prediction, respectively. It can be observed that our method is able to accurately determine the orientation of the human body even in cases where the body is occluded or the image resolution is low.

Dataset | Angle | Train data | Test data | Acc.5°
w/o RMOS | 30° | 942 | 45 | 60.0
w/ RMOS | 30° | 942 | 45 | 75.6
w/o RMOS | 205° | 2951 | 117 | 76.1
w/ RMOS | 205° | 2951 | 117 | 83.8

Table 7: Evaluations for some categories in MEBOW.

We then incorporated our novel RMOS synthetic data into the training process of both our method and a strong baseline model, MEBOW-Net. The results clearly validated the benefits of RMOS. With this additional synthetic data, both methods exhibited significant improvements on fine-grained evaluation metrics and achieved lower MAE, highlighting the usefulness of our proposed data augmentation technique. We also test our method's generalizability. As shown in Tab. 7, the integration of RMOS contributes to an increase in the model’s fine-grained discrimination ability, both for categories with limited training data and those with abundant training data. In Fig. 5, we demonstrate the performance comparison between our method and MEBOW-Net.
In these examples, only partial human bodies appear in the images. With our local-window self-attention, our method can make more accurate estimations in such cases. Due to factors like poor illumination and low resolution, the previous method fails to correctly judge the front/back side of the human bodies, making predictions opposite to the labels. In contrast, our method can generate correct predictions. When human bodies make large-scale motions, our method exhibits good robustness.

Figure 6: Heatmaps of different methods.

In Figure 6, we enhance our analysis by overlaying heatmaps onto the original images, thereby visualizing the final feature maps of our model. This technique effectively demonstrates the specific areas within the images that our model focuses on. Notably, the heatmaps reveal a significant concentration of the model’s attention on the core regions of the human body. This pattern of focus is in harmony with the general principles of human perception, where central body parts are often crucial for interpreting posture and actions. The alignment of our model’s focus with these perceptual norms underscores its potential applicability in fields that require an in-depth understanding of human body dynamics.

Conclusion

In this paper, we comprehensively analyzed the underlying factors behind the poor performance of existing methods on fine-grained HBOE tasks. These include the non-uniform distribution of existing datasets, inadequate supervisory approaches, and the relatively simplistic model architectures used in previous methods. Consequently, our primary proposition involves augmenting real data deficits by introducing synthetic data for data augmentation. Furthermore, we employ a label smoothing strategy to transform original angle labels into smoothed vectors, incorporating a weighted Smooth-L1-loss for effective supervision.
Lastly, we adopted a local-window self-attention mechanism, presenting a transformer-based model architecture that yields remarkable performance enhancements.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant Nos: 62273302, 62036009, 61936006, 62273303), and in part by the Yongjiang Talent Introduction Programme (Grant No: 2023A-197-G).

References

Andriluka, M.; Roth, S.; and Schiele, B. 2010. Monocular 3d pose estimation and tracking by detection. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 623–630. IEEE.
Azizi, S.; Kornblith, S.; Saharia, C.; Norouzi, M.; and Fleet, D. J. 2023. Synthetic data from diffusion models improves imagenet classification. arXiv preprint arXiv:2304.08466.
Besnier, V.; Jain, H.; Bursuc, A.; Cord, M.; and Pérez, P. 2020. This dataset does not exist: training models from generated images. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE.
Burgermeister, D.; and Curio, C. 2022. PedRecNet: Multitask deep neural network for full 3D human pose and orientation estimation. In 2022 IEEE Intelligent Vehicles Symposium (IV), 441–448. IEEE.
Choi, J.; Lee, B.-J.; and Zhang, B.-T. 2016. Human body orientation estimation using convolutional neural network. arXiv preprint arXiv:1609.01984.
Chu, X.; Yang, W.; Ouyang, W.; Ma, C.; Yuille, A. L.; and Wang, X. 2017. Multi-context attention for human pose estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1831–1840.
Contributors, M. 2020. OpenMMLab Pose Estimation Toolbox and Benchmark. https://github.com/open-mmlab/mmpose.
Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, 764–773.
Dai, Z.; Liu, H.; Le, Q.
V.; and Tan, M. 2021. Coatnet: Marrying convolution and attention for all data sizes. Advances in neural information processing systems, 34: 3965–3977.
Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; van der Smagt, P.; Cremers, D.; and Brox, T. 2015. FlowNet: Learning Optical Flow With Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Enzweiler, M.; and Gavrila, D. M. 2010. Integrated pedestrian classification and orientation estimation. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 982–989. IEEE.
Flohr, F.; Dumitru-Guzu, M.; Kooij, J. F. P.; and Gavrila, D. M. 2014. Joint probabilistic pedestrian head and body orientation estimation. In 2014 IEEE Intelligent Vehicles Symposium Proceedings, 617–622.
Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; and Lu, H. 2019. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3146–3154.
Hara, K.; and Chellappa, R. 2017. Growing regression tree forests by classification for continuous object pose estimation. International Journal of Computer Vision, 122: 292–312.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33: 6840–6851.
Hu, H.; Zhang, Z.; Xie, Z.; and Lin, S. 2019. Local relation networks for image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3464–3473.
Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7132–7141.
Jahanian, A.; Puig, X.; Tian, Y.; and Isola, P. 2021.
Generative models as a data source for multiview representation learning. arXiv preprint arXiv:2106.05258.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 740–755. Springer.
Liu, P.; Liu, W.; and Ma, H. 2017. Weighted sequence loss based spatial-temporal deep learning framework for human body orientation estimation. In 2017 IEEE International Conference on Multimedia and Expo (ICME), 97–102.
Müller, R.; Kornblith, S.; and Hinton, G. E. 2019. When does label smoothing help? Advances in neural information processing systems, 32.
Peng, X.; Usman, B.; Kaushik, N.; Hoffman, J.; Wang, D.; and Saenko, K. 2017. Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924.
Ramachandran, P.; Parmar, N.; Vaswani, A.; Bello, I.; Levskaya, A.; and Shlens, J. 2019. Stand-alone self-attention in vision models. Advances in neural information processing systems, 32.
Raza, M.; Chen, Z.; Rehman, S.-U.; Wang, P.; and Bao, P. 2018. Appearance based pedestrians’ head pose and body orientation estimation using deep learning. Neurocomputing, 272: 647–659.
Richter, S. R.; Vineet, V.; Roth, S.; and Koltun, V. 2016. Playing for data: Ground truth from computer games. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, 102–118. Springer.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. 2020.
Deep high-resolution representation learning for visual recognition. IEEE transactions on pattern analysis and machine intelligence, 43(10): 3349–3364.
Wu, C.; Chen, Y.; Luo, J.; Su, C.-C.; Dawane, A.; Hanzra, B.; Deng, Z.; Liu, B.; Wang, J. Z.; and Kuo, C.-h. 2020. MEBOW: Monocular estimation of body orientation in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3451–3461.
Xiao, T.; Singh, M.; Mintun, E.; Darrell, T.; Dollár, P.; and Girshick, R. 2021. Early convolutions help transformers see better. Advances in neural information processing systems, 34: 30392–30400.
Yu, D.; Xiong, H.; Xu, Q.; Wang, J.; and Li, K. 2019. Continuous Pedestrian Orientation Estimation using Human Keypoints. In 2019 IEEE International Symposium on Circuits and Systems (ISCAS), 1–5.
Yuan, Y.; Fu, R.; Huang, L.; Lin, W.; Zhang, C.; Chen, X.; and Wang, J. 2021. HRFormer: High-resolution vision transformer for dense prediction. Advances in Neural Information Processing Systems, 34: 7281–7293.
Zhang, Y.; Ling, H.; Gao, J.; Yin, K.; Lafleche, J.-F.; Barriuso, A.; Torralba, A.; and Fidler, S. 2021. Datasetgan: Efficient labeled data factory with minimal human effort. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10145–10155.
Zhou, H.; Jiang, F.; Si, J.; and Lu, H. 2022. Joint Multi-Person Body Detection and Orientation Estimation via One Unified Embedding. arXiv preprint arXiv:2210.15586.
No Head Left Behind - Multi-Head Alignment Distillation for Transformers

Tianyang Zhao1,2*, Kunwar Yashraj Singh1†, Srikar Appalaraju1‡, Peng Tang1, Vijay Mahadevan1, R. Manmatha1, Ying Nian Wu1,2
1AWS AI Labs, 2University of California, Los Angeles
tyzhao@ucla.edu, {sinkunwa, srikara, tangpen, vmahad, manmatha, wunyin}@amazon.com

Abstract
Knowledge distillation aims at reducing model size without compromising much performance. Recent work has applied it to large vision-language (VL) Transformers, and has shown that the attention maps in the multi-head attention modules of vision-language Transformers contain extensive intra-modal and cross-modal co-reference relations to be distilled. The standard approach is to apply a one-to-one attention map distillation loss, i.e. the Teacher's first attention head instructs the Student's first head, the second teaches the second, and so forth, but this only works when the numbers of attention heads in the Teacher and Student are the same. To remove this constraint, we propose a new Attention Map Alignment Distillation (AMAD) method for Transformers with multi-head attention, which works for a Teacher and a Student with different numbers of attention heads. Specifically, we soft-align different heads in the Teacher and Student attention maps using a cosine similarity weighting. The Teacher head contributes more to the Student heads for which it has a higher similarity weight. Each Teacher head contributes to all the Student heads by minimizing the divergence between the attention activation distributions for the soft-aligned heads. No head is left behind. This distillation approach operates like cross-attention. We experiment on distilling VL-T5 and BLIP, and apply the AMAD loss on their T5, BERT, and ViT sub-modules. We show, under the vision-language setting, that AMAD outperforms conventional distillation methods on the VQA-2.0, COCO captioning, and Multi30K translation datasets.
We further show that even without VL pre-training, the distilled VL-T5 models outperform corresponding VL pre-trained VL-T5 models that are further fine-tuned with ground-truth signals, and that fine-tuning distillation can also compensate to some degree for the absence of VL pre-training for BLIP models.

*Work conducted during an internship at Amazon. †Corresponding Author. ‡Corresponding Author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction
Recently, large pre-trained Transformer-based (Vaswani et al. 2017) models, such as BERT (Devlin et al. 2019), T5 (Raffel et al. 2020), and GPT (Radford et al. 2018), have shown great capabilities for language modeling. Researchers have further extended these language Transformers to multi-modal Transformers for visual-linguistic tasks, e.g. VL-BERT (Su et al. 2019), VL-T5 (Cho et al. 2021), Oscar (Li et al. 2020b), BLIP (Li et al. 2022b, 2023), OFA (Wang et al. 2022a), Flamingo (Alayrac et al. 2022), Florence (Yuan et al. 2021), and PALI (Chen et al. 2022c). These large vision-language (VL) models exhibit promising performance on a variety of visual-linguistic tasks, including visual question answering (VQA), image captioning, visual grounding, and image-text matching. Increasing model size (BERT-L (340M), OFA (930M), T5 (11B), GPT-3 (175B) (Brown et al. 2020)) leads to better performance, but also increases memory consumption during deployment and leads to large increases in inference latency. To alleviate this problem, researchers (Jiao et al. 2020; Sun et al. 2020; Wang et al. 2020b; Fang et al. 2021; Sanh et al. 2019) have applied knowledge distillation (KD) (Hinton, Vinyals, and Dean 2015) approaches to large Transformers, aiming at compressing these large models into smaller ones without compromising much performance. In general, KD involves a large, trained, and frozen Teacher network, and a small Student network to be trained.
The goal is to distill knowledge from the larger Teacher into the smaller Student, bridging the performance gap caused by the difference in model size. In the distillation process, the Student learns to mimic the soft response and the latent representation of the Teacher. Specifically, this may involve minimizing the divergence between the Student's and the Teacher's output classification logits, and the divergence between their intermediate representations. When distilling Transformers, the attention maps are particularly important intermediate representations to be transferred. (Cao et al. 2020) show that certain attention matrices of pre-trained vision-language Transformers contain extensive intra- and cross-modal co-reference relations. (Fang et al. 2021) further show that minimizing the divergence between these attention maps of Teacher and Student can boost distillation performance. However, conventional attention map distillation methods for multi-head attention modules, whether for language (Jiao et al. 2020; Sun et al. 2020; Wang et al. 2020b; Sanh et al. 2019) or VL (Fang et al. 2021) Transformers, directly minimize the divergence between the attention maps of Teacher and Student for each of their heads in a one-to-one fashion, i.e. the Teacher's first attention head instructs the Student's first head, the second teaches the second, and so forth, as in the right side of Figure 1. Hence, these methods can only be applied when the Teacher and Student have an equal number of heads, and do not generalize to the more common case where the large Transformer and the small Transformer have different numbers of attention heads: if the Teacher has more heads, its extra heads have to be discarded in the distilling process.
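As a concrete illustration of this constraint (our own NumPy sketch, not code from the paper; all names are ours): one-to-one attention map distillation is only defined for equal head counts, so a 12-head Teacher can supervise an 8-head Student only after dropping four Teacher heads.

```python
import numpy as np

# Toy attention maps with shape (heads, queries, keys); shapes are illustrative only.
teacher_attn = np.random.rand(12, 16, 16)  # 12-head Teacher
student_attn = np.random.rand(8, 16, 16)   # 8-head Student

def one_to_one_loss(t, s):
    # One-to-one distillation pairs Teacher head i with Student head i,
    # so the loss is only defined when the shapes (head counts) match.
    assert t.shape == s.shape, "one-to-one distillation needs equal head counts"
    return ((t - s) ** 2).sum()

# Direct pairing fails for 12 vs 8 heads; truncating the Teacher makes the
# loss computable but discards the knowledge in the last 4 Teacher heads.
truncated_loss = one_to_one_loss(teacher_attn[:8], student_attn)
```

This forced truncation is exactly what the soft alignment introduced next is designed to avoid.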
This motivates the design of our approach, Attention Map Alignment Distillation (AMAD), which removes the same-number-of-heads restriction. In brief, AMAD soft-aligns different heads in the Teacher and Student attention maps using cosine similarity. Each Teacher head teaches all the Student heads, contributing more to the Student heads with which it has higher weights (higher cosine similarities). The Teacher teaches the Student heads by minimizing the divergence between the attention activation distributions of the soft-aligned heads. We may view AMAD as operating like a cross-attention itself. One intuition behind this design is that, unlike embeddings, for which each vision/language token shares the same order in both Teacher and Student, attention heads do not have a semantic order. (Cao et al. 2020) found that different subsets of attention heads in VL Transformers may encode different co-reference knowledge; e.g. a subset of heads may evolve to pivot on cross-modal interaction between image and text regimes. Therefore, even when Teacher and Student have the same number of attention heads, we still cannot assume that the Teacher's and the Student's heads are aligned semantically without reordering. Conventional attention map distillation methods force Student heads to have exactly the same order as the Teacher's, while AMAD allows similarity-alignment-based distillation that is free of head order: each Teacher head teaches all the Student heads, with contributions proportional to the similarity weights between them. We conduct experiments on distilling VL-T5 (Cho et al. 2021) base to small, and on distilling BLIP (Li et al. 2022b) large to base. The AMAD loss is applied on all their Transformer sub-modules, including T5-Encoder + Decoder (Raffel et al. 2020) for VL-T5, and Vision Transformer (ViT) (Dosovitskiy et al. 2021) + BERT (Devlin et al. 2019) for BLIP. We evaluate on the VQA-2.0 (Goyal et al. 2019), COCO Captioning (Chen et al. 2015), and Multi30K (Elliott et al.
2016) datasets. We show that AMAD boosts performance. Our contributions include:
• We propose Attention Map Alignment Distillation (AMAD) to distill attention maps from a Teacher Transformer to a Student Transformer with different numbers of attention heads. AMAD uses a soft-alignment approach so that each Teacher head teaches all the Student heads, but in proportion to how similar the Student head is to the Teacher head. We show, under the vision-language setting, that AMAD narrows the performance gap between the large Teacher and the small Student. With AMAD, researchers are set free from the same-number-of-heads restriction and have more choices over Transformers for distillation.
• We show that even without VL pre-training, distilled VL-T5 models outperform VL pre-trained VL-T5 models of the same size that are further fine-tuned with ground-truth data.
• We conduct extensive experiments on distilling VL models, which contributes to this relatively under-explored field in the current literature.

Related Work
Vision-Language Pre-training (VLP). Recently, large pre-trained Transformers (Liu et al. 2019; Lan et al. 2020; Clark et al. 2020; Yang et al. 2019; Ho et al. 2022; Li et al. 2022a; Appalaraju et al. 2021) have started to show improved capability in a variety of language modeling tasks (Zellers et al. 2018; Wang et al. 2018; Williams, Nangia, and Bowman 2017). Researchers have further extended these models to large image-text and video-text pre-training multi-modal models (Lu et al. 2019; Chen et al. 2020; Zhou et al. 2020; Li et al. 2020a; Cho et al. 2020; Zhang et al. 2021; Sun et al. 2019; Zhu and Yang 2020; Miech et al. 2020; Radford et al. 2021; Appalaraju et al. 2024) for visual-linguistic tasks (Goyal et al. 2019; Hudson and Manning 2019; Lei et al. 2018; Mao et al. 2016; Xu et al. 2016; Zhou, Xu, and Corso 2018). These pre-trained models outperform previous approaches (Yu et al. 2018; Yu, Kim, and Kim 2018; Kim, Jun, and Zhang 2018; Anderson et al. 2018; Liu et al. 2020).
As their model sizes grow rapidly, recent works have also explored parameter-efficient learning and model compression methods, including adapters (Sung, Cho, and Bansal 2022; Houlsby et al. 2019; Rebuffi, Bilen, and Vedaldi 2018, 2017) and prompt tuning (Gu et al. 2021b; Lester, Al-Rfou, and Constant 2021; Li and Liang 2021).

Knowledge Distillation. Knowledge distillation (KD) (Hinton, Vinyals, and Dean 2015) transfers knowledge from a stronger Teacher network (T) to a Student network (S) by minimizing the divergence of their soft responses and intermediate features (Gou et al. 2021). Compared with recent approaches to distillation in vision (Zagoruyko and Komodakis 2017; Peng et al. 2019; Tung and Mori 2019; Yang et al. 2022; Chen et al. 2022b; Wu et al. 2022b; Andonian, Chen, and Hamid 2022; He et al. 2022; Wu et al. 2022a) or language tasks (Wang et al. 2020b; Li et al. 2022c; Wang et al. 2021; Ding et al. 2023), distilling VL models is a relatively under-explored field, as pointed out by the review paper of (Chen et al. 2022a). (Fang et al. 2021) claim to be the first to distill vision-language Transformers, and MAD (Wang et al. 2022b) claims to be the first to use multi-modal distillation for VL models. These two papers both focus on distilling Encoder-only VL Transformers. (Gu et al. 2021a; Ma et al. 2022) also involve distilling knowledge from visual and linguistic domains, but their architectures are based upon Mask R-CNN (He et al. 2017) and ResNet (He et al. 2016), and apply to the visual task of object detection. Specifically, for distilling large Transformers into smaller ones, recent work in language distillation (Jiao et al. 2020; Sun et al. 2020; Wang et al. 2020b; Sanh et al. 2019), vision distillation (Qu et al. 2022), and VL distillation (Fang et al. 2021) all show that applying attention map distillation to transfer the rich co-reference relations to the Student can boost performance.
These approaches apply attention map distillation either on all attention layers or only on the last self-attention and the last cross-attention layer. Specifically, for a given multi-head attention layer, most of these approaches (Fang et al. 2021; Jiao et al. 2020; Sun et al. 2020; Wang et al. 2020b; Sanh et al. 2019) minimize the sum of divergences between the attention matrices of each head of Teacher and Student. However, this formulation only applies when the Teacher and Student have the same number of attention heads. (Qu et al. 2022) minimizes the divergence between the mean attention matrices over all the heads of Teacher and Student; however, as pointed out by (Cao et al. 2020), different attention heads encode different co-reference knowledge, hence applying a mean reduction may result in knowledge loss. Different from most approaches, which distill intermediate features in a one-to-one fashion, (Lin et al. 2022) propose a one-to-all spatial matching strategy for distilling ConvNet feature maps, allowing each pixel of the Teacher feature to be distilled to all spatial locations in the Student by similarity mapping; (Ji, Heo, and Park 2021) propose to learn to match Teacher and Student feature maps in different ResNet layers. Our design is also inspired by these works.

Attention Map Alignment Distillation
In this section, we introduce our proposed Attention Map Alignment Distillation (AMAD) method. We use plain lowercase letters $x$ for scalars, bold lowercase letters $\mathbf{x}$ for vectors, and bold uppercase letters $\mathbf{X}$ for matrices. In a multi-head attention layer of a Transformer (Vaswani et al. 2017), each entry of the attention matrix for a head is a dot product of query and key vectors followed by softmax normalization (Bahdanau, Cho, and Bengio 2014).
In matrix form, for each head $h$, if we denote the number of query vectors as $q$, the number of key vectors as $k$, and the attention matrix as $A_h \in \mathbb{R}^{q \times k}$, then we have

$$A_h = \mathrm{softmax}\left(Q_h K_h^{T} / \sqrt{d_k}\right) \quad (1)$$

where $Q_h$ and $K_h$ are the query and key matrices of head $h$, and $d_k$ is the dimension of the key as a scaling factor. For each training data sample (not batched), the attention maps of all heads for a given $H$-head multi-head attention layer form a tensor $[A_1, A_2, \ldots, A_H] \in \mathbb{R}^{H \times q \times k}$.

AMAD aims at distilling the attention maps of the $H_t$ heads in the Teacher to those of the $H_s \le H_t$ heads in the Student. For representational simplicity, let $n = q \cdot k$, and let $\mathbf{t}_i \in \mathbb{R}^{n}$ denote the column vector representing the flattened attention map $A_i \in \mathbb{R}^{q \times k}$ of Teacher head $i$. Let $\mathbf{s}_j \in \mathbb{R}^{n}$ denote the column vector representing the flattened attention map $A_j \in \mathbb{R}^{q \times k}$ of Student head $j$. For a given Teacher head $\mathbf{t}_i$, compute its cosine similarity $w_{ij}$ with each Student head $\mathbf{s}_j$ as in Equation 2:

$$w_{ij} = \mathbf{t}_i \cdot \mathbf{s}_j / \left(\|\mathbf{t}_i\|_2 \cdot \|\mathbf{s}_j\|_2\right) \quad (2)$$

$$a_{ij} = \frac{\exp(w_{ij})}{\sum_{m=1}^{H_s} \exp(w_{im})} \quad (3)$$

Then, as in Equation 3 above, for the given Teacher head, compute its distilling contribution $a_{ij}$ to each of the Student heads $j \in \{1, 2, \cdots, H_s\}$ by applying a softmax non-linearity on the similarity weights $w_{ij}$.

Figure 1: An illustration of AMAD in a toy case, corresponding to Equation 7, where the Teacher has $H_t = 3$ heads ($\mathbf{t}_1, \mathbf{t}_2, \mathbf{t}_3$) and the Student has $H_s = 2$ heads ($\mathbf{s}_1, \mathbf{s}_2$), all with self-attention maps of dimension $n = q \times k = 3 \times 3$. Different matrix entry colors denote different attention values. On the left, AMAD uses soft alignment: each Teacher head attention map teaches all the Student heads, but in proportion to how close the Student head is to the Teacher head.
As in the coloring of the matrices, Teacher heads 1 and 2 are similar to each other, and are relatively similar to Student head 2, while Teacher head 3 is similar to Student head 1. In this case, with AMAD, Teacher heads 1 and 2 instruct Student head 2 more (larger $a_{12}$, $a_{22}$, and wider arrows) and instruct Student head 1 less (smaller $a_{11}$, $a_{21}$, and narrower arrows), and Teacher head 3 mainly instructs Student head 1. Also note that the knowledge in the two similar Teacher heads $\mathbf{t}_1$ and $\mathbf{t}_2$ can be compressed mostly into a single Student head $\mathbf{s}_2$. On the right, the conventional attention map distillation method does not apply when the numbers of heads differ between Teacher and Student: Teacher head 3 has to be discarded in distilling.

Next, we minimize the mean squared error between the given normalized Teacher head attention map $\mathbf{t}_i$ and the weighted sum of soft-aligned Student head attention maps. For each attention head $i$ of the Teacher,

$$\mathcal{L}_{\mathrm{AMAD}_i} = \left\| \frac{\mathbf{t}_i}{\|\mathbf{t}_i\|_2} - \sum_{j=1}^{H_s} a_{ij} \cdot \frac{\mathbf{s}_j}{\|\mathbf{s}_j\|_2} \right\|_2^2 \quad (4)$$

The total loss $\mathcal{L}_{\mathrm{AMAD}}$ is the summation of $\mathcal{L}_{\mathrm{AMAD}_i}$ over all the heads of the Teacher,

$$\mathcal{L}_{\mathrm{AMAD}} = \sum_{i=1}^{H_t} \mathcal{L}_{\mathrm{AMAD}_i} \quad (5)$$

Now we rewrite the above formulas using a matrix formulation to parallelize computation. For each training data sample (not batched), recall that $n = q \cdot k$; let matrix $\mathbf{T} \in \mathbb{R}^{H_t \times n}$ represent the Teacher attention maps of all its heads, whose $i$-th row vector is the normalized flattened attention map
Computation with respect to the i-th row of the matrix formulation corresponds to the operations regarding the i-th Teacher head as in Equation 4. For instance, in the case as in Figure 1, we have, softmax(TST ) = (aij)Ht×Hs =   0.1 0.9 0.2 0.8 0.9 0.1   (7) The above formulas focus on each training data sample (not batched); For implementation, we use batched tensor computations via PyTorch, and we L2-normalize each row of the weighted sum softmax(TST )S before calculating the mean squared error. Code is provided in the Appendix. In contrast to previous attention map distillation methods directly minimizing ∥T−S∥2 2 or KL(T∥S), requiring T and S of the same shape (Fang et al. 2021; Jiao et al. 2020; Sun et al. 2020; Wang et al. 2020b), AMAD removes the limitation of requiring Teacher and Student to have the same number of attention heads, by letting each Teacher head teaches all the Student heads with the contribution being more for the Student heads with which it has a higher weight (higher cosine similarity), and supports flexible and smooth distillation because of the soft semantic alignment mechanism. Formulation Variants We refer to the above formulation as Variant 1 and the corresponding loss as LAMAD-1, and we also explore the following ablative baselines and variants: Baseline: One-to-one Distillation. In this baseline, following (Fang et al. 2021; Jiao et al. 2020; Sun et al. 2020; Wang et al. 2020b), we distill the attention maps in a one-to-one fashion. Note that different from previous work, we have more heads in the Teacher than in Student, Ht ≥Hs, so we distill the first Hs heads in Teacher to the Hs heads in Student, respectively, the extra Ht −Hs Teacher heads are ignored during distillation, as in the right part of Figure 1: LKD-ATT = ∥S −T[: Hs, :]∥2 2 (8) Variant 2: KL Divergence. 
In this variant, we minimize the sum of Kullback–Leibler divergences between the aligned weighted sum of Student multi-head attention map distributions and the Teacher distributions:

$$\mathcal{L}_{\text{AMAD-2}} = \mathrm{KL}\left(\mathbf{T} \,\|\, \mathrm{softmax}(\mathbf{T}\mathbf{S}^{T})\,\mathbf{S}\right) \quad (9)$$

where the Teacher $\mathbf{T}$ and the Student $\mathbf{S}$ are L1-normalized by each row in all KL variants, instead of L2-normalized. Both input and target contain $q \cdot H_t$ distributions, and $\mathrm{KL}(\cdot)$ is computed for each of these $q \cdot H_t$ distributions and then summed up.

Variant 3: Parameterized Projection. We borrow the idea from attention mechanisms to apply a learnable linear projection $\mathbf{W}$ to each flattened attention map vector $\mathbf{s}_j$ of Student head $j$ before computing similarity and alignment: $\mathbf{W}\mathbf{s}_j + \mathbf{b}$. In matrix form:

$$\tilde{\mathbf{S}} = \mathrm{ReLU}(\mathbf{S}\mathbf{W}^{T} + \mathbf{b}) \quad (10)$$

$$\mathcal{L}_{\text{AMAD-3}} = \mathrm{KL}\left(\mathbf{T} \,\|\, \mathrm{softmax}(\mathbf{T}\tilde{\mathbf{S}}^{T})\,\tilde{\mathbf{S}}\right) \quad (11)$$

where $\mathbf{W} \in \mathbb{R}^{n \times n}$ is a learnable matrix and $\mathbf{b}$ is the bias.

Variant 4: Token-level Alignment. Here, we adopt a finer token-level granularity of alignment and allow the soft alignment weights $w_{ij}$ to be independent for different query attention vectors in Teacher/Student heads. Formally, denote $\mathbf{T}_l \in \mathbb{R}^{H_t \times k}$ as the matrix whose $h$-th row vector is the $l$-th row vector $\mathbf{t}_{h,l}^{T} \in \mathbb{R}^{k}$ of the Teacher's $h$-th head attention map $A_h$, and $\mathbf{S}_l \in \mathbb{R}^{H_s \times k}$ as the matrix whose $h$-th row vector is $\mathbf{s}_{h,l}^{T} \in \mathbb{R}^{k}$. We have:

$$\mathcal{L}_{\text{AMAD-4}} = \sum_{l=1}^{q} \mathrm{KL}\left(\mathbf{T}_l \,\|\, \mathrm{softmax}(\mathbf{T}_l\mathbf{S}_l^{T})\,\mathbf{S}_l\right) \quad (12)$$

where $\mathrm{KL}(\cdot)$ is computed for each of the $H_t$ distributions and then summed up. In this variant, the weight matrices $\mathrm{softmax}(\mathbf{T}_l\mathbf{S}_l^{T})$ differ for each $l$-th query attention vector, in contrast to the single unified weight matrix $\mathrm{softmax}(\mathbf{T}\mathbf{S}^{T})$ shared by all queries in the previous variants. We report ablation results for each variant in Table 8. More theoretical analysis is in the Appendix.

Experimental Setup
We distill VL-T5 base to small, and BLIP large to base. Table 1 summarizes their architectural backbones. The visualized architecture and distilling pipeline are in the Appendix.
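As a concrete companion to the loss formulation above, the matrix form of Eq. 6 (Variant 1) can be sketched as follows. This is our own NumPy re-implementation for illustration, with our own tensor names; the paper's official batched PyTorch code is in its appendix, and for clarity this sketch omits the extra L2-normalization of the weighted sum mentioned in the implementation note.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def amad_loss(teacher_attn, student_attn):
    """AMAD Variant 1 (Eq. 6) for a single sample.

    teacher_attn: Teacher attention maps, shape (H_t, q, k)
    student_attn: Student attention maps, shape (H_s, q, k); H_s may differ from H_t.
    """
    Ht, q, k = teacher_attn.shape
    Hs = student_attn.shape[0]
    # Flatten each head's q x k map to a row vector of length n = q * k,
    # then L2-normalize rows so that T @ S.T gives cosine similarities (Eq. 2).
    T = teacher_attn.reshape(Ht, q * k)
    S = student_attn.reshape(Hs, q * k)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    S = S / np.linalg.norm(S, axis=1, keepdims=True)
    # Row-wise softmax over the (H_t x H_s) similarity matrix (Eq. 3):
    # each Teacher head spreads its teaching over all Student heads.
    A = softmax(T @ S.T, axis=1)
    # Squared error between Teacher rows and their soft-aligned Student mixtures.
    return ((T - A @ S) ** 2).sum()

# The loss is defined even for mismatched head counts, e.g. 3 Teacher vs 2 Student heads.
rng = np.random.default_rng(0)
loss = amad_loss(rng.random((3, 4, 4)), rng.random((2, 4, 4)))
```

Note how no head is dropped: the alignment matrix `A` plays the role of the $a_{ij}$ weights in Figure 1.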
Knowledge Distillation (KD)
For training efficiency, we only apply distillation during downstream fine-tuning; no distillation is involved in pre-training. As in previous work (Hinton, Vinyals, and Dean 2015; Fang et al. 2021; Jiao et al. 2020; Sanh et al. 2019), we apply a classification distillation loss on the classifier output logits. For VQA with a single classifier head,

$$\mathcal{L}_{\mathrm{KD}} = \mathrm{CE}(\mathbf{z}^{S}/\tau_d, \mathbf{z}^{T}/\tau_d) \quad (13)$$

where $\tau_d$ denotes the distillation temperature (Hinton, Vinyals, and Dean 2015), which we simply set to 1, as in (Fang et al. 2021). $\mathbf{z}^{S}$ and $\mathbf{z}^{T}$ refer to the logits from the Student and Teacher classifiers. $\mathrm{CE}$ denotes the soft cross entropy, i.e. $p_i = \frac{\exp(z_i/\tau_d)}{\sum_k \exp(z_k/\tau_d)}$ and $\mathcal{L}_{\mathrm{KD}} = \sum_i p_i^{T} \cdot \log(p_i^{S})$. For auto-regressive captioning and translation tasks, the Teacher and the Student both take the ground-truth answer token sequence as input in Teacher-Forcing style to maintain consistency for distillation (Beyer et al. 2022),

$$\mathcal{L}_{\mathrm{KD}} = \sum_{j=1}^{|y|} \mathrm{CE}(\mathbf{z}_j^{S}/\tau_d, \mathbf{z}_j^{T}/\tau_d) \quad (14)$$

where $\mathbf{z}_j^{S}$ and $\mathbf{z}_j^{T}$ denote the logits for the $j$-th output token from the Student and Teacher classifiers, and $|y|$ denotes the length of the sequence.

| Model | #Learnable Params | Vision Backbone | Layers | d_model | Heads | Language (Multi-modal) Backbone | Layers | d_model | Heads |
|---|---|---|---|---|---|---|---|---|---|
| Teacher: VL-T5 base | 220M | Faster R-CNN (frozen) | – | – | – | T5 | 12+12 | 768 | 12 |
| Student: VL-T5 small | 60M | Faster R-CNN (frozen) | – | – | – | T5 w/ $\mathcal{L}_{\mathrm{AMAD}}$ | 6+6 | 512 | 8 |
| Teacher: BLIP large | 446M | ViT | 24 | 1024 | 16 | BERT | 12 | 768 | 12 |
| Student: BLIP base | 210M | ViT w/ $\mathcal{L}_{\mathrm{AMAD}}$ | 12 | 768 | 12 | BERT w/ $\mathcal{L}_{\mathrm{AMAD}}$ | 12 | 768 | 12 |

Table 1: Model architectures with details of their Transformer sub-modules. We distill VL-T5 base to small, and BLIP large to base. The AMAD loss is applied on all Transformer modules, including T5-Encoder + Decoder for VL-T5, and ViT + BERT for BLIP. Conventional one-to-one attention map distillation does not apply to some of these modules when Teacher and Student have different numbers of attention heads; AMAD removes the same-number-of-heads constraint and works here.
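The logits distillation of Eqs. 13-14 can be sketched as follows (our own NumPy sketch with our own names; we write the soft cross entropy with the conventional minus sign so that the quantity is minimized, and $\tau_d = 1$ as in the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, tau=1.0):
    """Soft cross entropy between temperature-scaled logits (Eq. 13).

    Minimizing -sum_i p_i^T * log p_i^S pushes the Student's output
    distribution towards the Teacher's soft labels.
    """
    p_t = softmax(teacher_logits / tau)
    p_s = softmax(student_logits / tau)
    return -(p_t * np.log(p_s + 1e-12)).sum()

def seq_kd_loss(student_logits, teacher_logits, tau=1.0):
    """Eq. 14: sum the per-token loss over an output sequence of length |y|.

    Logits have shape (seq_len, vocab); both models are teacher-forced on the
    same ground-truth prefix, so token positions line up for distillation.
    """
    return sum(kd_loss(s, t, tau) for s, t in zip(student_logits, teacher_logits))

rng = np.random.default_rng(0)
z_t = rng.standard_normal((5, 10))  # Teacher logits: 5 tokens, vocabulary of 10
z_s = rng.standard_normal((5, 10))  # Student logits, same shape
loss = seq_kd_loss(z_s, z_t)
```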
The overall training objective for the Student, $\mathcal{L}_{\mathrm{TOTAL}}$, is a weighted sum of the classification distillation loss $\mathcal{L}_{\mathrm{KD}}$ and the proposed loss $\mathcal{L}_{\mathrm{AMAD}}$,

$$\mathcal{L}_{\mathrm{TOTAL}} = \mathcal{L}_{\mathrm{KD}} + \alpha \mathcal{L}_{\mathrm{AMAD}} \quad (15)$$

We apply $\mathcal{L}_{\mathrm{AMAD}}$ to distill the self/cross-attention maps of the last (Wang et al. 2020b; Fang et al. 2021) layers of each stream. $\alpha$ is tuned so that $\mathcal{L}_{\mathrm{KD}}$ and $\mathcal{L}_{\mathrm{AMAD}}$ have similar scales. We do not add a ground-truth loss (Beyer et al. 2022).

Pre-training and Fine-tuning
As VL-T5 small is not released, we pre-train it ourselves, adopting the same settings used to pre-train the base model. After uni-modal pre-training of its T5 and Faster R-CNN (Ren et al. 2015) sub-modules, it is then VL pre-trained on MS COCO (Lin et al. 2014; Chen et al. 2015), Visual Genome (Krishna et al. 2016), VQA-2.0 (Goyal et al. 2019), GQA (Hudson and Manning 2019), and Visual7W (Zhu et al. 2016). For VL-T5 base and BLIP, we directly use their released pre-trained checkpoints. After VL pre-training the Teacher and the Student, we fine-tune the Teacher on downstream tasks, adopting the same settings as in VL-T5 or BLIP. The Teacher model is then frozen and ready to be distilled. We then distill the Teacher to the Student with $\mathcal{L}_{\mathrm{TOTAL}}$ on downstream tasks. In some of our ablative settings, we do not conduct any VL pre-training for the Student: after loading the language-only pre-trained language/multi-modal branch checkpoint and the vision-only pre-trained vision branch checkpoint, we directly fine-tune the Student on downstream VL tasks with distillation; the Teacher is always pre-trained and fine-tuned.

Downstream Fine-tuning Datasets
We demonstrate visual question answering performance on the VQA-2.0 dataset. We report results on Karpathy test, test-std, and test-dev via https://visualqa.org/challenge.html. We evaluate image captioning performance on the MS COCO dataset (Chen et al. 2015). As in (Cho et al. 2021; Fang et al. 2021; Li et al. 2022b), we use the Karpathy split (Karpathy and Fei-Fei 2015), which re-splits train2014 and val2014 images (Lin et al.
2014) into ∼11K / 5K / 5K for train / validation / test. We report the BLEU@4 (B) (Papineni et al. 2002), CIDEr (C) (Vedantam, Zitnick, and Parikh 2015), METEOR (M) (Banerjee and Lavie 2005), and SPICE (S) (Anderson et al. 2016) evaluation metrics. We also evaluate multi-modal machine translation performance on the Multi30K dataset (Elliott et al. 2016), where models translate English text to German given context images. We report the BLEU@4 score using SacreBLEU (Post 2018). We report our implementation details in the Appendix.

Results and Analysis
Table 2 shows results on distilling VL-T5 and BLIP with AMAD, in comparison to recent vision-language models. The higher the better for all metrics. As in previous work (Fang et al. 2021; Cho et al. 2021), captioning performance is shown for the cross-entropy optimization variants instead of the CIDEr optimization variants. Figure 2 visualizes the effect of model size and of the number of VL pre-training images on VQA performance for recent models.

Figure 2: VL model performance with respect to # learnable params (X-axis) and # VL pre-training images (marker size). AMAD pushes VL-T5 towards the upper-left (orange to red).

| # | Method | #Learnable Params | #VLP Images | Distilled From | VQA-2.0 Acc ↑ (Karpathy / test-std / test-dev) | COCO Captioning (B ↑ / C ↑ / M ↑ / S ↑) |
|---|---|---|---|---|---|---|
| | *CNN-LSTM-based Models* | | | | | |
| 1 | Up-Down * | 108K | – | ✗ | – / 70.34 / – | 36.2 / 113.5 / 27.0 / 20.3 |
| 2 | GLIED * | 18.3M | – | self distillation | – | 37.9 / 118.2 / 28.3 / 21.2 |
| | *Encoder-Only Transformers* | | | | | |
| 3 | ViLBERT * | 220M | 3M | ✗ | – / 70.92 / 70.55 | – |
| 4 | UNITER base * | 220M | 4M | ✗ | – / 72.91 / 72.70 | – |
| 5 | Unified VLP * | 112M | 3M | ✗ | – / 70.7 / – | 36.5 / 116.9 / 28.4 / 21.2 |
| 6 | OSCAR base * | 112M | 4M | ✗ | – / 73.44 / 73.16 | 36.5 / 123.7 / 30.3 / 23.1 |
| 7 | MiniVLM * | 35M | 7M | ✗ | – | 34.3 / 116.7 / 28.1 / 21.3 |
| 8 | MiniVLM * | 35M | 14M | ✗ | – / 69.4 / 69.1 | 35.6 / 119.8 / 28.6 / 21.6 |
| 9 | DistillVLM * | 35M | 7M | Oscar base | – / 69.8 / 69.6 | 35.6 / 120.8 / 28.7 / 22.1 |
| 10 | MAD-ViLBERT * | 220M | – | CLIP-V+T | – / – / 72.22 | – |
| 11 | MAD-UNITER * | 220M | – | CLIP-V+T | – / – / 74.02 | – |
| | *Encoder-Decoder Transformers* | | | | | |
| 12 | OFA huge * | 930M | 15M | ✗ | – / 82.0 / 82.0 | 43.9 / 145.3 / 31.8 / 24.8 |
| 13 | VL-T5 base * | 220M | 180K | ✗ | 67.9 / 70.30 / – | 34.5 / 116.5 / 28.7 / 21.9 |
| 14 | VL-T5 small † | 63M | 180K | ✗ | 66.72 / 69.28 / 69.04 | 32.8 / 108.2 / 27.0 / 20.4 |
| 15 | Ours VL-T5 small | 63M | 180K | VL-T5 base † | 68.06 / 70.47 / 70.41 | 33.9 / 114.4 / 28.3 / 21.5 |
| 16 | VL-T5 base * | 220M | 0 | ✗ | – | 32.6 / 109.4 / 28.2 / 21.0 |
| 17 | VL-T5 small † | 63M | 0 | ✗ | 56.44 / 58.47 / – | 30.8 / 101.4 / 26.3 / 19.5 |
| 18 | Ours VL-T5 small | 63M | 0 | VL-T5 base † | 67.79 / 70.25 / 70.06 | 33.3 / 112.9 / 28.0 / 21.3 |
| | *Mixture of Encoder / Decoder Transformers* | | | | | |
| 19 | BLIP large * | 446M | 129M | ✗ | – | 40.4 / 136.7 / – / – |
| 20 | BLIP base * | 210M | 129M | ✗ | – / 78.32 / 78.25 | 39.7 / 133.3 / – / – |
| 21 | Ours BLIP base | 210M | 129M | BLIP large * | – | 40.0 / 134.1 / 31.0 / 23.9 |
| 22 | BLIP base * | 210M | 14M | ✗ | – / 77.62 / 77.54 | 38.6 / 129.7 / – / – |
| 23 | BLIP base † | 210M | 0 | ✗ | – | 34.7 / 115.8 / 28.5 / 21.5 |
| 24 | Ours BLIP base | 210M | 0 | BLIP large * | – | 38.7 / 129.6 / 30.4 / 23.3 |

Table 2: Results on distilling VL-T5 and BLIP with AMAD, with comparisons to recent VL models. Results with * are reported from their papers; results with † are trained by ourselves (also in following tables). AMAD narrows the performance gaps caused by reducing model size or removing VL pre-training (VLP). Furthermore, to our surprise, the fine-tuning distilled VL-T5 small w/o VL pre-training (row 18) even outperforms the VL pre-trained and GT fine-tuned VL-T5 small (row 14). Also, for BLIP, fine-tuning distillation (row 24) compensates for 14M-scale VL pre-training (row 22), but cannot fully compensate for 129M-scale pre-training (row 20).

Overall, Table 2 and Figure 2 support the following arguments:
1. Reducing VL model size within the same family of models causes performance drops: if all are trained with ground-truth supervision without distillation, VL-T5 small performs worse than base (rows 13-14, 16-17), and BLIP base worse than large (rows 19-20). This is also visualized in Figure 2.
2. For the same model, removing VL pre-training or pre-training with fewer images degrades performance: the performance of VL-T5 base and small both drop significantly if not VL pre-trained (row 13 vs 16; row 14 vs 17). Even when VL pre-trained, smaller numbers of VL pre-training images cause performance drops in MiniVLM (row 7 vs 8) and in BLIP (rows 20, 22, 23; pink triangles in Figure 2).
3. Distilling with AMAD narrows the aforementioned gaps caused by shrinking model size or by removing VL pre-training: this is supported by the results of distilling VL pre-trained VL-T5 (row 14 vs 15), distilling non-VL-pretrained VL-T5 (row 17 vs 18), and distilling BLIP (row 20 vs 21). Note that DistillVLM (row 9) (Fang et al. 2021) distilled Oscar base (row 6) to MiniVLM (row 8) (Wang et al. 2020a) and achieved a 0.4% VQA accuracy boost and a 1.0% Captioning CIDEr score boost, and they claimed to be the first work to apply KD in training VL models. Neither MiniVLM nor DistillVLM has released their model or code.
4. Knowledge distillation compensates to some degree for the absence of VL pre-training: when we do not conduct any VL pre-training for VL-T5 small, the fine-tuning distilled VL-T5 small (row 18) even outperforms the ground-truth supervised, VL pre-trained and fine-tuned VL-T5 small baseline (row 14). It also outperforms non-VL-pretrained VL-T5 base (row 16). The performance is also rather comparable to other recent pre-trained and fine-tuned VL models.
For BLIP, distillation (row 24) compensates for the absence of 14M-scale VL pre-training (row 22), but cannot compensate for 129M-scale pre-training (row 20). One possible explanation is that the knowledge obtained in the pre-training stage of the Teacher can somehow be distilled to the Student in the downstream fine-tuning process when the Student tries to mimic the Teacher's classification logits and tries to align and mimic the Teacher's attention maps, even if the Student has no access to the pre-training data by itself. The Teacher's attention maps contain valuable intra-modal and cross-modal coreference relations learned from the pre-training dataset, and LAMAD helps the Student to inherit the rich learned representation from the Teacher.

Besides these observations 1-4, we show in Table 3 that AMAD can generalize well to the language-heavy translation task. Table 4 unveils one limitation: although fine-tuning distillation might close the performance gap of removing VL pre-training, uni-modal pre-training is still necessary.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7519

VL-T5 Model   #Params   test-2016   test-2017   test-2018
Teacher †     220M      44.00       39.40       37.00
Student       60M       41.90       36.85       34.02
Our Student   60M       43.88       38.70       36.64
∆                       +1.98       +1.85       +2.62

Table 3: Multi30K English-German translation BLEU@4 score. The VL-T5 small Student distilled with AMAD outperforms the ground-truth (GT) fine-tuned VL-T5 small.

BLIP Model        #Params   B      C       M      S
Teacher *         446M      40.4   136.7
Student           210M      22.3   65.1    20.9   13.5
Student w/ AMAD   210M      29.7   95.0    25.4   18.5
∆                           +7.4   +29.9   +4.5   +5.0

Table 4: Results on the COCO Captioning Karpathy test split. All BLIP Students are w/o any pre-training, i.e., the ViT and BERT modules are randomly initialized for all Students. Results for pre-trained and finetuning distilled BLIP models are reported in the Appendix. The distilled Student outperforms the GT-trained Student by a surprisingly large margin, although the performance still degrades a lot because it is neither VL nor uni-modal pre-trained. This indicates that uni-modal pre-training is still necessary even when finetuning distillation is applied.

      #Params   LKD   LAMAD   B      C       M      S
T †   220M                    34.2   115.1   28.3   21.6
S †   60M                     32.8   108.2   27.0   20.4
S     60M       ✓             33.1   111.8   27.9   21.2
S ‡   60M       ✓             33.6   113.4   28.1   21.3
S     60M       ✓     ✓       33.9   114.4   28.3   21.5

Table 5: Ablation results of VL-T5 models on the COCO Captioning Karpathy test split. 'T' denotes Teacher, and 'S' denotes Student; also in the following tables. All VL-T5 Students are first VL pre-trained. Ablation results w/o VL pre-training are in the Appendix. Distilling with LAMAD outperforms logits distillation with LKD, and narrows the CIDEr gap between the small Student and the base Teacher to only 0.7% with 72% fewer parameters. The VL-T5 Teacher is reproduced, as their fine-tuned checkpoints are not released; also in the following tables. We also compare AMAD with RKD-D (Park et al. 2019) (the row with ‡ in the table), a distance-based similarity distillation method: we treat the attention maps of all heads as a whole single feature and apply RKD-D on that, since #heads are different between Teacher and Student.

      #Params   LKD   LAMAD   Karpathy   std     dev
T †   226M                    68.75      71.34   71.23
S †   63M                     66.72      69.28   69.04
S     63M       ✓             67.74      70.19   70.10
S     63M       ✓     ✓       68.06      70.47   70.41

Table 6: Ablation results of VL-T5 models on VQA-2.0 test splits. All Students are first VL pre-trained.

      #Params   LKD   LAMAD   Karpathy   std     dev
T †   226M                    68.75      71.34   71.23
S †   63M                     56.44      58.47
S     63M       ✓             66.39      68.71
S     63M       ✓     ✓       67.79      70.25   70.06

Table 7: Ablation results of VL-T5 models on VQA-2.0 test splits. All Students are w/o VL pre-training, i.e., initialized with the language-only pre-trained T5 and vision-only pre-trained Faster R-CNN checkpoints. This shows that vanilla finetuning logits distillation already compensates to some degree for the absence of VL pre-training, and AMAD narrows the gap further.

VL-T5       #Params   Loss Variant      Acc      ∆
Teacher †   226M                        68.75
Student †   63M       Ground-Truth      66.72
Student     63M       LKD               67.74    +1.02
Student     63M       LKD + LKD-ATT     67.73    +1.01
Student     63M       LKD + LAMAD-1     67.96    +1.24
Student     63M       LKD + LAMAD-2     68.06    +1.34
Student     63M       LKD + LAMAD-3     68.02    +1.30
Student     63M       LKD + LAMAD-4     68.05    +1.33
Student     63M       w/ AMAD (mean)    68.02    +1.30
std-err of the mean                     ± 0.02   ± 0.02

Table 8: Ablation results on the VQA-2.0 Karpathy test split. All VL-T5 Students are first VL pre-trained. The baseline of distilling the attention maps from the first 8 heads of the Teacher to those of the 8 heads of the Student in a one-to-one fashion with L_KD-ATT = ||S − T[:H_s, :]||²₂ does not improve performance over distilling with LKD only, maybe because the extra H_t − H_s Teacher heads are discarded during distillation, causing forced knowledge loss. Meanwhile, all LAMAD variants help improve performance consistently. The KL variants of LAMAD (2, 3, 4) perform slightly better than the MSE variant LAMAD-1. We have not observed significant performance changes brought by the learnable projection LAMAD-3 or the token-level alignment LAMAD-4, compared to LAMAD-2.

Ablations

Effects of logits distillation LKD and AMAD LAMAD: We ablate on the captioning and VQA tasks in w/ and w/o VL pre-train settings in Tables 5-7 and in the Appendix. In the w/o VL pre-train settings, Students are finetuned directly after loading the vision-only pre-trained visual branch and the language-only pre-trained linguistic branch sub-modules. AMAD Variants and Baselines are analyzed in Table 8. We present visualizations, more ablations, and a qualitative analysis of LAMAD distilled attention maps in the Appendix.
Appendix

Please kindly refer to the Appendix via the following link: https://www.amazon.science/publications/no-head-leftbehind-multi-head-alignment-distillation-for-transformers

In the Appendix, we provide a further illustration of the matrix-form loss derivation and the formulation of different AMAD variants in Section A; more ablation results and a reverse experiment of distilling a smaller Teacher to a larger Student in Section B; implementation details including the training environment, training time, and hyperparameters in Section C; an additional illustration of the distillation workflow for our experimental setup and the architecture of VL-T5 in Section D; visualizations of AMAD distilled cross- and self-attention maps in Section E; and PyTorch code for the proposed AMAD loss in Section F.

Conclusion

We have proposed the Attention Map Alignment Distillation (AMAD) method to distill attention maps from a Teacher to a Student Transformer with different numbers of attention heads. AMAD narrows the performance gap between the large Teacher and the small Student in both the discriminative VQA task and the auto-regressive generative captioning / translation tasks. Our ablation further suggests that fine-tuning knowledge distillation can compensate to some degree for the absence of VL pre-training for VL Transformers. However, uni-modal pre-training is still necessary. Exploring theoretically or empirically why fine-tuning distillation can compensate for VL pre-training is potentially intriguing for future work.

References

Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; et al. 2022. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198. Anderson, P.; Fernando, B.; Johnson, M.; and Gould, S. 2016. SPICE: Semantic Propositional Image Caption Evaluation. In ECCV. Anderson, P.; He, X.; Buehler, C.; Teney, D.; Johnson, M.; Gould, S.; and Zhang, L. 2018.
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6077–6086. Andonian, A.; Chen, S.; and Hamid, R. 2022. Robust Cross-Modal Representation Learning with Progressive Self-Distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16430–16441. Appalaraju, S.; Jasani, B.; Kota, B. U.; Xie, Y.; and Manmatha, R. 2021. Docformer: End-to-end transformer for document understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 993–1003. Appalaraju, S.; Tang, P.; Dong, Q.; Sankaran, N.; Zhou, Y.; and Manmatha, R. 2024. DocFormerv2: Local Features for Document Understanding - Full Paper. AAAI. Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Banerjee, S.; and Lavie, A. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In ACL Workshop. Beyer, L.; Zhai, X.; Royer, A.; Markeeva, L.; Anil, R.; and Kolesnikov, A. 2022. Knowledge distillation: A good teacher is patient and consistent. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10925–10934. Brown, T. B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D. M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. In NeurIPS. Cao, J.; Gan, Z.; Cheng, Y.; Yu, L.; Chen, Y.-C.; and Liu, J. 2020. Behind the scene: Revealing the secrets of pre-trained vision-and-language models. In ECCV. Chen, F.; Zhang, D.; Han, M.; Chen, X.; Shi, J.; Xu, S.; and Xu, B. 2022a. VLP: A Survey on Vision-Language Pre-training.
ArXiv, abs/2202.09061. Chen, X.; Cao, Q.; Zhong, Y.; Zhang, J.; Gao, S.; and Tao, D. 2022b. DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12052–12062. Chen, X.; Fang, H.; Lin, T.-Y.; Vedantam, R.; Gupta, S.; Dollar, P.; and Zitnick, C. L. 2015. Microsoft COCO Captions: Data Collection and Evaluation Server. Arxiv. Chen, X.; Wang, X.; Changpinyo, S.; Piergiovanni, A.; Padlewski, P.; Salz, D.; Goodman, S.; Grycner, A.; Mustafa, B.; Beyer, L.; et al. 2022c. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794. Chen, Y.-C.; Li, L.; Yu, L.; Kholy, A. E.; Ahmed, F.; Gan, Z.; Cheng, Y.; and Liu, J. 2020. UNITER: UNiversal Image-TExt Representation Learning. In ECCV. Cho, J.; Lei, J.; Tan, H.; and Bansal, M. 2021. Unifying Vision-and-Language Tasks via Text Generation. In International Conference on Machine Learning. Cho, J.; Lu, J.; Schwenk, D.; Hajishirzi, H.; and Kembhavi, A. 2020. X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers. In EMNLP. Clark, K.; Luong, M.-T.; Le, Q. V.; and Manning, C. D. 2020. Electra: Pre-training text encoders as discriminators rather than generators. In ICLR. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL. Ding, Z.; Jiang, G.; Zhang, S.; Guo, L.; and Lin, W. 2023. SKDBERT: Compressing BERT via Stochastic Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6): 7414–7422. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR.
Elliott, D.; Frank, S.; Sima'an, K.; and Specia, L. 2016. Multi30K: Multilingual English-German Image Descriptions. In ACL Workshop, 70–74. Fang, Z.; Wang, J.; Hu, X.; Wang, L.; Yang, Y.; and Liu, Z. 2021. Compressing Visual-linguistic Model via Knowledge Distillation. ICCV. Gou, J.; Yu, B.; Maybank, S. J.; and Tao, D. 2021. Knowledge distillation: A survey. International Journal of Computer Vision, 129(6): 1789–1819. Goyal, Y.; Khot, T.; Agrawal, A.; Summers-Stay, D.; Batra, D.; and Parikh, D. 2019. Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering. International Journal of Computer Vision. Gu, X.; Lin, T.-Y.; Kuo, W.; and Cui, Y. 2021a. Open-vocabulary Object Detection via Vision and Language Knowledge Distillation. In ICLR. Gu, Y.; Han, X.; Liu, Z.; and Huang, M. 2021b. PPT: Pre-trained Prompt Tuning for Few-shot Learning. ArXiv, abs/2109.04332. He, K.; Gkioxari, G.; Dollár, P.; and Girshick, R. 2017. Mask R-CNN. In Proceedings of the IEEE international conference on computer vision, 2961–2969. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. He, R.; Sun, S.; Yang, J.; Bai, S.; and Qi, X. 2022. Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9161–9171. Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the Knowledge in a Neural Network. In NIPS Deep Learning and Representation Learning Workshop. Ho, C.-H.; Appalaraju, S.; Jasani, B.; Manmatha, R.; and Vasconcelos, N. 2022. YORO-Lightweight End to End Visual Grounding. In European Conference on Computer Vision - ECCV CAMP Workshop. Houlsby, N.; Giurgiu, A.; Jastrzebski, S.; Morrone, B.; De Laroussilhe, Q.; Gesmundo, A.; Attariyan, M.; and Gelly, S. 2019.
Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, 2790–2799. PMLR. Hudson, D. A.; and Manning, C. D. 2019. GQA: A new dataset for real-world visual reasoning and compositional question answering. In CVPR. ISBN 9781728132938. Ji, M.; Heo, B.; and Park, S. 2021. Show, attend and distill: Knowledge distillation via attention-based feature matching. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 7945–7952. Jiao, X.; Yin, Y.; Shang, L.; Jiang, X.; Chen, X.; Li, L.; Wang, F.; and Liu, Q. 2020. TinyBERT: Distilling BERT for Natural Language Understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, 4163–4174. Online: Association for Computational Linguistics. Karpathy, A.; and Fei-Fei, L. 2015. Deep Visual-Semantic Alignments for Generating Image Descriptions. In CVPR. ISBN 9781467369640. Kim, J.-h.; Jun, J.; and Zhang, B.-t. 2018. Bilinear Attention Networks. In NeurIPS, 1–12. Krishna, R.; Zhu, Y.; Groth, O.; Johnson, J.; Hata, K.; Kravitz, J.; Chen, S.; Kalantidis, Y.; Li, L.-J.; Shamma, D. A.; Bernstein, M.; and Fei-Fei, L. 2016. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations. International Journal of Computer Vision. Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; and Soricut, R. 2020. Albert: A lite bert for self-supervised learning of language representations. In ICLR. Lei, J.; Yu, L.; Bansal, M.; and Berg, T. L. 2018. Tvqa: Localized, compositional video question answering. In EMNLP. Lester, B.; Al-Rfou, R.; and Constant, N. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In EMNLP. Li, C.; Fehérvári, I.; Zhao, X.; Macêdo, I.; and Appalaraju, S. 2022a. SeeTek: Very Large-Scale Open-set Logo Recognition with Text-Aware Metric Learning. 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 587–596. Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023.
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. In ICML. Li, J.; Li, D.; Xiong, C.; and Hoi, S. 2022b. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. In ICML. Li, L.; Chen, Y.-C.; Cheng, Y.; Gan, Z.; Yu, L.; and Liu, J. 2020a. HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training. In EMNLP. Li, X.; Yin, X.; Li, C.; Zhang, P.; Hu, X.; Zhang, L.; Wang, L.; Hu, H.; Dong, L.; Wei, F.; Choi, Y.; and Gao, J. 2020b. Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. In ECCV. Li, X. L.; and Liang, P. 2021. Prefix-Tuning: Optimizing Continuous Prompts for Generation. Arxiv. Li, Z.; Wang, Z.; Tan, M.; Nallapati, R.; Bhatia, P.; Arnold, A.; Xiang, B.; and Roth, D. 2022c. DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization. arXiv preprint arXiv:2203.11239. Lin, S.; Xie, H.; Wang, B.; Yu, K.; Chang, X.; Liang, X.; and Wang, G. 2022. Knowledge Distillation via the Target-aware Transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10915–10924. Lin, T. Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. In ECCV. ISBN 978-3-319-10601-4. Liu, F.; Ren, X.; Liu, Y.; Lei, K.; and Sun, X. 2020. Exploring and distilling cross-modal information for image captioning. arXiv preprint arXiv:2002.12585. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Lu, J.; Batra, D.; Parikh, D.; and Lee, S. 2019. ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. In NeurIPS.
Ma, Z.; Luo, G.; Gao, J.; Li, L.; Chen, Y.; Wang, S.; Zhang, C.; and Hu, W. 2022. Open-Vocabulary One-Stage Detection with Hierarchical Visual-Language Knowledge Distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14074–14083. Mao, J.; Huang, J.; Toshev, A.; Camburu, O.; Yuille, A.; and Murphy, K. 2016. Generation and Comprehension of Unambiguous Object Descriptions. In CVPR. Miech, A.; Alayrac, J.-B.; Smaira, L.; Laptev, I.; Sivic, J.; and Zisserman, A. 2020. End-to-end learning of visual representations from uncurated instructional videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W. W.-j. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In ACL. ISBN 1-55860-883-4. Park, W.; Kim, D.; Lu, Y.; and Cho, M. 2019. Relational knowledge distillation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3967–3976. Peng, B.; Jin, X.; Liu, J.; Li, D.; Wu, Y.; Liu, Y.; Zhou, S.; and Zhang, Z. 2019. Correlation congruence for knowledge distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5007–5016. Post, M. 2018. A Call for Clarity in Reporting BLEU Scores. In WMT, volume 1, 186–191. Qu, X.; Ding, C.; Li, X.; Zhong, X.; and Tao, D. 2022. Distillation Using Oracle Queries for Transformer-Based Human-Object Interaction Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19558–19567. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020. Radford, A.; Narasimhan, K.; Salimans, T.; and Sutskever, I. 2018. Improving Language Understanding by Generative Pre-Training. Arxiv. 
Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. JMLR, 21: 1–67. Rebuffi, S.-A.; Bilen, H.; and Vedaldi, A. 2017. Learning multiple visual domains with residual adapters. In NIPS. Rebuffi, S.-A.; Bilen, H.; and Vedaldi, A. 2018. Efficient Parametrization of Multi-domain Deep Neural Networks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8119–8127. Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In NIPS. Sanh, V.; Debut, L.; Chaumond, J.; and Wolf, T. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Su, W.; Zhu, X.; Cao, Y.; Li, B.; Lu, L.; Wei, F.; and Dai, J. 2019. VL-BERT: Pre-training of Generic Visual-Linguistic Representations. In ICLR. Sun, C.; Myers, A.; Vondrick, C.; Murphy, K.; and Schmid, C. 2019. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7464–7473. Sun, Z.; Yu, H.; Song, X.; Liu, R.; Yang, Y.; and Zhou, D. 2020. MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2158–2170. Online: Association for Computational Linguistics. Sung, Y.-L.; Cho, J.; and Bansal, M. 2022. VL-Adapter: Parameter-efficient transfer learning for vision-and-language tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5227–5237. Tung, F.; and Mori, G. 2019. Similarity-Preserving Knowledge Distillation. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 1365–1374. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention Is All You Need.
In NIPS. Vedantam, R.; Zitnick, C. L.; and Parikh, D. 2015. CIDEr: Consensus-based Image Description Evaluation. In CVPR. Wang, A.; Singh, A.; Michael, J.; Hill, F.; Levy, O.; and Bowman, S. R. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In ICLR. Wang, J.; Hu, X.; Zhang, P.; Li, X.; Wang, L.; Zhang, L.; Gao, J.; and Liu, Z. 2020a. MiniVLM: A Smaller and Faster Vision-Language Model. Arxiv. Wang, P.; Yang, A.; Men, R.; Lin, J.; Bai, S.; Li, Z.; Ma, J.; Zhou, C.; Zhou, J.; and Yang, H. 2022a. Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In International Conference on Machine Learning, 23318–23340. PMLR. Wang, W.; Bao, H.; Huang, S.; Dong, L.; and Wei, F. 2021. MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers. In ACL Findings. Wang, W.; Wei, F.; Dong, L.; Bao, H.; Yang, N.; and Zhou, M. 2020b. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. Advances in Neural Information Processing Systems (NeurIPS). Wang, Z.; Codella, N.; Chen, Y.-C.; Zhou, L.; Dai, X.; Xiao, B.; Yang, J.; You, H.; Chang, K.-W.; Chang, S.-f.; et al. 2022b. Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks. arXiv preprint arXiv:2204.10496. Williams, A.; Nangia, N.; and Bowman, S. R. 2017. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL. Wu, H.; Gao, Y.; Zhang, Y.; Lin, S.; Xie, Y.; Sun, X.; and Li, K. 2022a. Self-supervised Models are Good Teaching Assistants for Vision Transformers. In ICML. Wu, K.; Zhang, J.; Peng, H.; Liu, M.; Xiao, B.; Fu, J.; and Yuan, L. 2022b. Tinyvit: Fast pretraining distillation for small vision transformers. In ECCV, 68–85. Springer. Xu, J.; Mei, T.; Yao, T.; and Rui, Y. 2016.
MSR-VTT: A large video description dataset for bridging video and language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R. R.; and Le, Q. V. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS. Yang, Z.; Li, Z.; Jiang, X.; Gong, Y.; Yuan, Z.; Zhao, D.; and Yuan, C. 2022. Focal and global knowledge distillation for detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4643–4652. Yu, L.; Lin, Z.; Shen, X.; Yang, J.; Lu, X.; Bansal, M.; and Berg, T. L. 2018. MAttNet: Modular Attention Network for Referring Expression Comprehension. In CVPR. Yu, Y.; Kim, J.; and Kim, G. 2018. A joint sequence fusion model for video question answering and retrieval. In ECCV. Yuan, L.; Chen, D.; Chen, Y.-L.; Codella, N.; Dai, X.; Gao, J.; Hu, H.; Huang, X.; Li, B.; Li, C.; et al. 2021. Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432. Zagoruyko, S.; and Komodakis, N. 2017. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. ICLR. Zellers, R.; Bisk, Y.; Schwartz, R.; and Choi, Y. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. In EMNLP. Zhang, P.; Li, X.; Hu, X.; Yang, J.; Zhang, L.; Wang, L.; Choi, Y.; and Gao, J. 2021. VinVL: Making Visual Representations Matter in Vision-Language Models. Arxiv. Zhou, L.; Palangi, H.; Zhang, L.; Hu, H.; Corso, J. J.; and Gao, J. 2020. Unified Vision-Language Pre-Training for Image Captioning and VQA. In AAAI. Zhou, L.; Xu, C.; and Corso, J. J. 2018. Towards automatic learning of procedures from web instructional videos. In AAAI. Zhu, L.; and Yang, Y. 2020. ActBERT: Learning Global-Local Video-Text Representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Zhu, Y.; Groth, O.; Bernstein, M.; and Fei-Fei, L. 2016. Visual7W: Grounded Question Answering in Images. In CVPR. ISBN 978-1-4673-8851-1.
SFC: Shared Feature Calibration in Weakly Supervised Semantic Segmentation

Xinqiao Zhao1,2*, Feilong Tang1*, Xiaoyang Wang1,2,3, Jimin Xiao1†
1Xi'an Jiaotong-Liverpool University 2University of Liverpool 3Metavisioncn
xqz@liverpool.ac.uk, Feilong.Tang19@xjtlu.edu.cn, wangxy@liverpool.ac.uk, Jimin.Xiao@xjtlu.edu.cn

Abstract

Image-level weakly supervised semantic segmentation has received increasing attention due to its low annotation cost. Existing methods mainly rely on Class Activation Mapping (CAM) to obtain pseudo-labels for training semantic segmentation models. In this work, we are the first to demonstrate that a long-tailed distribution in the training data can cause the CAM calculated through classifier weights to be over-activated for head classes and under-activated for tail classes, due to the features shared between head and tail classes. This degrades the pseudo-label quality and further influences the final semantic segmentation performance. To address this issue, we propose a Shared Feature Calibration (SFC) method for CAM generation. Specifically, we leverage the class prototypes that carry positive shared features and propose a Multi-Scaled Distribution-Weighted (MSDW) consistency loss for narrowing the gap between the CAMs generated through classifier weights and class prototypes during training. The MSDW loss counterbalances over-activation and under-activation by calibrating the shared features in head-/tail-class classifier weights. Experimental results show that our SFC significantly improves CAM boundaries and achieves new state-of-the-art performances. The project is available at https://github.com/Barrett-python/SFC.

Introduction

Semantic segmentation (Minaee et al. 2021) assigns semantic labels to image pixels and is crucial for applications like autonomous driving and robotics (Zhang et al. 2022). Obtaining accurate pixel annotations for training deep learning models is laborious and time-consuming.
One alternative approach is to adopt Weakly Supervised Semantic Segmentation (WSSS) with only image-level labels provided (Ahn, Cho, and Kwak 2019; Wang et al. 2020; Zhang et al. 2020; Lee, Kim, and Yoon 2021; Xu et al. 2022; Zhang et al. 2023a, 2021a). Generally, these methods employ Class Activation Mapping (CAM) (Zhou et al. 2016) to generate discriminative semantic masks from a classification model. Then, a series of post-processing methods (Krähenbühl and Koltun 2011) are adopted to refine the masks to obtain pixel-level pseudo-labels, which are then used to train a semantic segmentation model (Chen et al. 2016).

*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

However, we find that the training data of WSSS are naturally long-tailed distributed (Fig. 1(a)), which makes the shared feature components (Li and Monga 2020) tend to be positive in the head-class classifier weight and negative in the tail-class classifier weight, because the head-class weight receives more positive gradients (denoted as ⊕) than negative ones (denoted as ⊖), and the tail-class weight receives more negative gradients than positive ones (Fig. 1(b)). As a result, pixels containing shared features are activated by the head-class classifier weight (i.e., the dot product (denoted as ·) of feature and weight > 0), while pixels containing tail-class features are not activated by the tail-class weight (i.e., the dot product of feature and weight < 0), as shown in Fig. 1(c). Thus, the CAM calculated through classifier weights inevitably becomes over-activated for head classes and under-activated for tail classes (Fig. 1(d)). This degrades the quality of pseudo-labels and further influences the final WSSS performance. On the other hand, as shown in Fig. 1(d), the CAM activated by the head-class prototype (Chen et al.
2022a) is less activated than the CAM activated by the head-class classifier weight, and the CAM activated by the tail-class prototype is more activated than the CAM activated by the tail-class classifier weight.

Inspired by the above findings (a detailed theoretical analysis is provided in the Analysis on SFC section of the main paper), we propose a Shared Feature Calibration (SFC) method to reduce the shared feature proportions in head-class classifier weights and increase them in tail-class classifier weights, avoiding the over-/under-activation issues caused by shared features. Particularly, a Multi-Scaled Distribution-Weighted (MSDW) consistency loss is calculated on the CAMs generated through class prototypes and classifier weights, where the consistency loss magnitude on one class is re-weighted by the total sample number gaps between this class and the other classes. The theory behind this re-weighting strategy is also demonstrated, proving that pseudo-labels with better boundaries can be achieved through our SFC.

The contributions of this work include:

• We first point out that the features shared by head and tail classes can enlarge the classifier-weight-generated CAM for the head class and shrink it for the tail class under a long-tailed scenario.

Figure 1: Illustration of how shared features influence CAMs under a long-tailed scenario and the effects of our proposed SFC. (a) shows that Pascal VOC 2012 (Everingham et al. 2010) is a naturally long-tailed distributed dataset. (b) explains the shared feature components in head-/tail-class classifier weights and prototypes. (c) shows how over-/under-activations happen. (d) shows the CAMs of head-/tail-class examples. Our SFC achieves better results with appropriate activation areas.
• We propose a Shared Feature Calibration (SFC) CAM generation method aimed at balancing the shared feature proportions in different classifier weights, which can improve the CAM quality.

• Our method achieves new state-of-the-art WSSS performances with only image-level labels on Pascal VOC 2012 and COCO 2014.

Related Works

Weakly Supervised Semantic Segmentation

The generation of pseudo-labels in WSSS is based on attention mapping (Wang et al. 2020; Zhang et al. 2021a). The key step is the production of high-quality CAMs (Sun et al. 2020; Yoon et al. 2022). Several works have designed heuristic approaches, such as erasing and accumulation (Zhang et al. 2021b; Yoon et al. 2022), to force the network to mine novel regions rather than solely focus on discriminative regions. Moreover, other strategies, including self-supervised learning (Wang et al. 2020; Chen et al. 2022a), contrastive learning (Du et al. 2022), and cross-image information (Xu et al. 2023), have been proposed to generate accurate and complete CAMs. Recently, vision-language pre-training has emerged as the prevalent approach for addressing downstream vision-language tasks (Zhu et al. 2023), including WSSS (Lin et al. 2023). Due to the rough boundary of the initial map, refinement methods like CRF (Krähenbühl and Koltun 2011) and IRN (Ahn, Cho, and Kwak 2019) are employed for further enhancements. However, to the best of our knowledge, no previous work aims at solving the over-/under-activation issue caused by long-tailed distributed training data. This paper analyzes the reasons behind the over-/under-activation and tackles this issue through a Shared Feature Calibration (SFC) method.

Shared Feature in Classification

Classification is an upstream task for semantic segmentation (Zhang et al. 2023b), and shared features have been actively studied in this task (Li and Monga 2020). Most existing methods (Zheng et al. 2017; Yao et al.
2017; Peng, He, and Zhao 2017) tend to extract only discriminative partial features for classification and prevent the shared features from influencing the classification performance. Although both are trained under a classification loss, unlike the classification task, WSSS cannot solely rely on discriminative features to construct an intact CAM, and existing methods (Lee, Kim, and Shim 2022; Chen et al. 2022a) freeze several layers of the pre-trained encoder to avoid catastrophic forgetting of indiscriminative features (Vasconcelos, Birodkar, and Dumoulin 2022). In this work, we focus on balancing the shared feature proportions in classifier weights under a long-tailed scenario for a better WSSS performance.

Methodology

The pipeline of our SFC is illustrated in Fig. 2. It involves an Image Bank Re-sampling (IBR) mechanism and a Multi-Scaled Distribution-Weighted (MSDW) consistency loss.

Preliminary

Classifier Weight CAM. Given an input image I, the features extracted from I by the encoder are denoted as F ∈ R^{H×W×D}, and the classifier weight of class c is denoted as W_c ∈ R^{D×1}, where H × W is the feature map size and D is the feature dimension. The classification loss, which is a multi-label soft margin loss, is calculated as:

L_cls = -(1/|C|) Σ_{c=1}^{|C|} [ y_c log(sigmoid(GAP(F W_c))) + (1 - y_c) log(1 - sigmoid(GAP(F W_c))) ],    (1)

where C is the foreground class set and |C| denotes its size; y_c denotes the binary label on class c; GAP(·) denotes global average pooling. Then, the CAM generated through

Figure 2: The overall structure of our proposed SFC.
For each training image, two distribution-weighted consistency losses ($\mathcal{L}^{P}_{DW}$ and $\mathcal{L}^{W}_{DW}$) are calculated, where $\mathcal{L}^{P}_{DW}$ is calculated between the prototype CAM ($M_P$) and the classifier weight CAM ($M_W$) of the original image, and $\mathcal{L}^{W}_{DW}$ is calculated between the classifier weight CAMs of the down-scaled and original images. In addition, an image bank that stores the latest seen images for the different classes is maintained, and images are uniformly sampled from it to complement the original training batch, increasing the consistency-loss optimization frequency for tail classes. Finally, the classifier weight CAM is complemented with the prototype CAM at inference.

classifier weights $W$ (i.e., the classifier weight CAM) on the extracted feature $F$ of input image $I$ is calculated as follows:

$$M_W(F, W, I) = \underbrace{\big\{ f(FW_1, I, F), \ldots, f(FW_{|C|}, I, F) \big\}}_{|C| \text{ foreground classes}} \cup \underbrace{\Big\{ 1 - \max_{c \in C} f(FW_c, I, F) \Big\}}_{\text{background class}}, \quad (2)$$

where $M_W$ denotes the classifier weight CAM, and $f(\cdot)$ denotes a function that feeds the normalized, ReLU-activated $FW$, together with $I$ and $F$, into a Pixel Correlation Module (PCM) (Wang et al. 2020) to refine the CAM based on the relationships among the low-level features of different pixels.

Prototype CAM. Following (Chen et al. 2022a, 2023), a class prototype is calculated through masked average pooling of extracted features. Specifically, the hierarchical features extracted from different layers of the feature extractor are denoted as $F_1, F_2, F_3, F_4$; $L(\cdot)$ denotes a linear projection that stops gradients to the feature extractor. Then, the prototype $P_{\tilde{c}}$ of class $\tilde{c}$ ($\tilde{c}$ can be either a foreground or the background class) is calculated as follows:

$$P_{\tilde{c}} = \mathrm{MAP}\Big( \big(\widehat{M}_W(F, W, I)\big)_{\tilde{c}} \odot L(F_1, F_2, F_3, F_4) \Big), \quad (3)$$

where $\big(\widehat{M}_W(F, W, I)\big)_{\tilde{c}}$ is a binary mask for class $\tilde{c}$, marking with 1 the pixels whose activation values are higher than the set threshold; $\mathrm{MAP}(\cdot)$ denotes masked average pooling.
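As an illustration of the two CAM types above, the following is a minimal NumPy sketch. The array shapes, the per-class max-normalisation, and the omission of the PCM refinement and of the hierarchical features $L(F_1, \ldots, F_4)$ (replaced by a single projected feature map) are our simplifying assumptions, not the paper's exact implementation:

```python
import numpy as np

def classifier_weight_cam(F, W):
    """Raw classifier weight CAM (Eq. 2 without the PCM refinement, an assumption).

    F: (H, W, D) feature map; W: (D, C) classifier weights of C foreground classes.
    Returns (H, W, C + 1): max-normalised ReLU maps plus a background map
    defined as 1 - max over the foreground maps.
    """
    A = np.maximum(F @ W, 0.0)                           # ReLU(FW), shape (H, W, C)
    A = A / (A.max(axis=(0, 1), keepdims=True) + 1e-8)   # normalise each class map to [0, 1]
    bg = 1.0 - A.max(axis=-1, keepdims=True)             # background class map
    return np.concatenate([A, bg], axis=-1)

def prototype_cam(feat, cam_c, thr=0.5):
    """Prototype CAM of one class (Eqs. 3-4), with a single projected feature map
    standing in for L(F1, F2, F3, F4).

    feat: (H, W, D) projected features; cam_c: (H, W) classifier weight CAM of the class.
    """
    mask = (cam_c > thr).astype(feat.dtype)              # binary mask \hat{M}_W
    proto = (feat * mask[..., None]).sum(axis=(0, 1)) / (mask.sum() + 1e-8)  # MAP
    fn = feat / (np.linalg.norm(feat, axis=-1, keepdims=True) + 1e-8)
    pn = proto / (np.linalg.norm(proto) + 1e-8)
    return np.maximum(fn @ pn, 0.0)                      # ReLU(cosine similarity)
```
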
Finally, the CAM calculated through the prototype of class $\tilde{c}$ (i.e., the prototype CAM) is calculated as follows:

$$\big(M_P(F, P)\big)_{\tilde{c}} = \mathrm{ReLU}\big( \cos\langle P_{\tilde{c}}, L(F_1, F_2, F_3, F_4) \rangle \big), \quad (4)$$

where $\cos\langle \cdot, \cdot \rangle$ denotes the cosine similarity between the two terms within it.

Shared Feature Calibration

Image Bank Re-sampling (IBR). We maintain an image bank $B = (b_1, \ldots, b_{|C|})$ that stores $|C|$ images for the $|C|$ foreground classes. For each image $I$ in the current training batch, we update $b_c$ with $I$ when the $c$-th class appears in $I$; otherwise, we keep $b_c$ as it was. After the bank update, we uniformly sample $N_{IBR}$ images from the current bank and concatenate them with the original training batch as the final training inputs. The uniform sampling does not introduce further shared-feature issues caused by long-tailed distribution, as the sample numbers of different classes are nearly balanced. Our proposed IBR increases the tail-class sampling frequency, so the MSDW loss is enforced on the tail classes more frequently, effectively calibrating the shared features in the tail-class classifier weights.

Multi-Scaled Distribution-Weighted Consistency Loss. To address the over-activation issue on head classes and the under-activation issue on tail classes, we propose two Distribution-Weighted (DW) consistency losses, $\mathcal{L}^{P}_{DW}$ and $\mathcal{L}^{W}_{DW}$.
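The image-bank mechanism above can be sketched as follows. This is a minimal sketch: the class interface and the use of opaque image handles are our assumptions, while the latest-image-per-class update and the uniform sampling follow the description (the paper sets $N_{IBR} = 4$):

```python
import random

class ImageBank:
    """Image Bank Re-sampling sketch: keep the latest image seen for each of the
    |C| foreground classes, then uniformly sample n_ibr of them per batch."""

    def __init__(self, num_classes, n_ibr=4, seed=0):
        self.bank = [None] * num_classes   # b_1, ..., b_|C|
        self.n_ibr = n_ibr
        self.rng = random.Random(seed)

    def update(self, image, labels):
        # Overwrite b_c with the newest image containing class c.
        for c in labels:
            self.bank[c] = image

    def sample(self):
        # Uniform over stored entries, so tail classes are drawn as often as
        # head classes; the samples complement the original training batch.
        filled = [img for img in self.bank if img is not None]
        if not filled:
            return []
        return [self.rng.choice(filled) for _ in range(self.n_ibr)]
```
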
$\mathcal{L}^{P}_{DW}$ is calculated between the prototype CAM and the classifier weight CAM as

$$\mathcal{L}^{P}_{DW} = \sum_{c=1}^{|C|} \underbrace{DC_c \cdot \big\| (M_W)_c - (M_P)_c \big\|_1}_{\ell_1 \text{ loss of foreground class}} + \underbrace{\big\| (M_W)_{|C|+1} - (M_P)_{|C|+1} \big\|_1}_{\ell_1 \text{ loss of background class}}, \quad (5)$$

where $DC_c$ denotes the scaled Distribution Coefficient, calculated for each foreground class $c$ as

$$DC_c = \underbrace{\frac{|C|}{\sum_{j=1}^{|C|} \frac{\sum_{i=1}^{|C|} |n_j - n_i|}{n_j + N}}}_{\text{scaling factor}} \cdot \underbrace{\frac{\sum_{i=1}^{|C|} |n_c - n_i|}{n_c + N}}_{\text{total demand}}, \quad (6)$$

where $n_c$ denotes the sample number of class $c$, and $N$ denotes the estimated increase in sample number from our IBR for each class, $N = \frac{N_{IBR} \cdot C_{IBR} \cdot N_{iter}}{|C|}$. Here, $N_{iter}$ denotes the number of iterations in one training epoch; $N_{IBR}$ is the sampling number from the image bank; $C_{IBR}$ is the average number of classes covered in one image. We calculate the sum of the sample-number gaps between class $c$ and all other classes and regard this sum as the total demand on the consistency loss for class $c$. Next, this total demand is divided by $n_c + N$ (i.e., the estimated total sample number of class $c$) and scaled with the scaling factor, obtaining the scaled distribution coefficient (i.e., $DC_c$).

Figure 3: CAM visualization results on PASCAL VOC 2012 (head classes and tail classes), demonstrating Conclusion 2 and Conclusion 3. (a) input images; (b) classifier weight CAMs; (c) prototype CAMs; (d) final CAMs generated through our SFC; (e) ground truth.

Method                              CAM    CRF    Mask
IRN (Ahn, Cho, and Kwak 2019)       48.8   54.3   66.3
SEAM (Wang et al. 2020)             55.4   56.8   63.6
CONTA (Zhang et al. 2020)           48.8   --     67.9
AdvCAM (Lee, Kim, and Yoon 2021)    55.6   62.1   68.0
RIB (Lee et al. 2021a)              56.5   62.9   70.6
CLIMS (Xie et al. 2022)             56.6   62.4   70.5
ESOL (Li et al. 2022)               53.6   61.4   68.7
SIPE (Chen et al. 2022a)            58.6   64.7   68.0
AMN (Lee, Kim, and Shim 2022)       62.1   65.3   72.2
SFC (Ours)                          64.7   69.4   73.7

Table 1: Evaluation (mIoU (%)) of different pseudo labels on the PASCAL VOC 2012 training set.
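Eq. (6) can be computed directly from the per-class sample counts; below is a small sketch (the guard returning zeros for perfectly balanced data, where every gap is zero, is our addition):

```python
import numpy as np

def distribution_coefficients(n, N):
    """Scaled Distribution Coefficients DC_c of Eq. (6).

    n: per-class sample numbers; N: estimated extra samples per class from IBR.
    Total demand of class c = sum_i |n_c - n_i| / (n_c + N); the scaling factor
    |C| / sum_j demand_j normalises the mean coefficient over classes to 1.
    """
    n = np.asarray(n, dtype=float)
    gaps = np.abs(n[:, None] - n[None, :])     # |n_c - n_i| for every pair
    demand = gaps.sum(axis=1) / (n + N)        # per-class total demand
    total = demand.sum()
    if total == 0:                             # perfectly balanced data: no demand
        return np.zeros_like(demand)
    return (len(n) / total) * demand
```

Tail classes, whose counts are far from the rest, get a much larger coefficient than head classes, matching the intent described above.
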
The scaling factor scales the $\ell_1$ loss magnitude of the foreground classes to the same level as that of the background class. $DC_c$ is finally used to re-weight the consistency loss on class $c$, assigning a higher consistency loss to the class with a higher total demand, as the severity of the over-/under-activation issue is positively related to the total demand. Meanwhile, all images in the current training batch are further down-scaled by a factor of 0.5 through bilinear interpolation and are used to calculate the loss $\mathcal{L}^{W}_{DW}$:

$$\mathcal{L}^{W}_{DW} = \sum_{c=1}^{|C|} DC_c \cdot \Big\| s\big( (M_W(F, W, I))_c \big) - \big( M_W(F_s, W, I_s) \big)_c \Big\|_1, \quad (7)$$

where $s(\cdot)$ denotes the bilinear down-sampling operation; $I_s$ denotes the down-scaled image and $F_s$ denotes its extracted feature. Similar to Eq. (5), we re-weight the consistency loss by the DC coefficient. Considering that the prototype CAM on the down-scaled image is less accurate than the down-scaled classifier weight CAM calculated on the original image (see $\mathcal{L}^{W}_{DW}$ with Multi-Scaled Scheme in the appendix), we calculate the consistency loss between the down-scaled classifier weight CAM on the original image and the classifier weight CAM on the down-scaled image. $\mathcal{L}^{W}_{DW}$ further boosts the performance improvement.

Method                              Backbone    Val    Test
Image-level supervision + Saliency maps.
AuxSegNet (Xu et al. 2021)          ResNet38    69.0   68.6
NSROM (Yao et al. 2021)             ResNet101   70.4   70.2
EPS (Lee et al. 2021b)              ResNet101   71.0   71.8
Image-level supervision only.
SEAM (Wang et al. 2020)             ResNet38    64.5   65.7
PPC+SEAM (Du et al. 2022)           ResNet38    67.7   67.4
ReCAM (Chen et al. 2022b)           ResNet38    68.5   68.4
SIPE (Chen et al. 2022a)            ResNet38    68.2   69.5
SIPE (Chen et al. 2022a)            ResNet101   68.8   69.7
ESOL (Li et al. 2022)               ResNet101   69.9   69.3
AMN (Lee, Kim, and Shim 2022)       ResNet101   70.7   70.6
SFC (Ours)                          ResNet38    70.2   71.4
SFC (Ours)                          ResNet101   71.2   72.5

Table 2: Comparison of semantic segmentation performance on the PASCAL VOC 2012 validation and test sets.
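A sketch of the cross-scale consistency of Eq. (7). Here 2×2 average pooling stands in for the bilinear down-sampling $s(\cdot)$, and the CAMs are taken as given arrays rather than recomputed from the network (both assumptions):

```python
import numpy as np

def downscale(x):
    """0.5x down-scaling by 2x2 average pooling (a stand-in for bilinear s(.))."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def wdw_loss(cam_orig, cam_small, dc):
    """DC-weighted cross-scale consistency of Eq. (7).

    cam_orig: (C, H, W) classifier weight CAMs on the original image;
    cam_small: (C, H/2, W/2) CAMs computed on the down-scaled image;
    dc: (C,) distribution coefficients.
    """
    loss = 0.0
    for c in range(len(dc)):
        # l1 distance between the down-scaled original CAM and the small-image CAM
        loss += dc[c] * np.abs(downscale(cam_orig[c]) - cam_small[c]).mean()
    return loss
```
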
Our multi-scaled distribution-weighted consistency loss $\mathcal{L}_{MSDW}$ is formulated as follows:

$$\mathcal{L}_{MSDW} = \mathcal{L}_{cls} + \mathcal{L}^{P}_{DW} + \mathcal{L}^{W}_{DW}. \quad (8)$$

Inference

The final CAM for inference is calculated as

$$(M_{final})_{\tilde{c}} = \big(M_W(F, W, I)\big)_{\tilde{c}} + \big(M_P(F, P)\big)_{\tilde{c}}, \quad (9)$$

where $(M_{final})_{\tilde{c}}$ denotes the final CAM of class $\tilde{c}$; $\tilde{c}$ can be a foreground or the background class. In this way, the classifier weight CAM is complemented by the prototype CAM, jointly solving the over-/under-activation issue.

Method                              Backbone    Val
Image-level supervision + Saliency maps.
G-WSSS (Li et al. 2021)             ResNet38    28.4
AuxSegNet (Xu et al. 2021)          ResNet38    33.9
EPS (Lee et al. 2021b)              ResNet101   35.7
Image-level supervision only.
IRN (Ahn, Cho, and Kwak 2019)       ResNet50    32.6
SEAM (Wang et al. 2020)             ResNet38    31.9
RIB (Lee et al. 2021a)              ResNet38    43.8
SIPE (Chen et al. 2022a)            ResNet38    43.6
CONTA (Zhang et al. 2020)           ResNet101   33.4
SIPE (Chen et al. 2022a)            ResNet101   40.6
ESOL (Li et al. 2022)               ResNet101   42.6
AMN (Lee, Kim, and Shim 2022)       ResNet101   44.7
SFC (Ours)                          ResNet101   46.8

Table 3: Comparison of semantic segmentation performance on the MS COCO 2014 validation set.

Analysis on SFC

This section demonstrates how shared features in classifier weights cause over-/under-activation issues under a long-tailed scenario, and the working mechanism behind SFC.

Shared Feature Distribution in Classifier Weights

In image-level WSSS, a multi-label soft margin loss (denoted as $\mathcal{L}$) is commonly used for classification model training (Chen et al. 2022a). Following the definitions in (Tan et al.
2021), the positive and negative gradients caused by $\mathcal{L}$ are formulated as follows:

$$(\mathcal{L}'^{pos}_{c})_i = -y_{i,c}\big(\mathrm{sigmoid}(z_{i,c}) - 1\big) > 0, \quad y_{i,c} = 1,$$
$$(\mathcal{L}'^{neg}_{c})_i = -(1 - y_{i,c})\,\mathrm{sigmoid}(z_{i,c}) < 0, \quad y_{i,c} = 0, \quad (10)$$

where $z_{i,c}$ denotes the model-predicted logit of the $i$-th sample on class $c$, and $\mathrm{sigmoid}(z_{i,c})$ is its sigmoid-activated value; $y_{i,c}$ denotes the label of the $i$-th sample on class $c$ (either 0 or 1); $(\mathcal{L}'^{pos}_{c})_i$ and $(\mathcal{L}'^{neg}_{c})_i$ denote the positive and negative gradients on $z_{i,c}$. Considering a simplified case with one head class $H$ and one tail class $T$, based on the conclusion that different classes have shared features (Hou, Yu, and Tao 2022; Liu et al. 2021; Li and Monga 2020), the head-class image feature can be decomposed as $\alpha_H f_H + \eta_H f_0$, where $\alpha_H f_H$ and $\eta_H f_0$ indicate the discriminative and shared feature parts, respectively, with $\alpha_H$ and $\eta_H$ indicating their proportions. Similarly, the tail-class image feature can be decomposed as $\alpha_T f_T + \eta_T f_0$. Then, the head-class classifier weight $W_H$ can be represented as

$$W_H = n_H E(\alpha_H) E(\mathcal{L}'^{pos}_H) f_H + n_T E(\alpha_T) E(\mathcal{L}'^{neg}_H) f_T + \underbrace{\big( n_H E(\eta_H) E(\mathcal{L}'^{pos}_H) + n_T E(\eta_T) E(\mathcal{L}'^{neg}_H) \big) f_0}_{f_0^H:\ \text{shared feature component in } W_H}, \quad (11)$$

where $n_H$ and $n_T$ indicate the sample numbers of the head and tail classes, with $n_H \gg n_T$ under a long-tailed scenario, and $E(\cdot)$ denotes the expectation operation. Similarly, we have the tail-class classifier weight $W_T$ as

$$W_T = n_T E(\alpha_T) E(\mathcal{L}'^{pos}_T) f_T + n_H E(\alpha_H) E(\mathcal{L}'^{neg}_T) f_H + \underbrace{\big( n_T E(\eta_T) E(\mathcal{L}'^{pos}_T) + n_H E(\eta_H) E(\mathcal{L}'^{neg}_T) \big) f_0}_{f_0^T:\ \text{shared feature component in } W_T}. \quad (12)$$

Setting   IBR   $\mathcal{L}^{P}_{DW}$   $\mathcal{L}^{W}_{DW}$   mIoU (%)
Base      --    --    --    55.1
I         ✓     --    --    55.9
II        --    ✓     ✓     62.4
III       ✓     ✓     --    62.4
IV        ✓     --    ✓     58.1
V         ✓     ✓     ✓     64.7

Table 4: Ablation of SFC components. Base and I report the mIoU of $M_W$; the others report the mIoU of $M_{final}$.

The proofs of Eq. (11) and Eq. (12) are provided in Proof 1 of the appendix. Then, as demonstrated in the Gradient Magnitude Analysis of the appendix, the magnitude of $E(\mathcal{L}'^{pos})$ is larger than that of $E(\mathcal{L}'^{neg})$, and the gap is not significant.
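The gradient definitions of Eq. (10) and the $f_0$ coefficients of Eqs. (11)-(12) can be checked numerically; the concrete numbers below are hypothetical, chosen only so that $|E(\mathcal{L}'^{pos})|$ slightly exceeds $|E(\mathcal{L}'^{neg})|$ as stated above:

```python
import numpy as np

def pos_neg_gradients(z, y):
    """Positive/negative gradient terms of Eq. (10) for one logit z with label y."""
    s = 1.0 / (1.0 + np.exp(-z))
    return -y * (s - 1.0), -(1.0 - y) * s

def shared_feature_coeff(n_self, n_other, eta_self, eta_other, g_pos, g_neg):
    """Coefficient of the shared feature f0 in a classifier weight (Eqs. 11-12):
    n_self * E(eta_self) * E(L'pos) + n_other * E(eta_other) * E(L'neg)."""
    return n_self * eta_self * g_pos + n_other * eta_other * g_neg
```

With a long-tailed ratio such as 1000 vs. 10 samples, the coefficient comes out positive for the head-class weight and negative for the tail-class weight, which is the sign pattern that the following Conclusion 1 formalises.
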
Combining with the precondition that $n_H \gg n_T$, it can be concluded that in Eq. (11) we have $n_H E(\mathcal{L}'^{pos}_H) + n_T E(\mathcal{L}'^{neg}_H) > 0$. Similarly, in Eq. (12) we have $n_T E(\mathcal{L}'^{pos}_T) + n_H E(\mathcal{L}'^{neg}_T) < 0$. Then, based on Eq. (11) and Eq. (12), we can have:

Conclusion 1. When $n_H \gg n_T$ and the difference between $E(\eta_H)$ and $E(\eta_T)$ is not as significant as that between $n_H$ and $n_T$, the shared feature component in $W_H$ (i.e., $f_0^H$) tends to be positive with a large magnitude, and the one in $W_T$ (i.e., $f_0^T$) tends to be negative with a large magnitude. However, when $n_H \approx n_T$, the class with a higher $E(\eta)$ will have a larger shared feature in its classifier weight, and the shared feature magnitude will be much lower than when $n_H \gg n_T$.

Over-activation and Under-activation

One extracted feature of a tail-class image area can be decomposed as $A_T = \alpha^A_T f_T + \eta^A_T f_0$ ($\alpha^A_T$ and $\eta^A_T$ are the proportions of $f_T$ and $f_0$). Similarly, one extracted feature of a head-class image area can be decomposed as $A_H = \alpha^A_H f_H + \eta^A_H f_0$. Under a long-tailed scenario (i.e., $n_H \gg n_T$), the head-/tail-class classifier weight activations on $A_T$ and $A_H$ can be formulated as follows:

$$A_T W_H = \big( n_H E(\eta_H) E(\mathcal{L}'^{pos}_H) + n_T E(\eta_T) E(\mathcal{L}'^{neg}_H) \big) \eta^A_T \|f_0\|_2^2 + n_T \alpha^A_T E(\alpha_T) E(\mathcal{L}'^{neg}_H) \|f_T\|_2^2, \quad (13)$$

$$A_H W_H = \big( n_H E(\eta_H) E(\mathcal{L}'^{pos}_H) + n_T E(\eta_T) E(\mathcal{L}'^{neg}_H) \big) \eta^A_H \|f_0\|_2^2 + n_H \alpha^A_H E(\alpha_H) E(\mathcal{L}'^{pos}_H) \|f_H\|_2^2, \quad (14)$$

$$A_T W_T = \big( n_T E(\eta_T) E(\mathcal{L}'^{pos}_T) + n_H E(\eta_H) E(\mathcal{L}'^{neg}_T) \big) \eta^A_T \|f_0\|_2^2 + n_T \alpha^A_T E(\alpha_T) E(\mathcal{L}'^{pos}_T) \|f_T\|_2^2, \quad (15)$$

$$A_H W_T = \big( n_T E(\eta_T) E(\mathcal{L}'^{pos}_T) + n_H E(\eta_H) E(\mathcal{L}'^{neg}_T) \big) \eta^A_H \|f_0\|_2^2 + n_H \alpha^A_H E(\alpha_H) E(\mathcal{L}'^{neg}_T) \|f_H\|_2^2. \quad (16)$$

Setting   $\mathcal{L}^{P}_{DW}$   $\mathcal{L}^{W}_{DW}$   mIoU (%)
VI        --    --    62.0
VII       ✓     --    63.7
VIII      --    ✓     63.6
V         ✓     ✓     64.7

Table 5: Effect of DC. ✓: the presence of DC. VI: plain consistency loss. The mIoU of $M_{final}$ is reported.

Based on Conclusion 1, proved through Proof 2 in the appendix, we have:

Conclusion 2.
When $n_H \gg n_T$, $A_H W_T$ and $A_T W_T$ tend to be unactivated, and $W_T$ has an under-activated tail-class image area compared with the ground truth (as shown in the tail classes of Fig. 3(b)). On the contrary, $A_H W_H$ and $A_T W_H$ tend to be activated, and $W_H$ has an over-activated head-class image area compared with the ground truth (as shown in the head classes of Fig. 3(b)).

On the other hand, the class prototype, extracted through averaging its classifier-weight-activated features, only has positive shared features. Thus, based on Conclusion 2, proved through Proof 3 in the appendix, we have:

Conclusion 3. Let $P_H$ and $P_T$ denote the prototypes of head class $H$ and tail class $T$, and let $A$ denote an image area, including $A_H$ and $A_T$. When $n_H \gg n_T$, $A P_H$ is less activated compared with $A W_H$ (as shown in the head classes of Fig. 3(b) and Fig. 3(c)). On the contrary, $A P_T$ is more activated compared with $A W_T$ (as shown in the tail classes of Fig. 3(b) and Fig. 3(c)).

How SFC Works

As described in Eq. (5), the DW consistency loss pulls closer the prototype CAM and the classifier weight CAM for the pairs $\{A_T W_H, A_T P_H\}$, $\{A_T W_T, A_T P_T\}$, $\{A_H W_H, A_H P_H\}$, and $\{A_H W_T, A_H P_T\}$. Thereby, $W_H$ and $W_T$ are enforced to learn towards the features activated by $P_H$ or $P_T$. When $n_H \gg n_T$, based on Conclusion 3, we have:

CASE 1: For $\{A_T W_H, A_T P_H\}$, $A_T P_H$ is less activated compared with $A_T W_H$. As $A_T$ contains $f_0$ and $f_T$, $W_H$ is optimized towards $-f_0$ and $-f_T$, bringing positive effects for $W_H$ to shrink its CAM on tail-class areas.

CASE 2: For $\{A_T W_T, A_T P_T\}$, $A_T P_T$ is more activated compared with $A_T W_T$. Since $A_T$ contains $f_0$ and $f_T$, $W_T$ is optimized towards $f_0$ and $f_T$, bringing positive effects for $W_T$ to expand its CAM on tail-class areas.

Setting   Classifier weight   Prototype   mIoU (%)
IX        ✓                   --          64.2
X         --                  ✓           62.5
V         ✓                   ✓           64.7

Table 6: IX: inference with $M_W$. X: inference with $M_P$. V: inference with $M_{final}$.

CASE 3: For $\{A_H W_T, A_H P_T\}$, $A_H P_T$ is more activated compared with $A_H W_T$. Since $A_H$ contains $f_0$ and $f_H$, $W_T$ is optimized towards $f_0$ and $f_H$.
As $W_T$ has $-f_0$ and $-f_H$ with large magnitudes (Conclusion 1), optimizing $W_T$ towards positive $f_0$ and $f_H$ hardly brings negative effects.

CASE 4: For $\{A_H W_H, A_H P_H\}$, $A_H P_H$ is less activated compared with $A_H W_H$. Since $A_H$ contains $f_0$ and $f_H$, $W_H$ is optimized towards $-f_0$ and $-f_H$. As $W_H$ has $f_0$ and $f_H$ with large magnitudes (Conclusion 1), optimizing $W_H$ towards $-f_0$ and $-f_H$ hardly brings negative effects.

In summary, classifier weights with severe over-/under-activation issues benefit from CASE 1 and CASE 2, while they are not negatively affected in CASE 3 and CASE 4, improving the overall CAMs as shown in Fig. 3(d). However, when $n_H \approx n_T$, the consistency loss negatively affects the CAM generation. For example, by pulling closer the pair $\{A_H W_T, A_H P_T\}$: as $P_T$ contains $f_0$ and it activates $A_H$, which contains $f_0$ and $f_H$, $W_T$ will be optimized towards $f_0$ and $f_H$. However, $W_T$ does not have $-f_0$ or $-f_H$ with a large magnitude when $n_H \approx n_T$ (Conclusion 1), so increasing $f_H$ and $f_0$ in $W_T$ brings a negative effect. Considering that the consistency loss brings positive effects when $n_H \gg n_T$ and negative effects when $n_H \approx n_T$, we define the total demand on the consistency loss for each class by adding up the sample-number gaps between this class and all other classes, and then regard this total demand as the weight of the consistency loss on this class (i.e., the DC coefficient in Eq. (5)), maximizing the consistency loss effect.

Experiments

Dataset and Evaluation Metric. Experiments are conducted on two benchmarks: PASCAL VOC 2012 (Everingham et al. 2010) with 21 classes and MS COCO 2014 (Lin et al. 2014) with 81 classes. For PASCAL VOC 2012, following (Wang et al. 2020; Lee, Kim, and Yoon 2021; Chen et al. 2022a; Li et al. 2022), we use the augmented SBD set (Hariharan et al. 2011) with 10,582 annotated images. Mean Intersection over Union (mIoU) (Long, Shelhamer, and Darrell 2015) is used to evaluate segmentation results.

Implementation Details. For pseudo-label generation, we adopt the ImageNet (Deng et al.
2009) pre-trained ResNet50 (He et al. 2016). A random cropping size of 512×512 is adopted for training data augmentation. $N_{IBR}$ is set to 4. $M_{final}$ from our method is further post-processed by DenseCRF (Krähenbühl and Koltun 2011) and IRN (Ahn, Cho, and Kwak 2019) to generate the final pseudo labels, which are used to train the segmentation model, a ResNet101-based DeepLabV2 (Ahn, Cho, and Kwak 2019; Chen et al. 2022a). More details can be found in the appendix.

Class sets   Overall   Many   Medium   Few
VI           6.9       5.6    6.5      8.6
V            9.6       9.5    6.5      12.9

Table 7: Average mIoU gains on different class sets. The class-set definitions follow (Wu et al. 2020).

Comparison of Pseudo-label Quality

To validate the effectiveness of our SFC, we evaluate the quality of the intermediate and final results of the pseudo-label generation process in Table 1. Specifically, we first compare the initial CAM generated by the classification model (denoted as CAM). Then, we compare various post-processed CAMs to show the consistent improvements brought by our SFC. Particularly, the original CAM is first refined by CRF (Krähenbühl and Koltun 2011) (denoted as CRF) and further processed by IRN (Ahn, Cho, and Kwak 2019) to generate the final pseudo masks (denoted as Mask). Experimental results in Table 1 show that the CAM from SFC is significantly better than those of previous works on datasets with different class numbers and long-tailed degrees, and our method outperforms state-of-the-art methods by 2.6% on PASCAL VOC. Regarding the CRF-post-processed CAM, we achieve 69.4% mIoU on PASCAL VOC, and further with IRN, our SFC improves the mIoU to 73.7%, achieving a 1.5% gain compared to AMN (Lee, Kim, and Shim 2022).

Comparison of WSSS Performance

In WSSS, the CRF- and IRN-post-processed pseudo masks obtained from the initial CAM are treated as ground truth to train the semantic segmentation model in a fully supervised manner.
Table 2 reports the mIoU scores of our method and recent WSSS methods on the validation and test sets of PASCAL VOC 2012. On this dataset, we achieve 71.2% and 72.5% mIoU using an ImageNet pre-trained backbone, outperforming all other WSSS methods that use only image-level labels or both image-level labels and saliency maps (Xu et al. 2021; Yao et al. 2021; Lee et al. 2021b). Table 3 reports the performance comparison on MS COCO 2014. Using the same training scheme as in the PASCAL VOC 2012 experiment, our method achieves 46.8% mIoU on the validation set with a ResNet101 backbone, outperforming AMN (Lee, Kim, and Shim 2022) by 2.1%.

Ablation Studies

Figure 4: CAM visualization results (head and tail classes) under the settings Base, I, II, and V.

In Table 4, we first verify the effectiveness of the SFC components, i.e., Image Bank Re-sampling (IBR) and the Multi-Scaled Distribution-Weighted (MSDW) consistency loss (including $\mathcal{L}^{P}_{DW}$ and $\mathcal{L}^{W}_{DW}$). 'Base' is the classifier weight CAM in Eq. (2). In Setting I, IBR increases the mIoU of the 'Base' CAM by 0.8%, showing that increasing the tail-class sampling frequency can alleviate the over-/under-activation issues. In Setting II, the CAM generated by SFC without IBR has a lower mIoU score than SFC (Setting V) by 2.3%, showing that increasing the tail-class sampling frequency can boost the effectiveness of $\mathcal{L}_{MSDW}$. The result of Setting III shows that $\mathcal{L}^{W}_{DW}$ can boost the performance improvement brought by $\mathcal{L}^{P}_{DW}$. However, Setting IV indicates that using $\mathcal{L}^{W}_{DW}$ alone fails to calibrate the shared features in the down-scaled feature space, and its performance drops significantly. Table 5 studies the effectiveness of DC in Eq. (6). Setting VI shows the SFC performance without DC in both $\mathcal{L}^{P}_{DW}$ and $\mathcal{L}^{W}_{DW}$. Settings VII and VIII show the performances of removing DC only from $\mathcal{L}^{W}_{DW}$ or $\mathcal{L}^{P}_{DW}$, respectively. The results show that DC effectively adjusts the consistency loss weights of each class, bringing significant improvement.
Table 6 shows that the CAM combination $M_{final}$ in Eq. (9) (Setting V) achieves the highest performance compared with using $M_W$ or $M_P$ alone during inference, demonstrating that it is better to complement $M_W$ with $M_P$ for SFC. Table 7 shows the average performance gains on different class sets with or without our DC coefficient. The plain consistency loss (Setting VI) achieves almost the same gains across the 'Many', 'Medium', and 'Few' classes. However, head and tail classes (i.e., the 'Many' and 'Few' classes) actually need higher magnitudes of consistency loss to overcome the over-/under-activation issues. With the help of the DC coefficient (Setting V), the head and tail classes achieve larger mIoU gains. Besides, we also study the qualitative effects of the SFC components in Fig. 4. It can be seen that Base with IBR (Setting I) improves the CAM, as it increases the tail-class sampling frequency and calibrates the shared features in classifier weights. However, the improvements are limited (e.g., the shared feature 'wheel' is not activated), as IBR cannot balance the training data completely. When only using $\mathcal{L}_{MSDW}$ (Setting II), the CAMs are improved significantly but are still not perfect, as the tail-class sampling frequency and the optimization frequency of $\mathcal{L}_{MSDW}$ are low. By using the complete SFC (Setting V), we achieve decent CAM results.

Conclusion

In this paper, we first demonstrate that shared features can cause over-/under-activation issues in CAM generation under a long-tailed scenario, and then propose a novel Shared Feature Calibration (SFC) method for solving such issues, achieving new state-of-the-art performance. Our work provides a new perspective for improving CAM accuracy in image-level weakly supervised semantic segmentation, and other possible solutions will be investigated in future work.
Acknowledgements

This work was supported by the National Key R&D Program of China (No. 2022YFE0200300), the National Natural Science Foundation of China (No. 61972323, 62331003), the Suzhou Basic Research Program (SYG202316), XJTLU REF-22-01-010, the XJTLU AI University Research Centre, the Jiangsu Province Engineering Research Centre of Data Science and Cognitive Computation at XJTLU, the SIP AI innovation platform (YZCXPT2022103), and the Suzhou Municipal Key Laboratory for Intelligent Virtual Engineering (SZS2022004).

References

Ahn, J.; Cho, S.; and Kwak, S. 2019. Weakly supervised learning of instance segmentation with inter-pixel relations. In CVPR.
Chen, J.; Cong, R.; Yuxuan, L.; Ip, H.; and Kwong, S. 2023. Saving 100x Storage: Prototype Replay for Reconstructing Training Sample Distribution in Class-Incremental Semantic Segmentation. In NeurIPS.
Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K. P.; and Yuille, A. L. 2016. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. PAMI, 40: 834–848.
Chen, Q.; Yang, L.; Lai, J.-H.; and Xie, X. 2022a. Self-supervised image-specific prototype exploration for weakly supervised semantic segmentation. In CVPR.
Chen, Z.; Wang, T.; Wu, X.; Hua, X.-S.; Zhang, H.; and Sun, Q. 2022b. Class re-activation maps for weakly-supervised semantic segmentation. In CVPR.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In CVPR.
Du, Y.; Fu, Z.; Liu, Q.; and Wang, Y. 2022. Weakly supervised semantic segmentation by pixel-to-prototype contrast. In CVPR.
Everingham, M.; Van Gool, L.; Williams, C. K.; Winn, J.; and Zisserman, A. 2010. The pascal visual object classes (voc) challenge. IJCV, 88: 303–338.
Hariharan, B.; Arbeláez, P.; Bourdev, L.; Maji, S.; and Malik, J. 2011. Semantic contours from inverse detectors. In ICCV.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR.
Hou, Z.; Yu, B.; and Tao, D. 2022. Batchformer: Learning to explore sample relationships for robust representation learning. In CVPR.
Krähenbühl, P.; and Koltun, V. 2011. Efficient inference in fully connected crfs with gaussian edge potentials. In NeurIPS.
Lee, J.; Choi, J.; Mok, J.; and Yoon, S. 2021a. Reducing information bottleneck for weakly supervised semantic segmentation. In NeurIPS.
Lee, J.; Kim, E.; and Yoon, S. 2021. Anti-adversarially manipulated attributions for weakly and semi-supervised semantic segmentation. In CVPR.
Lee, M.; Kim, D.; and Shim, H. 2022. Threshold matters in WSSS: manipulating the activation for the robust and accurate segmentation model against thresholds. In CVPR.
Lee, S.; Lee, M.; Lee, J.; and Shim, H. 2021b. Railroad is not a train: Saliency as pseudo-pixel supervision for weakly supervised semantic segmentation. In CVPR.
Li, J.; Jie, Z.; Wang, X.; Wei, X.; and Ma, L. 2022. Expansion and shrinkage of localization for weakly-supervised semantic segmentation. In NeurIPS.
Li, X.; and Monga, V. 2020. Group based deep shared feature learning for fine-grained image classification. arXiv preprint arXiv:2004.01817.
Li, X.; Zhou, T.; Li, J.; Zhou, Y.; and Zhang, Z. 2021. Group-wise semantic mining for weakly supervised semantic segmentation. In AAAI.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In ECCV.
Lin, Y.; Chen, M.; Wang, W.; Wu, B.; Li, K.; Lin, B.; Liu, H.; and He, X. 2023. Clip is also an efficient segmenter: A text-driven approach for weakly supervised semantic segmentation. In CVPR.
Liu, F.; Gao, C.; Sun, Y.; Zhao, Y.; Yang, F.; Qin, A.; and Meng, D. 2021. Infrared and Visible Cross-Modal Image Retrieval Through Shared Features. TCSVT, 31: 4485–4496.
Long, J.; Shelhamer, E.; and Darrell, T. 2015.
Fully convolutional networks for semantic segmentation. In CVPR.
Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; and Terzopoulos, D. 2021. Image segmentation using deep learning: A survey. PAMI, 44(7): 3523–3542.
Peng, Y.; He, X.; and Zhao, J. 2017. Object-part attention model for fine-grained image classification. TIP, 27(3): 1487–1500.
Sun, G.; Wang, W.; Dai, J.; and Van Gool, L. 2020. Mining cross-image semantics for weakly supervised semantic segmentation. In ECCV.
Tan, J.; Lu, X.; Zhang, G.; Yin, C.; and Li, Q. 2021. Equalization loss v2: A new gradient balance approach for long-tailed object detection. In CVPR.
Vasconcelos, C.; Birodkar, V.; and Dumoulin, V. 2022. Proper reuse of image classification features improves object detection. In CVPR, 13628–13637.
Wang, Y.; Zhang, J.; Kan, M.; Shan, S.; and Chen, X. 2020. Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation. In CVPR.
Wu, T.; Huang, Q.; Liu, Z.; Wang, Y.; and Lin, D. 2020. Distribution-balanced loss for multi-label classification in long-tailed datasets. In ECCV.
Xie, J.; Hou, X.; Ye, K.; and Shen, L. 2022. CLIMS: cross language image matching for weakly supervised semantic segmentation. In CVPR.
Xu, L.; Ouyang, W.; Bennamoun, M.; Boussaid, F.; Sohel, F.; and Xu, D. 2021. Leveraging auxiliary tasks with affinity learning for weakly supervised semantic segmentation. In CVPR.
Xu, L.; Ouyang, W.; Bennamoun, M.; Boussaid, F.; and Xu, D. 2022. Multi-class token transformer for weakly supervised semantic segmentation. In CVPR.
Xu, R.; Wang, C.; Sun, J.; Xu, S.; Meng, W.; and Zhang, X. 2023. Self correspondence distillation for end-to-end weakly-supervised semantic segmentation. In AAAI.
Yao, H.; Zhang, S.; Yan, C.; Zhang, Y.; Li, J.; and Tian, Q. 2017. AutoBD: Automated bi-level description for scalable fine-grained visual categorization. TIP, 27(1): 10–23.
Yao, Y.; Chen, T.; Xie, G.-S.; Zhang, C.; Shen, F.; Wu, Q.; Tang, Z.; and Zhang, J. 2021. Non-salient region object mining for weakly supervised semantic segmentation. In CVPR.
Yoon, S.-H.; Kweon, H.; Cho, J.; Kim, S.; and Yoon, K.-J. 2022. Adversarial erasing framework via triplet with gated pyramid pooling layer for weakly supervised semantic segmentation. In ECCV.
Zhang, B.; Xiao, J.; Jiao, J.; Wei, Y.; and Zhao, Y. 2021a. Affinity attention graph neural network for weakly supervised semantic segmentation. PAMI, 44(11): 8082–8096.
Zhang, B.; Xiao, J.; Wei, Y.; and Zhao, Y. 2023a. Credible Dual-Expert Learning for Weakly Supervised Semantic Segmentation. IJCV, 131: 1892–1908.
Zhang, D.; Zhang, H.; Tang, J.; Hua, X.-S.; and Sun, Q. 2020. Causal intervention for weakly-supervised semantic segmentation. In NeurIPS.
Zhang, F.; Gu, C.; Zhang, C.; and Dai, Y. 2021b. Complementary patch for weakly supervised semantic segmentation. In ICCV.
Zhang, G.; Wang, L.; Kang, G.; Chen, L.; and Wei, Y. 2023b. SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model. arXiv preprint arXiv:2303.05118.
Zhang, Z.; Gao, G.; Fang, Z.; Jiao, J.; and Wei, Y. 2022. Mining Unseen Classes via Regional Objectness: A Simple Baseline for Incremental Segmentation. NeurIPS, 35: 24340–24353.
Zheng, H.; Fu, J.; Mei, T.; and Luo, J. 2017. Learning multi-attention convolutional neural network for fine-grained image recognition. In ICCV.
Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; and Torralba, A. 2016. Learning deep features for discriminative localization. In CVPR.
Zhu, H.; Wei, Y.; Liang, X.; Zhang, C.; and Zhao, Y. 2023. CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation. In ICCV, 22257–22267.
Unifying Multi-Modal Uncertainty Modeling and Semantic Alignment for Text-to-Image Person Re-identification

Zhiwei Zhao1,2, Bin Liu1,2*, Yan Lu3, Qi Chu1,2, Nenghai Yu1,2
1School of Cyber Science and Technology, University of Science and Technology of China
2CAS Key Laboratory of Electromagnetic Space Information
3Shanghai AI Laboratory
zwzhao98@mail.ustc.edu.cn, {flowice,qchu,ynh}@ustc.edu.cn, luyan@pjlab.org.cn

Abstract

Text-to-Image person re-identification (TI-ReID) aims to retrieve the images of a target identity according to a given textual description. Existing methods in TI-ReID focus on aligning the visual and textual modalities through contrastive feature alignment or reconstructive masked language modeling (MLM). However, these methods parameterize image/text instances as deterministic embeddings and do not explicitly consider the inherent uncertainty in pedestrian images and their textual descriptions, leading to limited image-text relationship expression and semantic alignment. To address the above problem, in this paper, we propose a novel method that unifies multi-modal uncertainty modeling and semantic alignment for TI-ReID. Specifically, we model the image and textual feature vectors of pedestrians as Gaussian distributions, where the multi-granularity uncertainty of the distribution is estimated by incorporating batch-level and identity-level feature variances for each modality. The multi-modal uncertainty modeling acts as a feature augmentation and provides richer image-text semantic relationships. Then we present a bi-directional cross-modal circle loss to more effectively align the probabilistic features between image and text in a self-paced manner. To further promote more comprehensive image-text semantic alignment, we design a task that complements the masked language modeling, focusing on the cross-modality semantic recovery of the global masked token after cross-modal interaction.
Extensive experiments conducted on three TI-ReID datasets highlight the effectiveness and superiority of our method over state-of-the-art methods.

Introduction

Text-to-Image person re-identification (TI-ReID) is a sub-task of person re-identification (Ye et al. 2022), aiming to retrieve the pedestrian images that best match a given textual description from an image gallery. Leveraging the fact that a textual description of the query is easier to obtain than an actual image, this technology offers a more versatile and user-friendly person-search manner. Given its practical applicability in the domain of public safety, TI-ReID has gained increasing attention in recent years.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1 examples (three descriptions of the same identity): (1) The woman has long brown hair and she is walking. She is wearing a dark red and purple coat, black pants and a pair of black shoes. (2) A lady with long brown hair, a rose striped coat, black trousers and black boots. She is carrying a rose red bag and a yellow handbag. (3) A woman with long hair, dark coat, white work card around her neck, black trousers and black high-heeled boots is walking.

Figure 1: (a) The inherent uncertainty of pedestrian images and text descriptions in TI-ReID. (b) Current TI-ReID methods do not explicitly depict the uncertainty and parameterize visual-textual data as deterministic embeddings. (c) We model image/text embeddings as distributions and estimate the multi-granularity distribution uncertainty to express more reasonable and richer image-text relationships.

Compared to general image-text retrieval, TI-ReID is more challenging. It requires a fine-grained understanding
of the complex semantic concepts of pedestrians across the image and text modalities, as well as the establishment of cross-modal correspondences to bridge the inherent modality gap. Existing TI-ReID methods mainly revolve around aligning the image and text descriptions of pedestrians into a shared space. They can be classified into cross-modal interaction-free (Zhang and Lu 2018; Han et al. 2021; Sarafianos, Xu, and Kakadiaris 2019; Wang et al. 2020) and cross-modal interaction-based (Li et al. 2017; Niu et al. 2020; Gao et al. 2021; Jiang and Ye 2023) methods. The former primarily utilize contrastive alignment (Zhang and Lu 2018; Han et al. 2021) to embed image-text features into a shared space. In contrast, the latter employ the cross-attention mechanism (Niu et al. 2020; Farooq et al. 2022) and masked language modeling (Jiang and Ye 2023) to build fine-grained correlation between image regions and textual entities. While successful to some extent, these methods have not explicitly considered the inherent uncertainty between pedestrian images and textual descriptions. As shown in Fig. 1 (a), uncertainty in pedestrian images arises from factors like viewpoint variations and lighting changes, while in textual descriptions, it stems from word synonymy and the granularity of annotations. Furthermore, the intra-modal uncertainty results in the same identity being associated with multiple perspectives of textual descriptions. Actually, this uncertainty reflects a reasonable range of semantic variation for image and text. Neglecting such uncertainty limits the semantic understanding and alignment capabilities for complex image-text relationships. This motivates us to explicitly model and utilize the uncertainty inherent in visual-textual data. In view of this, in this paper, we propose a novel approach that unifies multi-modal uncertainty modeling and semantic alignment for text-to-image person Re-ID. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7534
Specifically, we first propose multi-modal uncertainty modeling (MUM) for TI-ReID, which characterizes the global features of pedestrian images and textual descriptions as Gaussian distributions. For each modality, MUM estimates the multi-granularity uncertainty of the distribution by combining batch-level and identity-level feature variance. The batch-level variance generally provides a coarse-grained reflection of modality-level uncertainty, while the identity-level variance captures the scope of fine-grained semantic variation. Random sampling from these probabilistic distributions acts as a multi-modal feature augmentation, which effectively enhances the diversity of image-text features and enriches more reasonable and meaningful image-text semantic relationships during the training phase. After utilizing the multi-modal uncertainty modeling to convey more comprehensive semantic relationships, it is essential to further strengthen the capability of cross-modal semantic alignment. We first develop a bi-directional cross-modal circle loss (cm-Circle) to more effectively align the probabilistic image and text features sampled from the distributions. Our cm-Circle loss is built upon the circle loss (Sun et al. 2020) in image retrieval and focuses on optimizing the similarity of cross-modal pairs from text-to-image and image-to-text in a self-paced manner. It can adaptively strengthen the alignment for under-optimized image-text pairs and well preserve the intra-modality structures. In addition, considering that current MLM-based methods (Jiang and Ye 2023) only focus on utilizing visual context to recover the vocabulary semantics of masked local text tokens, we devise a task to recover the cross-modal semantics of the masked global text token after the cross-modal interaction. This task (termed cm-GSR) employs cross-modal contrastive reconstruction as a supervisory signal, complementing the MLM and promoting comprehensive image-text semantic alignment and interaction.
The multi-modal uncertainty modeling and semantic alignment objectives are integrated into a unified framework for end-to-end optimization. Our main contributions can be summarized as follows: • We present multi-modal uncertainty modeling for text-to-image person Re-ID, which uses Gaussian distributions to depict image/text features and estimates the multi-granularity uncertainty. It acts as feature augmentation and conveys richer image-text relationships. • To enhance comprehensive image-text semantic alignment, we present a bi-directional cross-modal circle loss to align probabilistic image and text features more effectively, and propose to recover the cross-modal semantics of the masked global text token after cross-modal interaction. • We unify the multi-modal uncertainty modeling and semantic alignment into a joint learning framework. Extensive experiments on three text-to-image person Re-ID datasets show the effectiveness and superiority of our approach against the state of the art. Related Work Text-to-Image Person Re-identification Current TI-ReID methods can be roughly classified into cross-modal interaction-based and interaction-free methods. The interaction-based methods (Li et al. 2017; Niu et al. 2020; Wang et al. 2020; Farooq et al. 2022; Yan et al. 2022; Jiang and Ye 2023) utilize attention mechanisms to build fine-grained cross-modal correspondences between image regions and textual entities. Niu et al. (Niu et al. 2020) leveraged cross-attention to conduct relation-guided alignment between image regions and textual phrases and sentences. Gao et al. (Gao et al. 2021) proposed a contextual non-local attention mechanism to align full-scale image and textual features. Jiang et al. (Jiang and Ye 2023) further designed a cross-modal interaction transformer and used the masked language modeling (MLM) task to achieve implicit fine-grained alignment. The cross-modal interaction-free methods (Zheng et al. 2020; Zhang and Lu 2018; Han et al. 2021; Wang et al.
2020) focus on upgrading model structures and designing contrastive-style loss functions to extract and align image-text representations. Benefiting from the advancements of vision-language pre-training (VLP) (Radford et al. 2021), the encoders for image and text modalities in TI-ReID have undergone upgrades, transitioning from ResNet (He et al. 2016) and BERT (Devlin et al. 2018) to CLIP-based encoders (Radford et al. 2021). The representative loss functions in TI-ReID include the cross-modal projection matching (CMPM) loss (Zhang and Lu 2018), the cross-modality contrastive loss (Han et al. 2021), and the similarity distribution matching (SDM) loss (Jiang and Ye 2023). Nevertheless, the above approaches fail to consider the inherent uncertainty in pedestrian images and their corresponding textual descriptions, leading to limited image-text understanding and alignment capability. Furthermore, the MLM-based method (Jiang and Ye 2023) solely focuses on semantic recovery for masked local text tokens, disregarding the global masked token. Additionally, the contrastive-style losses overlook the varying learning difficulty among different cross-modal samples. In this work, we explicitly model the multi-modal uncertainty and promote more effective semantic alignment for TI-ReID. Uncertainty Modeling in Computer Vision Uncertainty modeling, which aims to capture the intrinsic “randomness” in the data, has been receiving increasing attention in computer vision. In face recognition and person Re-ID, the DUL (Chang et al. 2020) and DistributionNet (Yu et al. 2019) employed Gaussian distributions to model face/person embeddings and used a learnable sub-network to estimate uncertainty, reflecting the quality of facial/person features. In domain generalization, the DSU (Li et al. 2022)
modeled the uncertainty of feature statistics to generate diverse domain shifts. In cross-modality retrieval, the PCME (Chun et al. 2021) presented the probabilistic cross-modal embedding and predicted the mean and variance with learnable sub-networks. In vision-language pre-training, the MAP method (Ji et al. 2023) modeled image-text features as probabilistic distributions and utilized a learnable multi-head self-attention module to estimate uncertainty. In this paper, we present multi-modal uncertainty modeling for the first time in text-to-image person Re-ID. Figure 2: The overview framework of our proposed method for TI-ReID. Given the image and text inputs, we first present multi-modal uncertainty modeling to represent them as Gaussian distributions and estimate multi-granularity distribution uncertainty by jointly utilizing batch-level and identity-level feature variances. Subsequently, for further enhancing cross-modal semantic alignment, we propose the cross-modal circle loss (cm-Circle) to more effectively align the probabilistic image-text features in a self-paced manner and present the cm-GSR task to promote more comprehensive image-text interaction and alignment.
We estimate the distribution uncertainty for each image/text instance at multiple granularities by jointly using batch-level and identity-level feature variances, which is more suitable for TI-ReID and expresses richer image-text semantic relationships. Method In this section, we present the joint multi-modal uncertainty modeling and semantic alignment method. An overview of the framework is illustrated in Figure 2 and we delve into its specific details in the following subsections. Image-Text Dual Encoder The inputs consist of image-text pairs, represented as $\{v_i, t_i, y_i\}_{i=1}^{B}$, where $v_i$, $t_i$, and $y_i$ refer to the image, text, and identity label, respectively. $B$ is the batch size. Image Encoder. We use a CLIP pre-trained Vision Transformer (ViT) to obtain the image embedding from an input image $v_i \in \mathbb{R}^{H \times W \times C}$. The image is split into a sequence of $N = HW/P^2$ patches, with $P$ denoting the patch size. A trainable linear projection is applied to map these patches to 1D tokens $\{f^v_n\}_{n=1}^{N}$. The positional embeddings and [CLS] token are added to the token sequence. The resulting sequence of tokens is then processed through multiple transformer blocks to model relations between patches and obtain the sequence of contextual image embeddings $\{f^v_{cls}, f^v_1, \cdots, f^v_N\}$, where $f^v_{cls}$ serves as the global image representation $g^v_i \in \mathbb{R}^{512}$. Text Encoder. For input text $t_i$, the CLIP text encoder is used to extract the text representation. The text description is tokenized and enclosed with [SOS] and [EOS] tokens to indicate the sequence's beginning and end. Following recent methods (Shu et al. 2023; Wei et al. 2023), we randomly mask the word tokens of the input text $t_i$ with a probability (usually 15% or 30%) and replace them with the special token [MASK] during training.
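The random token-masking step described above can be sketched as follows. This is an illustrative Python sketch over generic integer token ids; the mask id value and the absence of BERT-style 80/10/10 refinements are simplifying assumptions, not the actual CLIP tokenizer.

```python
# Illustrative sketch of random [MASK] replacement on a token sequence.
import random

MASK_ID = 103  # hypothetical id of the [MASK] token

def random_mask(token_ids, mask_rate=0.3, seed=0):
    """Replace each token with MASK_ID with probability mask_rate,
    returning the masked sequence and the masked positions."""
    rng = random.Random(seed)
    masked, positions = [], []
    for k, tok in enumerate(token_ids):
        if rng.random() < mask_rate:
            masked.append(MASK_ID)
            positions.append(k)
        else:
            masked.append(tok)
    return masked, positions

tokens = [5, 17, 42, 7, 99, 23, 8, 61]   # toy word-token ids
masked, pos = random_mask(tokens, mask_rate=0.3)
print(len(masked) == len(tokens))        # True: length is preserved
```

The recorded positions correspond to the masked indexes needed later by the MLM objective.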
The masked text sequence is then fed into the transformer to obtain the sequence of contextual text embeddings $\{f^t_{sos}, f^t_1, \cdots, f^t_{eos}\}$, where the transformer uses masked self-attention to capture correlations among tokens. Finally, the embedding at the [EOS] token, $f^t_{eos}$, is treated as the global text feature $g^t_i \in \mathbb{R}^{512}$. Multi-Modal Uncertainty Modeling The inherent uncertainty of pedestrian images and textual descriptions reflects a reasonable range of semantic variation. This motivates us to explicitly model and utilize the uncertainty in visual-textual data. By employing this uncertainty for feature augmentation of visual-textual instances, it can effectively express more reasonable image-text semantic relationships and contribute diverse semantic alignment. We suggest that, by incorporating potential uncertainties, the global features of each pedestrian image and textual description conform to specific Gaussian distributions. Therefore, the key lies in efficiently and comprehensively estimating the uncertainty of the distributions for pedestrian images and texts. We propose multi-modal uncertainty modeling (MUM), which estimates the uncertainty of the distribution for the image and text modalities by considering both the batch-level and identity-level feature variance. We believe that for each modality, the variance of feature embeddings within a mini-batch primarily provides a coarse-grained perspective of image/text uncertainty and can be calculated by Eq. (1),
$$\Sigma^2_{batch}(\mathcal{V}) = \frac{1}{B}\sum_{i=1}^{B}\left(g^v_i - \mathbb{E}_b[g^v]\right)^2, \quad \Sigma^2_{batch}(\mathcal{T}) = \frac{1}{B}\sum_{i=1}^{B}\left(g^t_i - \mathbb{E}_b[g^t]\right)^2, \tag{1}$$
where $\Sigma_{batch}(\mathcal{V}), \Sigma_{batch}(\mathcal{T}) \in \mathbb{R}^{512}$ represent the coarse-grained uncertainty for the image/text modalities, respectively. However, solely estimating modality-level coarse-grained uncertainty is insufficient for the fine-grained TI-ReID task, so we proceed to depict the important fine-grained uncertainty by considering the identity label.
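Eq. (1) reduces to a per-dimension variance over the batch axis. A minimal numpy sketch, with synthetic stand-in features in place of the encoder outputs:

```python
# Minimal numpy sketch of Eq. (1): batch-level (coarse-grained) uncertainty
# is the per-dimension variance of global embeddings within a mini-batch.
# The (64, 512) batches below are synthetic stand-ins for encoder outputs.
import numpy as np

def batch_level_uncertainty(g):
    """Sigma^2_batch: per-dimension variance over the batch axis."""
    mean = g.mean(axis=0)                  # E_b[g]
    return ((g - mean) ** 2).mean(axis=0)  # shape (D,)

rng = np.random.default_rng(0)
g_v = rng.standard_normal((64, 512))       # global image features g^v_i
g_t = rng.standard_normal((64, 512))       # global text features g^t_i
sigma2_v = batch_level_uncertainty(g_v)    # Sigma^2_batch(V)
sigma2_t = batch_level_uncertainty(g_t)    # Sigma^2_batch(T)
print(sigma2_v.shape, sigma2_t.shape)      # (512,) (512,)
```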
For the visual and textual modalities, the identity-level feature variances are calculated to capture the local scope of semantic variations specific to individuals. Given the difficulty of estimating identity-level variances through a randomly sampled mini-batch, we employ two memory banks $M_{\mathcal{V}}$ and $M_{\mathcal{T}}$, composed of first-in-first-out dynamic queues, to respectively record a significant amount of global visual and textual features from image-text pairs in past and current iterations. Specifically, $M_{\mathcal{V}} = \{h^v_i\}_{i=1}^{|M|}$ and $M_{\mathcal{T}} = \{h^t_i\}_{i=1}^{|M|}$, where $h^v_i$ and $h^t_i$ are the global visual and textual features recorded in the memory banks. $|M|$ denotes the size of the memory bank and is set to 65536. Then the identity-level feature variances for each identity in the image and text modalities can be derived by
$$\Sigma^2_{ID}(\mathcal{V}_y) = \frac{1}{|M_{\mathcal{V}_y}|}\sum_{i=1}^{|M|}\mathbb{1}(y_i = y)\left(h^v_i - \mathbb{E}_m[h^v_y]\right)^2, \quad \Sigma^2_{ID}(\mathcal{T}_y) = \frac{1}{|M_{\mathcal{T}_y}|}\sum_{i=1}^{|M|}\mathbb{1}(y_i = y)\left(h^t_i - \mathbb{E}_m[h^t_y]\right)^2, \tag{2}$$
where $\Sigma_{ID}(\mathcal{V}_y), \Sigma_{ID}(\mathcal{T}_y) \in \mathbb{R}^{512}$ indicate the fine-grained uncertainty of the $y$-th identity in the vision and text modality, respectively. $\mathbb{1}(y_i = y)$ is the indicator function, and $|M_{\mathcal{V}_y}|$ is the number of samples of the $y$-th identity in the memory. We then unify the coarse-grained and fine-grained uncertainty through weighted coupling to estimate the multi-granularity uncertainty $\Sigma_{unify}(\mathcal{V}_y)$ and $\Sigma_{unify}(\mathcal{T}_y)$ for each image/text instance by Eq. (3), where $\omega \in (0, 1)$ is the coupling factor and $s$ is the scale factor.
$$\Sigma_{unify}(\mathcal{V}_y) = s \ast \left(\omega \ast \Sigma_{batch}(\mathcal{V}) + (1 - \omega) \ast \Sigma_{ID}(\mathcal{V}_y)\right), \quad \Sigma_{unify}(\mathcal{T}_y) = s \ast \left(\omega \ast \Sigma_{batch}(\mathcal{T}) + (1 - \omega) \ast \Sigma_{ID}(\mathcal{T}_y)\right). \tag{3}$$
The multi-granularity uncertainty $\Sigma_{unify}(\mathcal{V}_y)/\Sigma_{unify}(\mathcal{T}_y)$ not only captures modality-related coarse-grained global uncertainty patterns, but also encompasses fine-grained identity-related variations. With such multi-modal uncertainty modeling, it expands the reasonable and meaningful semantic distribution range for each visual/textual feature.
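Eqs. (2)-(3) can be sketched as follows, assuming the memory bank is kept as two parallel arrays of banked features and identity labels (a simplification of the paper's FIFO queues; the toy shapes and values are illustrative, not the released implementation):

```python
# Sketch of identity-level variance (Eq. 2) and weighted coupling (Eq. 3),
# with the memory bank as parallel arrays: feats (|M|, D), labels (|M|,).
import numpy as np

def identity_level_uncertainty(feats, labels, y):
    """Sigma^2_ID for identity y: per-dimension variance of the banked
    features belonging to that identity."""
    sel = feats[labels == y]
    return ((sel - sel.mean(axis=0)) ** 2).mean(axis=0)

def unify_uncertainty(sigma_batch, sigma_id, omega=0.25, s=0.25):
    """Eq. (3): weighted coupling of coarse- and fine-grained uncertainty."""
    return s * (omega * sigma_batch + (1.0 - omega) * sigma_id)

rng = np.random.default_rng(1)
bank = rng.standard_normal((256, 8))          # toy memory bank, D = 8
ids = rng.integers(0, 4, size=256)            # identity label per entry
sigma_id = np.sqrt(identity_level_uncertainty(bank, ids, y=2))
sigma_batch = bank.std(axis=0)
print(unify_uncertainty(sigma_batch, sigma_id).shape)   # (8,)
```

In practice the FIFO queues would evict the oldest features as new batches arrive; the fixed arrays here stand in for one snapshot of that state.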
Each visual/textual feature is established as a Gaussian distribution with this uncertainty, denoted as $p^v_i \sim \mathcal{N}(g^v_i, \Sigma^2_{unify}(\mathcal{V}_{y_i}))$ and $p^t_i \sim \mathcal{N}(g^t_i, \Sigma^2_{unify}(\mathcal{T}_{y_i}))$, respectively. Then probabilistic features can be randomly drawn from the above distributions with the re-parameterization trick as follows:
$$p^v_i = g^v_i + \epsilon^v_i \ast \Sigma_{unify}(\mathcal{V}_{y_i}), \quad \epsilon^v_i \sim \mathcal{N}(0, I), \qquad p^t_i = g^t_i + \epsilon^t_i \ast \Sigma_{unify}(\mathcal{T}_{y_i}), \quad \epsilon^t_i \sim \mathcal{N}(0, I), \tag{4}$$
where $\epsilon^v_i$ and $\epsilon^t_i$ are individually sampled from standard normal distributions. By randomly sampling from the above distributions for each image/text instance, it can generate more reasonable features with different directions and intensities and express richer image-text semantic relationships. Cross-Modal Semantic Alignment After conveying richer visual-textual semantic relationships through the proposed multi-modal uncertainty modeling, we need to enhance the visual-textual semantic alignment to accommodate more diverse features. We first employ the commonly used similarity distribution matching (SDM) loss (Jiang and Ye 2023) in TI-ReID to initially align the probabilistic image-text features. It minimizes the KL divergence between the distributions of image-text similarity $\pi_{i,j}$ and the normalized distributions of matching labels $q_{i,j}$ as follows:
$$\mathcal{L}^{t2v}_{SDM} = \frac{1}{B}\sum_{i=1}^{B}\sum_{j=1}^{B} \pi_{i,j} \cdot \log\frac{\pi_{i,j}}{q_{i,j} + \delta}, \tag{5}$$
$$\pi_{i,j} = \frac{\exp\left(p^t_i \cdot p^v_j / \tau\right)}{\sum_{k=1}^{B}\exp\left(p^t_i \cdot p^v_k / \tau\right)}, \quad q_{i,j} = \frac{l_{i,j}}{\sum_{k=1}^{B} l_{i,k}}, \tag{6}$$
where $\tau$ is the temperature coefficient and $p^t_i \cdot p^v_j$ is the cosine similarity. $l_{i,j} = 1$ means that $(t_i, v_j)$ is a positive pair with the same identity, while $l_{i,j} = 0$ indicates a negative pair; $\delta$ is a small number to avoid numerical issues. The total SDM loss is $\mathcal{L}_{SDM} = \mathcal{L}^{t2v}_{SDM} + \mathcal{L}^{v2t}_{SDM}$. To further enhance the semantic alignment between probabilistic global image and text features more efficiently, we present a bi-directional cross-modal circle loss (termed cm-Circle) for TI-ReID, inspired by the circle loss (Sun et al. 2020) in the image retrieval task.
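Under the same simplifications as before, the re-parameterized sampling of Eq. (4) and the text-to-image SDM loss of Eqs. (5)-(6) look like the following; the feature sizes, temperature, and synthetic inputs are illustrative assumptions, not the released code:

```python
# Numpy sketch of Eq. (4) (re-parameterization) and Eqs. (5)-(6) (SDM loss).
import numpy as np

def sample_probabilistic(g, sigma, rng):
    """Eq. (4): p = g + eps * Sigma_unify, with eps ~ N(0, I)."""
    return g + rng.standard_normal(g.shape) * sigma

def sdm_loss_t2v(p_t, p_v, l, tau=0.02, delta=1e-8):
    """Eq. (5): KL(pi || q) between the softmax over text-to-image cosine
    similarities (pi, Eq. 6) and the normalized match labels (q)."""
    a = p_t / np.linalg.norm(p_t, axis=1, keepdims=True)
    b = p_v / np.linalg.norm(p_v, axis=1, keepdims=True)
    sim = a @ b.T / tau
    pi = np.exp(sim - sim.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)
    q = l / l.sum(axis=1, keepdims=True)
    return (pi * np.log(pi / (q + delta))).sum(axis=1).mean()

rng = np.random.default_rng(0)
sigma = 0.1 * np.ones(16)                      # stand-in for Sigma_unify
p_t = sample_probabilistic(rng.standard_normal((8, 16)), sigma, rng)
p_v = sample_probabilistic(rng.standard_normal((8, 16)), sigma, rng)
labels = np.eye(8)                             # one positive pair per row
print(p_t.shape, p_v.shape)                    # (8, 16) (8, 16)
```

The loss is small when the similarity distribution already concentrates on the matching pairs, and large when the mass falls on mismatched ones.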
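A hedged numpy sketch of the text-to-image term of the cm-Circle loss just introduced, written for a single text anchor; γ = 64 and m = 0.35 follow the paper's reported settings, while the similarity values below are illustrative:

```python
# Sketch of the text-to-image cm-Circle term for one text anchor.
import numpy as np

def cm_circle_t2v(s_pos, s_neg, gamma=64.0, m=0.35):
    """s_pos / s_neg: cosine similarities of the anchor's positive and
    negative image-text pairs in the probabilistic feature space."""
    O_p, O_n = 1.0 + m, -m                   # similarity optimums
    d_p, d_n = 1.0 - m, m                    # margins Delta_p, Delta_n
    alpha_p = np.maximum(O_p - s_pos, 0.0)   # non-negative positive weights
    alpha_n = np.maximum(s_neg - O_n, 0.0)   # non-negative negative weights
    pos = np.exp(-gamma * alpha_p * (s_pos - d_p)).sum()
    neg = np.exp(gamma * alpha_n * (s_neg - d_n)).sum()
    return float(np.log1p(neg * pos))        # log(1 + sum_neg * sum_pos)

# Well-separated pairs give a near-zero loss; overlapping similarities are
# penalized heavily, with larger re-weighting for under-optimized pairs.
easy = cm_circle_t2v(np.array([0.95]), np.array([-0.9, -0.8]))
hard = cm_circle_t2v(np.array([0.2]), np.array([0.6, 0.5]))
print(easy < hard)   # True
```

The self-paced behavior comes from the weights: a positive pair far below its optimum (or a negative pair far above its optimum) gets a larger exponent and thus dominates the gradient.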
The designed cm-Circle loss aims to further align the global semantics of probabilistic features for positive and negative image-text pairs in a self-paced manner. Specifically, the text-to-image cm-Circle loss $\mathcal{L}^{t2v}_{cmcir}$ is formulated as Eq. (7), where $p^t_i p^v_k$ and $p^t_i p^v_j$ denote the cosine similarity of the positive and negative image-text pairs in the probabilistic feature space, respectively. $\alpha^k_p$ and $\alpha^j_n$ respectively represent the non-negative re-weighting for each positive and negative image-text pair.
$$\mathcal{L}^{t2v}_{cmcir} = \log\Bigg[1 + \sum_{j:\, y_i \neq y_j} e^{\gamma \alpha^j_n (p^t_i p^v_j - \Delta_n)} \sum_{k:\, y_i = y_k} e^{-\gamma \alpha^k_p (p^t_i p^v_k - \Delta_p)}\Bigg] \tag{7}$$
Similarly, the image-to-text cm-Circle loss $\mathcal{L}^{v2t}_{cmcir}$ can be expressed as Eq. (8) in a symmetric manner.
$$\mathcal{L}^{v2t}_{cmcir} = \log\Bigg[1 + \sum_{j:\, y_i \neq y_j} e^{\gamma \beta^j_n (p^v_i p^t_j - \Delta_n)} \sum_{k:\, y_i = y_k} e^{-\gamma \beta^k_p (p^v_i p^t_k - \Delta_p)}\Bigg] \tag{8}$$
The re-weighting factors $\alpha^k_p, \alpha^j_n$ and $\beta^k_p, \beta^j_n$ are calculated as Eq. (9), where $O_p$ and $O_n$ are the optimums of the similarity score for the positive and negative image-text pairs, respectively. The hyper-parameters are $O_p = 1 + m$, $O_n = -m$, $\Delta_p = 1 - m$, $\Delta_n = m$, where $m$ is the margin and $[\cdot]_+ = \max\{\cdot, 0\}$.
$$\alpha^k_p = \left[O_p - p^t_i p^v_k\right]_+, \quad \beta^k_p = \left[O_p - p^v_i p^t_k\right]_+, \quad \alpha^j_n = \left[p^t_i p^v_j - O_n\right]_+, \quad \beta^j_n = \left[p^v_i p^t_j - O_n\right]_+. \tag{9}$$
Finally, the bi-directional cm-Circle loss is calculated as Eq. (10), and it brings two benefits to TI-ReID. First, it focuses solely on optimizing the similarity of cross-modal positive and negative text-image pairs, preserving intra-modality structures. Secondly, it dynamically adjusts the weights of cross-modal pairs based on alignment difficulty, enhancing optimization for under-optimized image-text pairs.
$$\mathcal{L}_{cm\text{-}Circle} = \mathcal{L}^{t2v}_{cmcir} + \mathcal{L}^{v2t}_{cmcir}. \tag{10}$$
The above proposed cross-modal circle loss only offers coarse-grained semantic alignment between vision and text. Following the recent MLM-based method IRRA (Jiang and Ye 2023), as shown in Fig.
2, we use a multi-modal interaction encoder (MIE) consisting of several cross-attention and self-attention layers to model the interactions between the sequence of contextual image embeddings $f(v_i) = \{f^v_{cls}, f^v_1, \cdots, f^v_N\}$ and the sequence of contextual text embeddings $f(t_i) = \{f^t_{sos}, f^t_1, \cdots, f^t_{eos}\}$ of the masked text. The $\{r^t_{i,k}\}_{k=1}^{L}$ denote the recovered textual token embeddings after cross-modal interaction, where $L$ is the length of the text tokens.
$$\{r^t_{i,k}\}_{k=1}^{L} = \mathrm{MIE}(f(t_i), f(v_i)). \tag{11}$$
Then, masked language modeling predicts the correct vocabulary ID for the masked word tokens from the contextual image and textual embeddings by minimizing the negative log-likelihood as Eq. (12), where $M_{indexes}$ denotes the indexes of the masked positions and $w_{i,k}$ is the true vocabulary ID of the word.
$$\mathcal{L}_{MLM} = -\mathbb{E}_{i,\, k \in M_{indexes}} \log p(w_{i,k} \mid r^t_{i,k}). \tag{12}$$
However, we observe that the above-mentioned masked language modeling task solely focuses on recovering the vocabulary semantics of masked local text tokens, while ignoring the key global masked text embedding. Actually, $r^t_{i,eos}$ represents the recovered global embedding of the masked text after the cross-modality interaction (via Eq. (11)). We encourage $r^t_{i,eos}$ to encompass the complete cross-modal semantics, as achieving this objective necessitates a more comprehensive cross-modal interaction between $f(v_i)$ and $f(t_i)$. In view of this, we design a task (termed cm-GSR) to recover the cross-modal semantics of the masked global text token after the cross-modal interaction, leveraging cross-modal contrastive reconstruction as the supervisory signal. We apply the cross-modal Info-NCE loss between $r^t_{i,eos}$ and the complete image embedding $g^v_i$ to achieve the cm-GSR task, which can be expressed as Eq.
(15),
$$\mathcal{L}^{t2v}_{NCE} = -\mathbb{E}_i\left[\log \frac{\exp(\langle r^t_{i,eos}, g^v_i\rangle / \tau)}{\sum_{j=1}^{B}\exp(\langle r^t_{i,eos}, g^v_j\rangle / \tau)}\right], \tag{13}$$
$$\mathcal{L}^{v2t}_{NCE} = -\mathbb{E}_i\left[\log \frac{\exp(\langle g^v_i, r^t_{i,eos}\rangle / \tau)}{\sum_{j=1}^{B}\exp(\langle g^v_i, r^t_{j,eos}\rangle / \tau)}\right], \tag{14}$$
$$\mathcal{L}_{cm\text{-}GSR}(r^t_{i,eos}, g^v_i) = 0.5\,(\mathcal{L}^{t2v}_{NCE} + \mathcal{L}^{v2t}_{NCE}), \tag{15}$$
where $\langle r^t_{i,eos}, g^v_i\rangle$ denotes the cosine similarity between $r^t_{i,eos}$ and $g^v_i$. The cm-GSR task effectively complements the masked language modeling and promotes more comprehensive image-text interaction and semantic alignment. Joint Optimization We unify multi-modal uncertainty modeling and cross-modal semantic alignment into an end-to-end framework, and minimize the overall optimization loss $\mathcal{L}_{overall}$ for training.
$$\mathcal{L}_{overall} = \mathcal{L}_{SDM} + \mathcal{L}_{MLM} + \lambda_1 \mathcal{L}_{cm\text{-}Circle} + \lambda_2 \mathcal{L}_{cm\text{-}GSR}. \tag{16}$$
Experiments Experimental Setup CUHK-PEDES (Li et al. 2017) has 40,206 images and 80,412 textual descriptions associated with 13,003 identities. The training set has 11,003 identities with 34,054 images and 68,108 textual descriptions. The validation and test sets comprise 3,078 and 3,074 images, along with 6,158 and 6,156 textual descriptions, respectively. Both the val/test subsets have 1,000 identities. RSTPReid (Zhu et al. 2021) comprises 20,505 images, showcasing 4,101 unique identities. Each identity is represented by five images from different cameras, with every image being paired with two textual descriptions. The dataset utilizes 3,701, 200 and 200 identities for training, validation, and testing, respectively. ICFG-PEDES (Ding et al. 2021) is an identity-centric TI-ReID dataset, featuring 54,522 images across 4,102 unique identities. Each image corresponds to a single textual description. The dataset is divided into a training set with 34,674 images from 3,102 identities and a test set containing 19,848 images representing 1,000 identities. Evaluation Protocol. Similar to most works in TI-ReID, we report the Rank-k accuracy (k=1,5,10) and the mean Average Precision (mAP) metric. Implementation Details.
Our approach is implemented using the PyTorch framework on a single NVIDIA RTX3090 GPU (24G). Similar to the IRRA method (Jiang and Ye 2023), our model comprises a pre-trained image encoder (CLIP-ViT-B/16), a pre-trained text encoder (CLIP text Transformer), and a randomly initialized multi-modal interaction encoder. During training, all input images are resized to 384 × 128, and the patch and stride sizes are set to 16. We apply random horizontal flipping, RandAugment (Cubuk et al. 2020), and random erasing (Zhong et al. 2020) for image augmentation. The batch size is set to 64. The maximum length of the textual token sequence is 77. Our model is trained using the Adam optimizer (Kingma and Ba 2014) for 60 epochs, with a learning rate initialized at 1 × 10−5 and cosine learning rate decay. The learning rate is gradually increased from 1 × 10−6 to 1 × 10−5 over the 5 warm-up epochs. For the MUM module, both the coupling factor ω and the scale factor s are set to 0.25. The MUM module is only applied during the training phase for feature augmentation; during the testing phase, we do not use this module. The mask rate of the input text tokens during training is set to 30% for CUHK-PEDES and ICFG-PEDES, and 15% for RSTPReid. During the testing phase, the input texts are not masked. The hyper-parameters γ and m in the cm-Circle loss are empirically set to 64 and 0.35. The weight λ1 of the cm-Circle loss is set to 2.0 for ICFG-PEDES and RSTPReid, and 0.25 for CUHK-PEDES.
Method | Venue | Image Encoder | Text Encoder | R1 | R5 | R10 | mAP
GNA-RNN (Li et al. 2017) | CVPR17 | VGG | LSTM | 19.05 | - | 53.64 | -
Dual-path (Zheng et al. 2020) | TOMM20 | RN50 | RN50 | 44.40 | 66.26 | 75.07 | -
CMPM/C (Zhang and Lu 2018) | ECCV18 | RN50 | LSTM | 49.37 | - | 79.27 | -
MIA (Niu et al. 2020) | TIP20 | RN50 | GRU | 53.10 | 75.00 | 82.90 | -
PMA (Jing et al. 2020) | AAAI20 | RN50 | LSTM | 53.81 | 73.54 | 81.23 | -
ViTAA (Wang et al. 2020) | ECCV20 | RN50 | LSTM | 54.92 | 75.18 | 82.90 | 51.60
NAFS (Gao et al. 2021) | arXiv21 | RN50 | BERT | 59.36 | 79.13 | 86.00 | 54.07
DSSL (Zhu et al. 2021) | MM21 | RN50 | BERT | 59.98 | 80.41 | 87.56 | -
SSAN (Ding et al. 2021) | arXiv21 | RN50 | LSTM | 61.37 | 80.15 | 86.73 | -
LapsCore (Wu et al. 2021) | ICCV21 | RN50 | BERT | 63.40 | - | 87.80 | -
TextReID (Han et al. 2021) | BMVC21 | CLIP-RN101 | CLIP-Xformer | 64.08 | 81.73 | 88.19 | 60.08
TIPCB (Chen et al. 2022) | Neuro22 | RN50 | BERT | 64.26 | 83.19 | 89.10 | -
CAIBC (Wang et al. 2022a) | MM22 | RN50 | BERT | 64.43 | 82.87 | 88.37 | -
AXM-Net (Farooq et al. 2022) | AAAI22 | RN50 | BERT | 64.44 | 80.52 | 86.77 | 58.73
LGUR (Shao et al. 2022) | MM22 | DeiT-Small | BERT | 65.25 | 83.12 | 89.00 | -
IVT (Shu et al. 2023) | ECCVW22 | ViT-Base | BERT | 65.59 | 83.11 | 89.21 | -
CFine (Yan et al. 2022) | arXiv22 | CLIP-ViT | BERT | 69.57 | 85.93 | 91.15 | -
MCM (Wei et al. 2023) | arXiv23 | CLIP-ViT | CLIP-Xformer | 69.61 | 86.01 | 90.90 | -
IRRA (Jiang and Ye 2023) | CVPR23 | CLIP-ViT | CLIP-Xformer | 73.38 | 89.93 | 93.71 | 66.13
baseline (CLIP-ViT-B/16) | - | CLIP-ViT | CLIP-Xformer | 72.98 | 89.39 | 93.22 | 65.64
Ours | - | CLIP-ViT | CLIP-Xformer | 74.25 | 89.83 | 93.58 | 66.15
Table 1: Performance comparisons with state-of-the-art methods on the CUHK-PEDES dataset. R1, R5, R10 denote the Rank-1, Rank-5, Rank-10 accuracies (%); mAP is the mean average precision (%).
Method | R1 | R5 | R10 | mAP
DSSL (Zhu et al. 2021) | 39.05 | 62.60 | 73.95 | -
SSAN (Ding et al. 2021) | 43.50 | 67.80 | 77.15 | -
LBUL (Wang et al. 2022b) | 45.55 | 68.20 | 77.85 | -
TIPCB (Chen et al. 2022) | 46.60 | 71.70 | 81.00 | 36.18
IVT (Shu et al. 2023) | 46.70 | 70.00 | 78.80 | -
ACSA (Ji et al. 2022) | 48.40 | 71.85 | 81.45 | -
CFine (Yan et al. 2022) | 50.55 | 72.50 | 81.60 | -
MCM (Wei et al. 2023) | 55.35 | 77.30 | 84.25 | -
IRRA (Jiang and Ye 2023) | 60.25 | 81.30 | 88.20 | 47.52
baseline (CLIP-ViT-B/16) | 59.80 | 81.50 | 88.30 | 47.42
Ours | 63.40 | 83.30 | 90.30 | 49.28
Table 2: Performance comparisons with state-of-the-art methods on the RSTPReid dataset.
The weight λ2 of the cm-GSR loss is set to 0.5.
Method | R1 | R5 | R10 | mAP
MIA (Niu et al. 2020) | 46.49 | 67.14 | 75.18 | -
ViTAA (Wang et al. 2020) | 50.98 | 68.79 | 75.78 | -
SSAN (Ding et al. 2021) | 54.23 | 72.63 | 79.53 | -
TIPCB (Chen et al. 2022) | 54.23 | 72.63 | 79.53 | -
IVT (Shu et al. 2023) | 56.04 | 73.60 | 80.22 | -
CFine (Yan et al. 2022) | 60.83 | 76.55 | 82.42 | -
MCM (Wei et al. 2023) | 62.29 | 77.15 | 82.52 | -
IRRA (Jiang and Ye 2023) | 63.46 | 80.25 | 85.82 | 38.06
baseline (CLIP-ViT-B/16) | 63.34 | 80.21 | 85.73 | 37.88
Ours | 65.62 | 80.54 | 85.83 | 38.78
Table 3: Performance comparisons with state-of-the-art methods on the ICFG-PEDES dataset.
Comparison with State-of-the-Art Methods Results on CUHK-PEDES dataset. As shown in Table 1, our method surpasses all current state-of-the-art methods, achieving a Rank-1 accuracy of 74.25% and an mAP of 66.15%. Compared to the methods CFine (Yan et al. 2022) and IRRA (Jiang and Ye 2023), which also employ CLIP pre-trained models as image-text encoders, our method surpasses them by +4.68% and +0.87% in terms of Rank-1, respectively. It is noteworthy that our approach primarily relies on global matching and does not use complex local image-text matching such as (Niu et al. 2020; Yan et al. 2022). Additionally, we also do not leverage external knowledge such as semantic masks (Wang et al. 2020), human pose (Jing et al. 2020), and hierarchical textual parsing (Niu et al. 2020). Results on RSTPReid dataset. Note that the RSTPReid dataset presents complex indoor and outdoor scene variations, making it more challenging. The comparative results on the RSTPReid dataset are shown in Table 2. It is evident that our approach demonstrates a more notable advantage compared to other methods. We achieve a Rank-1 accuracy of 63.40% and an mAP of 49.28%, significantly surpassing the current SOTA IRRA method by approximately +3.15% Rank-1. Results on ICFG-PEDES dataset. The comparative results on the ICFG-PEDES dataset are presented in Table 3. It is noteworthy that the textual descriptions in the ICFG-PEDES dataset are more focused on individual identities and offer finer granularity. On the ICFG-PEDES dataset, our method still surpasses all existing state-of-the-art methods. We achieve a Rank-1 accuracy of 65.62%, outperforming the IRRA method by +2.16% in Rank-1 accuracy.
No. | MUM | cm-Circle | cm-GSR | R1 | R5 | R10
0 | | | | 59.80 | 81.50 | 88.30
1 | ✓ | | | 62.35 | 82.80 | 89.05
2 | | ✓ | | 61.50 | 82.30 | 88.50
3 | | | ✓ | 61.25 | 82.20 | 88.85
4 | | ✓ | ✓ | 62.45 | 82.50 | 88.95
5 | ✓ | ✓ | | 62.80 | 82.95 | 89.50
6 | ✓ | ✓ | ✓ | 63.40 | 83.30 | 90.30
Table 4: Ablation study for each proposed component of our method on the RSTPReid dataset; No.0 corresponds to the baseline.
Method | R1 | R5 | R10
baseline | 59.80 | 81.50 | 88.30
baseline + batch-level UM | 61.20 | 81.85 | 88.55
baseline + identity-level UM | 61.65 | 82.40 | 88.75
baseline + MUM (multi-granularity) | 62.35 | 82.80 | 89.05
baseline + circle loss | 60.85 | 81.85 | 88.50
baseline + cm-Circle loss | 61.50 | 82.30 | 88.50
Table 5: The detailed analysis of the multi-modal uncertainty modeling (MUM) and cross-modal circle loss (cm-Circle) on RSTPReid.
Ablation Study In this paper, we adopt the CLIP-ViT-B/16 model fine-tuned with the combination of the SDM loss $\mathcal{L}_{SDM}$ and the MLM loss $\mathcal{L}_{MLM}$ as our baseline. Extensive ablation experiments are conducted on top of this baseline to demonstrate the effectiveness of each of our proposed components. Firstly, from Tables 1, 2 and 3, we can observe that our holistic approach consistently yields significant performance improvements over the baseline on the three datasets. Compared to the baseline, our method achieves improvements of +3.6%, +2.16%, and +1.27% in Rank-1 on RSTPReid, ICFG-PEDES, and CUHK-PEDES, respectively.
This validates the effectiveness of our method for TI-ReID. Effectiveness of the multi-modal uncertainty modeling (MUM). Our MUM module serves as feature augmentation to express richer image-text semantic relationships. The effectiveness of MUM is demonstrated through experimental results involving comparisons between No.0 and No.1, No.2 and No.5, and No.4 and No.6 in Table 4. For instance, by comparing No.0 and No.1, we observe that solely applying the MUM module leads to a 2.55% Rank-1 improvement for the baseline on RSTPReid. Furthermore, in the first four rows of Table 5, we experimentally validate the advantage of MUM's coupling of batch-level and identity-level feature variances for multi-granularity uncertainty estimation. We can first see that utilizing either the coarse-grained batch-level uncertainty or the fine-grained identity-level uncertainty can enhance the baseline performance. More importantly, when coupling $\Sigma_{batch}$ and $\Sigma_{ID}$ to derive the multi-granularity uncertainty $\Sigma_{unify}$ and thus capture more comprehensive and reasonable potential variations, the performance is further improved. This clearly shows the benefits of multi-granularity uncertainty estimation for TI-ReID. Effectiveness of the cross-modal circle loss (cm-Circle). Our introduced cross-modal circle loss aims to align the global semantic features for positive and negative cross-modal image-text pairs in a self-paced manner. The effectiveness of the cm-Circle loss is demonstrated by comparing results from Table 4 between pairs of rows such as No.0 and No.2, No.3 and No.4, and No.1 and No.5. By comparing No.0 and No.2, we can see that optimizing the additional cm-Circle loss results in a 1.7% Rank-1 improvement over the baseline. We attribute this enhancement primarily to the dynamic adjustment of cross-modal pair weights in the cm-Circle loss, which enhances the alignment intensity for hard image-text pairs.
Furthermore, in the last two rows of Table 5, we compare the cm-Circle loss with the conventional circle loss for TI-ReID. We can observe that the cm-Circle loss achieves better performance. This is because the cm-Circle loss focuses exclusively on cross-modal pairs and does not optimize negative pairs within the text modality. It thus preserves the intra-modality structure and offers benefits for TI-ReID.
Effectiveness of cross-modal global semantic recovery (cm-GSR). The cm-GSR task is designed to recover the cross-modal semantics of the masked global text token after the cross-modal interaction, based on masked language modeling. We verify its effectiveness by conducting comparisons in Table 4 across pairs of rows, including No.0 and No.3, No.2 and No.4, and No.5 and No.6. As we can see, incorporating the cm-GSR task alone results in a Rank-1 improvement of 1.45% over the baseline. In addition, applying it on top of the MUM module and the cm-Circle loss further amplifies the semantic alignment capability, resulting in better performance. These results confirm the necessity of the cm-GSR task and its potential in promoting a more comprehensive image-text semantic alignment for TI-ReID.
Conclusion
This paper presents a novel method that unifies multi-modal uncertainty modeling and semantic alignment for text-to-image person Re-ID. We explicitly model the uncertainty in pedestrian images and textual descriptions, depicting image/text features with Gaussian distributions and estimating multi-granularity uncertainty by jointly using batch-level and identity-level variances. We further propose a bidirectional cross-modal circle loss to more effectively align probabilistic image and text features. Moreover, we develop the cm-GSR task to promote a more comprehensive image-text alignment. Extensive experiments on TI-ReID benchmarks show the effectiveness and superiority of our method.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7540 Acknowledgments This work is supported by the National Natural Science Foundation of China (Grant No. 62272430, Grant No. 62121002). References Chang, J.; Lan, Z.; Cheng, C.; and Wei, Y. 2020. Data uncertainty learning in face recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5710–5719. Chen, Y.; Zhang, G.; Lu, Y.; Wang, Z.; and Zheng, Y. 2022. TIPCB: A simple but effective part-based convolutional baseline for text-based person search. Neurocomputing, 494: 171–181. Chun, S.; Oh, S. J.; De Rezende, R. S.; Kalantidis, Y.; and Larlus, D. 2021. Probabilistic embeddings for cross-modal retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8415–8424. Cubuk, E. D.; Zoph, B.; Shlens, J.; and Le, Q. V. 2020. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 702–703. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Ding, Z.; Ding, C.; Shao, Z.; and Tao, D. 2021. Semantically self-aligned network for text-to-image part-aware person reidentification. arXiv preprint arXiv:2107.12666. Farooq, A.; Awais, M.; Kittler, J.; and Khalid, S. S. 2022. AXM-Net: Implicit Cross-Modal Feature Alignment for Person Re-identification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 4477–4485. Gao, C.; Cai, G.; Jiang, X.; Zheng, F.; Zhang, J.; Gong, Y.; Peng, P.; Guo, X.; and Sun, X. 2021. Contextual non-local alignment over full-scale representation for text-based person search. arXiv preprint arXiv:2101.03036. Han, X.; He, S.; Zhang, L.; and Xiang, T. 2021. Text-Based Person Search with Limited Data. In BMVC. He, K.; Zhang, X.; Ren, S.; and Sun, J. 
2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Ji, Y.; Wang, J.; Gong, Y.; Zhang, L.; Zhu, Y.; Wang, H.; Zhang, J.; Sakai, T.; and Yang, Y. 2023. MAP: Multimodal Uncertainty-Aware Vision-Language Pre-Training Model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 23262–23271. Ji, Z.; Hu, J.; Liu, D.; Wu, L. Y.; and Zhao, Y. 2022. Asymmetric cross-scale alignment for text-based person search. IEEE Transactions on Multimedia. Jiang, D.; and Ye, M. 2023. Cross-Modal Implicit Relation Reasoning and Aligning for Text-to-Image Person Retrieval. In CVPR. Jing, Y.; Si, C.; Wang, J.; Wang, W.; Wang, L.; and Tan, T. 2020. Pose-guided multi-granularity attention network for text-based person search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 11189–11196. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Li, S.; Xiao, T.; Li, H.; Zhou, B.; Yue, D.; and Wang, X. 2017. Person search with natural language description. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1970–1979. Li, X.; Dai, Y.; Ge, Y.; Liu, J.; Shan, Y.; and Duan, L.-Y. 2022. Uncertainty modeling for out-of-distribution generalization. arXiv preprint arXiv:2202.03958. Niu, K.; Huang, Y.; Ouyang, W.; and Wang, L. 2020. Improving description-based person re-identification by multigranularity image-text alignments. IEEE Transactions on Image Processing, 29: 5542–5556. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Sarafianos, N.; Xu, X.; and Kakadiaris, I. A. 2019. Adversarial representation learning for text-to-image matching. 
In Proceedings of the IEEE/CVF international conference on computer vision, 5814–5824. Shao, Z.; Zhang, X.; Fang, M.; Lin, Z.; Wang, J.; and Ding, C. 2022. Learning Granularity-Unified Representations for Text-to-Image Person Re-identification. In Proceedings of the 30th ACM International Conference on Multimedia, 5566–5574. Shu, X.; Wen, W.; Wu, H.; Chen, K.; Song, Y.; Qiao, R.; Ren, B.; and Wang, X. 2023. See finer, see more: Implicit modality alignment for text-based person retrieval. In Computer Vision–ECCV 2022 Workshops: Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part V, 624–641. Springer. Sun, Y.; Cheng, C.; Zhang, Y.; Zhang, C.; Zheng, L.; Wang, Z.; and Wei, Y. 2020. Circle loss: A unified perspective of pair similarity optimization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 6398–6407. Wang, Z.; Fang, Z.; Wang, J.; and Yang, Y. 2020. Vitaa: Visual-textual attributes alignment in person search by natural language. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XII 16, 402–420. Springer. Wang, Z.; Zhu, A.; Xue, J.; Wan, X.; Liu, C.; Wang, T.; and Li, Y. 2022a. CAIBC: Capturing All-round Information Beyond Color for Text-based Person Retrieval. In Proceedings of the 30th ACM International Conference on Multimedia, 5314–5322. Wang, Z.; Zhu, A.; Xue, J.; Wan, X.; Liu, C.; Wang, T.; and Li, Y. 2022b. Look Before You Leap: Improving Text-based Person Retrieval by Learning A Consistent Cross-modal Common Manifold. In Proceedings of the 30th ACM International Conference on Multimedia, 1984–1992. Wei, D.; Zhang, S.; Yang, T.; and Liu, J. 2023. Calibrating Cross-modal Feature for Text-Based Person Searching. arXiv preprint arXiv:2304.02278. Wu, Y.; Yan, Z.; Han, X.; Li, G.; Zou, C.; and Cui, S. 2021. LapsCore: language-guided person search via color reasoning.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1624–1633. Yan, S.; Dong, N.; Zhang, L.; and Tang, J. 2022. CLIP-Driven Fine-grained Text-Image Person Re-identification. arXiv preprint arXiv:2210.10276. Ye, M.; Shen, J.; Lin, G.; Xiang, T.; Shao, L.; and Hoi, S. C. H. 2022. Deep Learning for Person Re-Identification: A Survey and Outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6): 2872–2893. Yu, T.; Li, D.; Yang, Y.; Hospedales, T. M.; and Xiang, T. 2019. Robust person re-identification by modelling feature uncertainty. In Proceedings of the IEEE/CVF international conference on computer vision, 552–561. Zhang, Y.; and Lu, H. 2018. Deep cross-modal projection learning for image-text matching. In Proceedings of the European conference on computer vision (ECCV), 686–701. Zheng, Z.; Zheng, L.; Garrett, M.; Yang, Y.; Xu, M.; and Shen, Y.-D. 2020. Dual-path convolutional image-text embeddings with instance loss. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 16(2): 1–23. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; and Yang, Y. 2020. Random erasing data augmentation. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 13001–13008. Zhu, A.; Wang, Z.; Li, Y.; Wan, X.; Jin, J.; Wang, T.; Hu, F.; and Hua, G. 2021. DSSL: deep surroundings-person separation learning for text-based person retrieval. In Proceedings of the 29th ACM International Conference on Multimedia, 209–217.
Mining Gaze for Contrastive Learning toward Computer-Assisted Diagnosis Zihao Zhao1*, Sheng Wang1,2,3*, Qian Wang1,4, Dinggang Shen1,3,4† 1School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China 2School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China 3Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China 4Shanghai Clinical Research and Trial Center, Shanghai, China {zhaozh2022, qianwang, dgshen}@shanghaitech.edu.cn, wsheng@sjtu.edu.cn Abstract Obtaining large-scale radiology reports can be difficult for medical images due to various reasons, limiting the effectiveness of contrastive pre-training in the medical image domain and underscoring the need for alternative methods. In this paper, we propose eye-tracking as an alternative to text reports, as it allows for the passive collection of gaze signals without disturbing radiologist’s routine diagnosis process. By tracking the gaze of radiologists as they read and diagnose medical images, we can understand their visual attention and clinical reasoning. When a radiologist has similar gazes for two medical images, it may indicate semantic similarity for diagnosis, and these images should be treated as positive pairs when pre-training a computer-assisted diagnosis (CAD) network through contrastive learning. Accordingly, we introduce the Medical contrastive Gaze Image Pre-training (McGIP) as a plug-and-play module for contrastive learning frameworks. McGIP uses radiologist’s gaze to guide contrastive pre-training. We evaluate our method using two representative types of medical images and two common types of gaze data. The experimental results demonstrate the practicality of McGIP, indicating its high potential for various clinical scenarios and applications. Introduction Gaze is a rich bio-signal that provides information on where an individual’s eyes are directed. 
The collection of gaze data has significantly advanced in recent years in terms of ease, cost (typically under 300 USD), and speed (Elfares et al. 2023; Uppal, Kim, and Singh 2023; Valliappan et al. 2020; Wan et al. 2021). Gaze data has been extensively researched and utilized across multiple fields, such as marketing (Wedel and Pieters 2017), robotics (Aronson and Admoni 2022; Aronson, Almutlak, and Admoni 2021; Palinko et al. 2016; Biswas et al. 2022; Manzi et al. 2020), and virtual reality (Hu et al. 2019, 2021a; Matthews, Uribe-Quevedo, and Theodorou 2020; Hu et al. 2021b; Hu 2020). In addition to these fields, eye tracking has also gained attention in the medical imaging domain as a low-cost and convenient tool (Wang et al. 2022; Ma et al. 2023; Karargyris et al. 2021). For example, the Gaze Meets ML Workshop, endorsed by MICCAI, was held in 2022 to explore the application of gaze data in medical image analysis (Organizers 2022). One advantage of eye trackers in clinical medical imaging settings is that they do not require radiologists to open additional software programs or tools. Instead, the eye tracker can be easily attached beneath the monitor, allowing the radiologist to continue working with their existing tools and software. This can save time and effort compared to traditional annotation tools like drawing masks, circles, or boxes, which require radiologists to switch between different programs and interfaces.
*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: For contrastive pre-training, positive pairs are often only constructed between an image and its augmented version. In our McGIP, images with similar gaze patterns when read by a radiologist are also considered as positive pairs and are pulled closer in the latent space.
Moreover, eye tracking can provide additional data that cannot be captured by traditional annotation tools, such as insight into the radiologist’s attentional processes and decision-making strategies.
Figure 2: We show examples, with images of similar semantics corresponding to similar gaze patterns. On the left, there are four breast mammographic images, among which two are benign masses (green boxes) and two are malignant masses (blue boxes). The distributions of gaze points are similar across the two benign masses, and also across the two malignant masses. On the right, there are two dental X-ray images of different patients. The yellow and red boxes indicate wisdom teeth on the upper and lower jaws, respectively. Across the two images, teeth at the same location have similar gaze heatmaps, corresponding to shared anatomical roles and the underlying image semantics.
In order to utilize this information in gaze, contrastive learning is a natural framework to choose, since it has already been used successfully to mine information from cross-modality data (Radford et al. 2021). In contrastive pre-training, images sharing similar semantics should be considered positive pairs, and vice versa. Conventional approaches (Chen et al. 2020) create positive pairs by randomly augmenting an image twice, as illustrated in Figure 1. 1) One straightforward improvement to generate better positive pairs is to use radiological reports that describe lesion locations (Wiehe et al. 2022; Seibold et al. 2022; Vu et al. 2021). However, accessing a large set of reports is not always easy (Johnson et al. 2019). 2) Another option is to use classification labels, which are more commonly available.
However, generating positive pairs based on these labels presents a problem: since medical images have very few classes, there will be too many positive pairs in a contrastive batch, resulting in a collapsed representation (Grill et al. 2020). 3) Moreover, classification labels in medical image analysis, such as BI-RADS (Liberman and Menell 2002), reflect severity rather than visual pattern. For instance, two BI-RADS 3 mammograms may have vastly different lesions, e.g., a small calcification and a large mass. A positive pair of these two images will not lead to a good representation. In this work, we propose Medical contrastive Gaze Image Pre-training (McGIP) to use radiologist’s gaze to generate additional positive pairs for medical images. As a substitute for radiological reports or diagnosis labels, gaze data are 1) easy to access, 2) highly variant, and 3) directly related to the visual pattern of each lesion. Originating from observations during gaze collection, we found that medical images corresponding to similar gaze patterns, when read by a radiologist, often share semantics and thus should be treated as positive pairs and drawn closer in the latent space, as illustrated in Figure 1. Specifically, a radiologist delivers similar gaze patterns when presented with medical images of the same semantic type. Two examples in Figure 2 illustrate our observation. On the left of Figure 2, we show four examples of breast mammography corresponding to benign (in green boxes) and malignant (in blue boxes) masses, respectively. When looking at the exemplar benign masses in green boxes (BI-RADS 3, with cancer probability less than 2%), the radiologist often shows a more “scattered” distribution of gaze points. In contrast, when looking at malignant masses (BI-RADS 4C, with cancer probability 50–95%, and BI-RADS 5, with cancer probability at least 95%), the radiologist tends to have a much more “centered” gaze pattern. On the right of Figure 2, we show two dental panoramic X-ray images.
When zooming into tooth #1 (denoted with the yellow box), one can notice that the gaze heatmaps are always similar when the radiologist views images from two different patients. Meanwhile, the gaze heatmaps of tooth #32 (denoted with the red boxes) are also similarly low in magnitude across patients, implying little attention devoted to the molars by the radiologist. Other clinical studies also report that different types of abnormality can lead to different gaze patterns (Kundel et al. 2008; Voisin et al. 2013). We design different schemes to properly measure gaze similarity under different conditions, so that our proposed method can be generalized to various types of medical images. In summary, our main contributions are as follows.
• To the best of our knowledge, McGIP is the first work to utilize human gaze as an alternative to medical reports to guide contrastive pre-training.
• We investigate three schemes of gaze similarity evaluation, to serve different types of medical images and also different representations of gaze data.
• We validate McGIP on two very different medical image diagnosis tasks of breast mammography and dental panoramic X-rays. The performance shows its effectiveness and generalizability for potential applications.
The code implementation of our method is released at https://github.com/zhaozh10/McGIP.
Related Works
In this section we first introduce gaze and its application in radiology. Then we briefly cover the topic of contrastive learning and the corresponding positive pair selection.
Gaze in Radiology
Visual attention has proven to be a useful tool to understand and interpret a radiologist’s reasoning and clinical decisions. Carmody, Nodine, and Kundel (1981) published one of the first eye-tracking studies in the field of radiology, where they studied the detection of lung nodules in chest X-ray films.
In mammography, a strong correlation has been found between gaze patterns and lesion detection performance (Kundel et al. 2008; Voisin et al. 2013). Recently, studies have started to investigate the potential of gaze in medical image analysis from the deep learning perspective. Mall, Brennan, and Mello-Thoms (2018) and Mall, Krupinski, and Mello-Thoms (2019) investigated the relationship between human visual attention and CNNs in finding missed cancers in mammography. Karargyris et al. (2021) developed a dataset of chest X-ray images and gaze coordinates. They used a multi-task framework to simultaneously perform classification for diagnosis and prediction of the radiologists’ gaze heatmap. Wang et al. (2022) proposed Gaze-Attention Net, which uses gaze as extra supervision in addition to ground-truth labels.
Contrastive Learning
Large-scale contrastive pre-training has become popular due to its generalizability to many scenarios and robustness against overfitting (Radford et al. 2021). There have been several attempts to utilize contrastive pre-training in medical image analysis. Sowrirajan et al. (2021) proposed MoCo-CXR to produce models with better representations and initialization for detecting abnormalities in chest X-rays. Azizi et al. (2021) utilized multiple images of the underlying pathology per patient to construct more informative positive pairs for multi-instance contrastive learning. These works adopted image-augmentation-based, semantic-unaware strategies to generate positive pairs. In the early days of contrastive learning, a good representation required a large number of negative pairs in a batch (Chen et al. 2020; He et al. 2020). More recently, however, negative pairs have been shown to be less necessary for learning a good representation. That is, the number of negative pairs may have a limited influence on the representation quality when the framework is designed properly (Caron et al. 2021).
In this paper, the roles of positive pairs and their impact on the learned representations will also be our focus. While it is critical to design positive pairs in contrastive learning, most existing frameworks apply semantic-unaware data augmentation adapted from conventional supervised learning (He et al. 2020; Chen et al. 2020; Grill et al. 2020). Recent studies have found that a semantic-aware contrastive learning process can perform better. Selvaraju et al. (2021) proposed CAST, which uses unsupervised saliency maps to sample the crops. Peng et al. (2022) proposed ContrastiveCrop for semantic-aware cropping augmentation.
Method
This section first discusses the collection and processing of gaze signals. Then, we propose different gaze similarity evaluation schemes for structured and unstructured medical images. Finally, we use gaze similarity to appropriately generate positive pairs and integrate them with multiple contrastive pre-training frameworks.
Gaze Collection and Processing
Gaze collection can be a seamless process when radiologists conduct routine diagnoses. Specifically, we used a Tobii Pro Nano remote eye-tracker to collect binocular gaze data at 60 Hz. A radiologist with ten years of experience was invited to read and diagnose images on a computer, e.g., from a breast mammography dataset. A graphical user interface was developed to adapt to the typical clinical workflow, offering multi-window viewing and interactive operations such as zooming and contrast adjustment. In this way, the gaze data can be collected in a nearly real diagnosis environment for the radiologist, which reduces interference during collection (Ma et al. 2023). The recorded eye-tracking data consist of a temporal sequence of points, each of which comes with a location and a timestamp.
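Each recorded sample is thus a (timestamp, x, y) triple, and such a stream can be reduced to fixation points with a simple dispersion-threshold pass. The sketch below is illustrative only, not the paper's exact preprocessing (which uses the toolbox of Ma et al. 2023); the `max_dispersion` and `min_duration` thresholds are assumed values.

```python
# Minimal dispersion-threshold (I-DT style) fixation detection.
# Illustrative sketch; thresholds are assumed, not from the paper.

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """samples: list of (t, x, y) gaze samples, t in seconds.
    Returns fixations as (centroid_x, centroid_y, duration).
    Points staying within `max_dispersion` pixels for at least
    `min_duration` seconds form one fixation; the rest of the
    samples are treated as saccade points."""
    fixations, window = [], []

    def flush(win):
        if win and win[-1][0] - win[0][0] >= min_duration:
            cx = sum(p[1] for p in win) / len(win)
            cy = sum(p[2] for p in win) / len(win)
            fixations.append((cx, cy, win[-1][0] - win[0][0]))

    for t, x, y in samples:
        window.append((t, x, y))
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion:
            # Window became too spread out: close it and restart
            # from the point that broke the dispersion bound.
            flush(window[:-1])
            window = window[-1:]
    flush(window)
    return fixations
```

The fixation centroid and duration computed here are exactly the quantities the paper keeps per fixation point, while the discarded fast-moving samples correspond to saccades.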
The gaze points need to be further categorized into saccade points (fast eye movement) and fixation points (yellow dots in Figure 3, denoted as fij for the multi-match algorithm (Dewhurst et al. 2012)). Specifically, the centroid of a small cluster of the raw gaze points is considered a fixation point, with the time-lapse of the cluster as its duration. Note that the saccade points indicate rapid eye movement, which corresponds to object searching in a large field-of-view for the human visual system. Thus, the saccade points carry global features of the images. In contrast, the locations and durations of fixation points, corresponding to the radiologist’s visual focus on specific regions in the image, can reveal local features. A patch containing only normal tissue may have fewer than ten fixation points, while a lesion patch can have many more. The gaze points can be expressed by either a gaze sequence (preserving both temporal and spatial information of the gaze) or a gaze heatmap (providing the spatial distribution only).
Gaze Similarity Evaluation for Different Scenarios
It is pivotal to evaluate gaze similarity in our work, as images with similar gaze patterns are presumably close to each other in semantics. We roughly divide medical images into two categories, namely structured and unstructured images. In structured images, the patients are typically well positioned and imaged following strict clinical protocols, and the radiologist’s gaze tends to be similar in the same regions across different images. The reason is that those regions typically correspond to the same anatomical structure, as illustrated by the example on the right of Figure 2. The unstructured images, in comparison, are very different – the patients
Downsample ̅𝐺∈ℝ!×#, 𝐷∈ℝ!×!, 𝐷= ̅𝐺: , 2: 9 − ̅𝐺[: , 1: 8] 2. Binary hash encoding Adjacent difference D 8 9 8 8 Unstructured image + Unstructured image + gaze sequence 𝑃𝑜𝑠𝑖𝑡𝑖𝑜𝑛: ||𝑓!! −𝑓"!||" 𝑓!! 𝑓"! Multi-match dHash Gaze moment 𝜇)* = # +, , 𝑥−̅𝑥) 𝑦−(𝑦*𝑓(𝑥, 𝑦)𝑑𝑥𝑑𝑦 m-./0 = [𝜇11, 2!"32"! 2"" ! ] weight =𝜇11 moment of inertia = 𝜇41 + 𝜇14 𝑓(𝑥, 𝑦) H = 𝐷> 0, ̅𝐺 H Hash H ∈𝔹!×!, H = H . flatten() 𝑓!! 𝑓!" 𝑓!# 𝑙!! 𝑙!" 𝑓"! 𝑓"" 𝑓"# 𝑙"! 𝑙"" String editing 𝑙!# 𝑓!$ Figure 3: Three schemes to compute gaze similarity for different image types and gaze representations. On the left, for unstructured images, we extract five features for each gaze sequence, and calculate the inter-sequence similarity by the multi-match algorithm (Dewhurst et al. 2012). In the middle, also for unstructured images, we use heatmap as gaze representation, and calculate the similarity by gaze moment. On the right, for structured images, we dHash each heatmap into an 8 × 8 code. are not imaged with predefined position, and the images, e.g. mammography and pathological image, are usually interpreted at patch-level. In this case, the global anatomical structure is not a major clue to support the diagnosis, as shown in the left of Figure 2. For a batch of images [x1, x2, ..., xn], we assume A affinity matrix for gaze similarity. In this work, we investigate three different ways to evaluate gaze similarity for different categories of medical images. For the unstructured images, we utilize both gaze sequence (in the form of [time, location] for each fixation point) and gaze heatmap, as the two data formats are both commonly adopted. And for the structured images, we propose to evaluate the gaze similarity by referring to the gaze heatmap. Unstructured Image + Gaze Sequence. Holmqvist et al. (2011) described gaze sequences from five different features, i.e. shape, length, direction, position and duration. 
We further calculate the inter-sequence gaze similarity from these five features based on multi-match (Dewhurst et al. 2012). That is, the comparison between two gaze sequences of varying lengths is considered a string-editing problem, where the minimum editing cost serves as the dissimilarity. Assume two gaze sequences G1 and G2 contain 3 and 4 fixation points, respectively, as shown in Figure 3. f11 and f21 are two fixation points in G1 and G2, l11 and l21 are offsets between two consecutive fixations, and d11 and d21 indicate the durations of f11 and f21. The inter-point duration editing cost is the relative duration difference, i.e., |d11 − d21|/max(d11, d21). We thus construct Sdur ∈ R^{4×3} based on the pairwise duration editing costs. The minimum duration editing cost is the minimum travel cost from the top left of Sdur to the bottom right, which can be formulated as a classic dynamic programming problem. Analogously, the position, shape, length, and direction costs can also be computed through the corresponding editing costs, as shown in Figure 3. Multi-match first measures the similarity between two gaze sequences from the five aforementioned aspects. The overall similarity, denoted by A12, is then calculated by weighted summation. Similarly, Aij for gaze sequences Gi and Gj is computed using the same approach.
Unstructured Image + Gaze Heatmap. An even more common way to represent gaze data is to use the heatmap, which is generated by convolving the raw gaze points with Gaussian filters (Le Meur and Baccino 2013). Based on the heatmaps, we adopt image moments to measure their similarity. In general, the family of moments is defined as

\mu_{pq} = \iint_{-\infty}^{\infty} (x - \bar{x})^p (y - \bar{y})^q f(x, y)\, dx\, dy, \quad (1)

where f(x, y) is the gaze heatmap, (\bar{x}, \bar{y}) is the centroid of f(x, y), and p, q ∈ N define the order of the moment.
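On a discrete heatmap, the integral in Eq. (1) becomes a double sum over pixels; a minimal sketch:

```python
import numpy as np

def central_moment(heatmap, p, q):
    """Discrete version of the central moment mu_pq in Eq. (1):
    sum over pixels of (x - x_bar)^p (y - y_bar)^q f(x, y),
    with (x_bar, y_bar) the intensity-weighted centroid."""
    f = np.asarray(heatmap, dtype=float)
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    total = f.sum()
    x_bar = (x * f).sum() / total  # centroid of the heatmap
    y_bar = (y * f).sum() / total
    return ((x - x_bar) ** p * (y - y_bar) ** q * f).sum()
```

For instance, `central_moment(f, 0, 0)` gives the total mass µ00, while µ20 + µ02 gives the moment of inertia used in the dispersion measure that follows.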
Then, the first invariant of the Hu moments (Hu 1962) is adopted here to measure the dispersion of the heatmap in both row and column directions:

\phi_1 = \frac{\mu_{20} + \mu_{02}}{\mu_{00}^2}, \quad (2)

where \mu_{20} + \mu_{02} is the moment of inertia. We also use \mu_{00}, which is the weight moment (the total gaze duration in our case). Thus, the gaze is described by its moment vector m_gaze = [\mu_{00}, \phi_1]. Note that gaze moments are introduced here to measure the difference between heatmaps. The affinity between two moment vectors m^i_gaze and m^j_gaze is defined as

A_{ij} = \alpha \left(1 - \delta(\mu_{00}^i, \mu_{00}^j)\right) + (1 - \alpha)\left(1 - \delta(\phi_1^i, \phi_1^j)\right), \quad (3)

where each term of the affinity is one minus the normalized L1 distance \delta(x, y) = \frac{|x - y|}{\max(x, y)}, and \alpha is a manually selected hyper-parameter.
Structured Image + Gaze Heatmap. In this circumstance, we adopt dHash (Maharana 2016), a widely used image matching algorithm, to embed a heatmap into a 64-bit hash code and then measure the similarity. As in Figure 3, dHash first resizes the gaze heatmap G into \bar{G} ∈ R^{8×9}, thus filtering out much of the high-frequency content yet preserving lasting fixations. In each row, we compute the difference between adjacent pixels and get D ∈ R^{8×8}. Here, the computation happens in the row direction, because the teeth in our exemplar dataset are aligned in this way. The binary mask H ∈ B^{8×8} is then encoded by thresholding D > 0. The similarity Aij between two heatmaps can thus be measured as the cosine similarity of the flattened H’s.
Contrastive Pre-training with Gaze
Typical contrastive learning methods usually construct one positive pair for each sample, while the proposed McGIP can construct several positive pairs for each sample in a batch. For a batch of images [x1, x2, ..., xn], A is the affinity matrix for gaze similarity. Denote the constraint function in contrastive learning as CST(·), which represents, for example, InfoNCE in MoCo and the L2 distance in BYOL.
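In code, pairing CST(·) with the gaze affinity might look like the following sketch. The choice of CST here (squared L2 distance between normalized embeddings, BYOL-style) and the normalization step are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def mcgip_batch_loss(z, z_hat, affinity, t=0.7):
    """Gaze-guided batch objective: average a pairwise constraint
    CST over all (i, j) whose gaze affinity A_ij reaches threshold t.
    z: (B, D) embeddings of the images; z_hat: (B, D) embeddings of
    their augmented views; affinity: (B, B) gaze affinity matrix.
    CST is taken as squared L2 distance between normalized features
    (an assumed, BYOL-style choice for illustration)."""
    z = torch.nn.functional.normalize(z, dim=1)
    z_hat = torch.nn.functional.normalize(z_hat, dim=1)
    cst = torch.cdist(z, z_hat) ** 2          # CST(Net(x_i), Net(x_hat_j))
    mask = (affinity >= t).float()            # indicator 1_{A_ij >= t}
    return (mask * cst).sum() / mask.sum().clamp(min=1)
```

With an identity affinity matrix this reduces to the usual one-positive-pair-per-sample objective; off-diagonal entries above the threshold add the extra gaze-derived positive pairs.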
The overall loss function for a batch is

\mathcal{L} = \frac{\sum_{i,j=1}^{n} \mathbb{1}_{A_{ij} \ge t} \cdot \mathrm{CST}(\mathrm{Net}(x_i), \mathrm{Net}(\hat{x}_j))}{\sum_{i,j=1}^{n} \mathbb{1}_{A_{ij} \ge t}}, \quad (4)

where \mathbb{1} refers to the indicator function, \hat{x}_j denotes the transformed view of x_j, and “Net” indicates the encoder of contrastive learning. The term t here is a threshold to determine whether two images are similar enough to be a positive pair. Eq. (4) serves as the optimization objective during each iteration. Moreover, gaze data inherently contain noise and uncertainty, which is especially significant for unstructured images. We correspondingly introduce a confidence score p for unstructured images. When selecting positive unstructured image pairs based on gaze heatmaps, we only consider a pair a true positive with probability p if the gaze similarity is higher than t.
Experimental Results
Datasets and Metrics
We conduct experiments on two datasets: INbreast (Moreira et al. 2012) and the Tufts dental dataset (Panetta et al. 2021). The INbreast dataset (Moreira et al. 2012) includes 410 full-field digital mammography images collected from 115 patients. We invited a radiologist with 10 years of experience to diagnose the images in this dataset, and collected the eye-movement data. The gaze data was collected using a Tobii Pro Nano eye-tracker, and pre-processed with the toolbox proposed in Ma et al. (2023). We randomly split the INbreast dataset for five-fold cross-validation, with 80% of the images for training and 20% for testing. To inspect the performance, we report the accuracy (ACC), the area-under-curve for malignant masses (M-AUC), and the area-under-curve for all three classes (AUC) on the testing data. Here M-AUC is reported separately since malignant masses are critical to diagnose: their risk of cancer can be more than 10 times higher than that of benign masses (Liberman and Menell 2002). We use multi-match and gaze moments, respectively, to calculate the gaze similarity on this dataset. The threshold of gaze similarity is set to 0.7.
For gaze moment, p is set to 0.5 in all experiments. The Tufts dataset (Panetta et al. 2021) is composed of 1000 panoramic dental X-ray images, together with processed gaze heatmaps. We choose 70% and 10% of the images for training and validation, while the remaining 20% constitute the testing set. Accuracy (ACC) and area-under-curve (AUC) are reported as the performance indicators. We use dHash to calculate the gaze similarity and set the threshold to 0.7. All the experiments are implemented with PyTorch 1.13.0 on a single NVIDIA RTX3060. Unless otherwise specified, all networks are trained for 200 epochs using the Adam optimizer with the learning rate (lr) set to 2e−5 during pre-training. Fine-tuning and linear probing for final classification are trained for 10 epochs (INbreast) and 20 epochs (Tufts) with the Adam optimizer (lr: 2e−5). All pre-training methods are initialized from ImageNet pre-trained weights.

Performance on Diagnosis Tasks
In order to demonstrate the generalizability and practicality of McGIP, we test different diagnosis tasks on the two datasets. In particular, we evaluate the fine-tuning performance, which is popularly adopted in medical image studies (Azizi et al. 2021). The results on the INbreast dataset are reported in Table 1, where three different backbone models, ResNet18 (RN18), ResNet50 (RN50), and ResNet101 (RN101), are adopted to evaluate the effectiveness of contrastive pre-training. Compared with the conventional approach of supervising the pre-training with ImageNet-1K, McGIP improves performance consistently. Moreover, compared to existing contrastive learning methods such as MoCo v2 (Sowrirajan et al. 2021) (denoted as MoCo), McGIP consistently improves with different backbones in ACC (from 83.42% to 85.41% averaged over the three backbones). The same trend is also observed for the other evaluation metrics, such as AUC and M-AUC, in most cases.
Notably, although the AUC of McGIP is slightly lower when using the RN101 backbone, it demonstrates a significant advantage in terms of M-AUC, which is a more critical metric for accurate breast cancer diagnosis. Similarly, for the classification task on the panoramic X-ray image dataset, we report the results only for BYOL with the three backbones in Table 2 due to the page limit. Still, McGIP consistently offers the best fine-tuning performance among all compared settings. In summary, McGIP effectively improves the final diagnosis performance of contrastive learning, and our method is notably agnostic to the network backbone.

Representation Quality Analysis
While the previous results confirm that McGIP can effectively boost classification performance with various contrastive learning frameworks, it is more interesting to inspect the representation quality after contrastive pre-training. We visualize the point-to-point affinity in Figure 4, which highlights the quality of the representation learned by McGIP. Specifically, several image patches are randomly selected from the testing set of INbreast and then resized to 224 × 224.
Table 1: Fine-tuning performance on the INbreast dataset of McGIP with different contrastive learning frameworks (M-AUC / AUC / ACC per backbone).

Method | RN18 | RN50 | RN101
From-scratch | 71.39±3.26 / 73.98±4.55 / 76.76±4.79 | 68.01±2.41 / 71.44±4.38 / 75.41±4.22 | 69.89±2.56 / 67.27±3.71 / 73.78±2.02
ImageNet | 83.43±1.98 / 82.38±2.84 / 80.38±2.34 | 89.73±0.89 / 86.17±1.23 / 82.97±1.08 | 88.90±2.98 / 87.50±0.89 / 85.63±2.26
MoCo | 82.19±3.05 / 84.69±2.53 / 82.43±1.71 | 89.52±2.15 / 89.44±0.90 / 81.62±1.38 | 92.28±2.86 / 91.03±1.88 / 86.22±1.58
MoCo+McGIP | 85.07±2.43 / 88.37±1.73 / 83.51±1.01 | 92.74±1.87 / 91.44±2.08 / 85.68±1.38 | 93.06±1.73 / 92.58±2.92 / 87.03±0.66
BYOL | 90.42±2.31 / 90.59±1.48 / 83.78±0.85 | 93.84±1.72 / 87.96±1.71 / 85.95±1.83 | 93.82±3.44 / 90.39±2.08 / 86.49±0.89
BYOL+McGIP | 95.83±0.63 / 94.96±1.13 / 85.14±0.85 | 97.07±0.75 / 93.80±0.79 / 87.57±1.01 | 95.46±2.67 / 90.09±3.08 / 86.76±0.54
SimSiam | 91.10±3.26 / 91.81±1.63 / 83.51±1.01 | 93.11±2.26 / 86.56±2.99 / 86.27±1.79 | 92.26±1.11 / 90.26±1.14 / 85.68±1.08
SimSiam+McGIP | 95.30±1.16 / 94.62±1.34 / 85.95±1.38 | 95.30±0.78 / 89.22±0.95 / 88.65±1.38 | 96.85±0.63 / 90.08±1.51 / 87.84±1.01

Table 2: Fine-tuning performance on the Tufts dataset of McGIP with different backbones.

Backbone | Method | AUC | ACC
RN18 | From-scratch | 47.58 | 61.00
RN18 | ImageNet | 60.26 | 60.50
RN18 | BYOL | 60.61 | 63.00
RN18 | BYOL+McGIP | 62.91 | 65.00
RN50 | From-scratch | 55.30 | 61.00
RN50 | ImageNet | 60.06 | 62.00
RN50 | BYOL | 52.12 | 63.50
RN50 | BYOL+McGIP | 61.35 | 67.50
RN101 | From-scratch | 57.61 | 61.00
RN101 | ImageNet | 59.96 | 61.50
RN101 | BYOL | 58.05 | 63.00
RN101 | BYOL+McGIP | 61.14 | 64.50

Based on the pre-trained backbone of RN50, we use linear-probing weights and derive the high-resolution feature map for each patch. Then, we randomly select a point inside the malignant mass (e.g., marked by a positive dot in Figure 4), and calculate its affinity with all other points in the patch by the cosine similarity of their corresponding feature vectors. The resulting affinity maps are shown in individual rows, while the columns correspond to different pre-training schemes.
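The point-to-map affinity described above is simply the cosine similarity between one spatial feature vector and every other location of the feature map. A minimal NumPy sketch (the `affinity_map` name is ours):

```python
import numpy as np

def affinity_map(feat, y, x):
    """Cosine similarity between the feature vector at location (y, x)
    and every spatial location of a (C, H, W) feature map."""
    c, h, w = feat.shape
    v = feat[:, y, x]                # query feature vector at the chosen point
    flat = feat.reshape(c, -1)       # (C, H*W)
    sims = (v @ flat) / (np.linalg.norm(v) * np.linalg.norm(flat, axis=0) + 1e-8)
    return sims.reshape(h, w)        # one affinity value per spatial location
```

The affinity at the query point itself is 1 by construction, and the map can be rendered as a heatmap where brighter means more similar, as in Figure 4.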
Similarly, we select negative points from the non-mass tissues and show the affinity maps on the right of Figure 4. For the affinity maps of positive points, a high concentration of affinity indicates that the selected point is highly similar to nearby points located within the mass. Pre-trained models are able to encode semantic information to a considerable extent. With the proposed approach (BYOL+McGIP), we observe that the affinity distributions for positive points are sharper than those obtained through pre-training with ImageNet or contrastive learning using the original BYOL. This observation suggests that our approach is better suited for accurately representing masses. Our method's superiority is especially pronounced in cases where the negative points are far apart, suggesting that it could be a promising approach for classifying challenging cases in mammography. Specifically, the ImageNet pre-trained model is not good at grouping negative points that are relatively far away, as the corresponding affinity maps deliver pulse-like patterns. Although BYOL shows slight improvements over ImageNet, it still fails to group some cases, as indicated by the last row in Figure 4. In contrast, our method exhibits significant improvements. In the last column of the figure, the boundaries between high- and low-affinity regions are clearly defined and mostly aligned with the mass contours. We attribute this superiority to the spatial semantics introduced via gaze signals, which contribute to better classification performance, as evidenced by Table 1.

Table 3: Performance comparison on the INbreast dataset between the gaze-heatmap and gaze-sequence formats under fine-tuning (FT) and linear probing (LP).

Setting | Format | M-AUC | AUC | ACC
FT | Heatmap | 95.75±1.10 | 93.73±1.06 | 86.49±0.85
FT | Sequence | 97.07±0.75 | 93.80±0.79 | 87.57±1.01
LP | Heatmap | 82.20±2.18 | 77.11±1.54 | 77.84±1.48
LP | Sequence | 78.31±2.89 | 78.21±2.38 | 76.22±1.08

Gaze Sequence vs.
Gaze Heatmap
In the method section, we introduce two approaches for measuring gaze similarity in unstructured images: MultiMatch for gaze sequences and gaze moment for gaze heatmaps. The fine-tuning (FT) and linear-probing (LP) performance of these two approaches is compared in Table 3, based on INbreast with RN50. The two approaches perform similarly while offering different benefits. Gaze moment, a revived variant of the Hu-moment algorithm (Hu 1962), has a straightforward and elegant form. Gaze sequence preserves more information, because there are timestamps for individual spatial locations and the MultiMatch algorithm describes gaze similarity from multiple perspectives. In conclusion, we recommend that users choose between these two approaches based on their specific requirements. By offering multiple options for measuring gaze similarity, we hope to enable researchers to choose the most appropriate approach for their scenarios.

Figure 4: Image affinity analysis. We select a point from the feature map and calculate its similarity with all other points. Natural-image supervised pre-trained (ImageNet), semantic-unaware pre-trained (BYOL), and McGIP pre-trained weights are used to illustrate both positive points (abnormality) and negative points (non-abnormality). Brighter color denotes more similar points.

Gaze vs. Ground-Truth
We conducted an empirical study to evaluate the effectiveness of using gaze data as a form of weak supervision, compared to supervised contrastive learning (Khosla et al. 2020) based on ground-truth labels, using different backbone models. To assess the performance of gaze data,
we fine-tuned different backbones on both the INbreast and Tufts datasets. Our results (see Table 4) show that on the INbreast dataset, gaze data outperforms ground-truth labels, albeit with a slightly lower AUC when using the RN101 backbone. On the Tufts dataset, the advantage of gaze data is more pronounced. In summary, our findings suggest that gaze data has greater potential than ground-truth labels and may serve as an alternative to radiological reports.

Table 4: Comparison between gaze-guided and ground-truth-label-guided contrastive learning using different backbone models (default contrastive learning method: BYOL).

Method | INbreast M-AUC | INbreast AUC | INbreast ACC | Tufts AUC | Tufts ACC
RN18+GT | 93.01±1.78 | 90.86±1.32 | 84.32±0.66 | 60.53 | 59.00
RN18+Gaze | 95.83±0.63 | 94.96±1.13 | 85.14±0.85 | 62.91 | 65.00
RN50+GT | 96.02±0.88 | 88.96±1.61 | 85.14±0.85 | 59.29 | 58.00
RN50+Gaze | 97.07±0.75 | 93.80±0.79 | 87.57±1.01 | 61.35 | 67.50
RN101+GT | 94.81±2.48 | 90.60±2.97 | 85.95±1.08 | 59.17 | 63.50
RN101+Gaze | 95.46±2.67 | 90.09±3.08 | 86.76±0.54 | 61.14 | 64.50

Ablation Study
In the ablation analysis, we study how the training recipe affects the performance of the proposed McGIP. All ablation studies are conducted under the BYOL framework with an RN50 backbone. Regarding pre-training epochs, we observe a correlation between performance and the number of epochs in Figure 5(a). The performance gain becomes marginal when the number of training epochs exceeds 300, which happens much earlier than for natural images (Chen et al. 2020).

Figure 5: Linear-probing performance (backbone: RN50) on the INbreast dataset with (a) different pre-training epochs and (b) different similarity thresholds.

In Figure 5(b), we report the performance under different similarity thresholds for gaze sequences. One may notice that the performance drops when the threshold is too high (i.e., 0.8 and 0.9), because McGIP then has very few extra positive pairs in a mini-batch and degrades into normal contrastive learning.
In contrast, when the threshold is too low (i.e., <0.6), there will be too many positive pairs in a mini-batch, causing a "collapsed solution" (Grill et al. 2020).

Conclusion
In this paper, we start with the observation that images sharing similar semantics usually have similar gaze patterns. Therefore, we explore radiologists' gaze for contrastive pre-training. We propose McGIP, a simple plug-in module for existing contrastive learning, which shifts more attention to images with similar gaze patterns. Three schemes to evaluate gaze similarity are investigated for different medical scenarios. The superior performance on two different clinical tasks shows the practicability and generalizability of McGIP.

Acknowledgments
This work is supported in part by The Key R&D Program of Guangdong Province, China (grant number 2021B0101420006).

References
Aronson, R. M.; and Admoni, H. 2022. Gaze complements control input for goal prediction during assisted teleoperation. In Robotics: Science and Systems. Aronson, R. M.; Almutlak, N.; and Admoni, H. 2021. Inferring goals with gaze during teleoperated manipulation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 7307–7314. IEEE. Azizi, S.; Mustafa, B.; Ryan, F.; Beaver, Z.; Freyberg, J.; Deaton, J.; Loh, A.; Karthikesalingam, A.; Kornblith, S.; Chen, T.; et al. 2021. Big Self-Supervised Models Advance Medical Image Classification. arXiv preprint arXiv:2101.05224. Biswas, A.; Pardhi, B. A.; Chuck, C.; Holtz, J.; Niekum, S.; Admoni, H.; and Allievi, A. 2022. Mitigating causal confusion in driving agents via gaze supervision. In Aligning Robot Representations with Humans workshop @ Conference on Robot Learning. Carmody, D. P.; Nodine, C. F.; and Kundel, H. L. 1981. Finding lung nodules with and without comparative visual scanning. Perception & Psychophysics, 29(6): 594–598.
Caron, M.; Touvron, H.; Misra, I.; Jégou, H.; Mairal, J.; Bojanowski, P.; and Joulin, A. 2021. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9650–9660. Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, 1597–1607. PMLR. Dewhurst, R.; Nyström, M.; Jarodzka, H.; Foulsham, T.; Johansson, R.; and Holmqvist, K. 2012. It depends on how you look at it: Scanpath comparison in multiple dimensions with MultiMatch, a vector-based approach. Behavior Research Methods, 44(4): 1079–1100. Elfares, M.; Hu, Z.; Reisert, P.; Bulling, A.; and Küsters, R. 2023. Federated Learning for Appearance-based Gaze Estimation in the Wild. In Annual Conference on Neural Information Processing Systems, 20–36. PMLR. Grill, J.-B.; Strub, F.; Altché, F.; Tallec, C.; Richemond, P. H.; Buchatskaya, E.; Doersch, C.; Pires, B. A.; Guo, Z. D.; Azar, M. G.; et al. 2020. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733. He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729–9738. Holmqvist, K.; Nyström, M.; Andersson, R.; Dewhurst, R.; Jarodzka, H.; and Van de Weijer, J. 2011. Eye tracking: A comprehensive guide to methods and measures. OUP Oxford. Hu, M.-K. 1962. Visual pattern recognition by moment invariants. IRE Transactions on Information Theory, 8(2): 179–187. Hu, Z. 2020. Gaze analysis and prediction in virtual reality. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 543–544. IEEE. Hu, Z.; Bulling, A.; Li, S.; and Wang, G. 2021a. Ehtask: Recognizing user tasks from eye and head movements in immersive virtual reality.
IEEE Transactions on Visualization and Computer Graphics. Hu, Z.; Bulling, A.; Li, S.; and Wang, G. 2021b. Fixationnet: Forecasting eye fixations in task-oriented virtual environments. IEEE Transactions on Visualization and Computer Graphics, 27(5): 2681–2690. Hu, Z.; Zhang, C.; Li, S.; Wang, G.; and Manocha, D. 2019. Sgaze: A data-driven eye-head coordination model for real-time gaze prediction. IEEE Transactions on Visualization and Computer Graphics, 25(5): 2002–2010. Johnson, A. E.; Pollard, T. J.; Greenbaum, N. R.; Lungren, M. P.; Deng, C.-y.; Peng, Y.; Lu, Z.; Mark, R. G.; Berkowitz, S. J.; and Horng, S. 2019. MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs. arXiv preprint arXiv:1901.07042. Karargyris, A.; Kashyap, S.; Lourentzou, I.; Wu, J. T.; Sharma, A.; Tong, M.; Abedin, S.; Beymer, D.; Mukherjee, V.; Krupinski, E. A.; et al. 2021. Creation and validation of a chest X-ray dataset with eye-tracking and report dictation for AI development. Scientific Data, 8(1): 1–18. Khosla, P.; Teterwak, P.; Liu, C.; and Krishnan, D. 2020. Supervised contrastive learning. NeurIPS, 33: 18661–18673. Kundel, H. L.; Nodine, C. F.; Krupinski, E. A.; and Mello-Thoms, C. 2008. Using gaze-tracking data and mixture distribution analysis to support a holistic model for the detection of cancers on mammograms. Academic Radiology, 15(7): 881–886. Le Meur, O.; and Baccino, T. 2013. Methods for comparing scanpaths and saliency maps: strengths and weaknesses. Behavior Research Methods, 45(1): 251–266. Liberman, L.; and Menell, J. H. 2002. Breast imaging reporting and data system (BI-RADS). Radiologic Clinics of North America, 40: 409–430. Ma, C.; Zhao, L.; Chen, Y.; Wang, S.; Guo, L.; Zhang, T.; Shen, D.; Jiang, X.; and Liu, T. 2023. Eye-gaze-guided vision transformer for rectifying shortcut learning. IEEE Transactions on Medical Imaging. Maharana, A. 2016. Application of Digital Fingerprinting: Duplicate Image Detection. Ph.D. thesis. Mall, S.; Brennan, P.
C.; and Mello-Thoms, C. 2018. Modeling visual search behavior of breast radiologists using a deep convolution neural network. Journal of Medical Imaging, 5(3): 035502. Mall, S.; Krupinski, E.; and Mello-Thoms, C. 2019. Missed cancer and visual search of mammograms: what feature-based machine-learning can tell us that deep-convolution learning cannot. In Medical Imaging 2019: Image Perception, Observer Performance, and Technology Assessment, volume 10952, 1095216. International Society for Optics and Photonics. Manzi, F.; Ishikawa, M.; Di Dio, C.; Itakura, S.; Kanda, T.; Ishiguro, H.; Massaro, D.; and Marchetti, A. 2020. The understanding of congruent and incongruent referential gaze in 17-month-old infants: an eye-tracking study comparing human and robot. Scientific Reports, 10(1): 1–10. Matthews, S. L.; Uribe-Quevedo, A.; and Theodorou, A. 2020. Rendering optimizations for virtual reality using eye-tracking. In 2020 22nd Symposium on Virtual and Augmented Reality (SVR), 398–405. IEEE. Moreira, I.; Amaral, I.; Domingues, I.; Cardoso, A.; Cardoso, M. J.; and Cardoso, J. S. 2012. INbreast: toward a full-field digital mammographic database. Academic Radiology, 19: 236–248. Organizers, G. M. M. 2022. NeurIPS 2022 Gaze Meets ML Workshop. Palinko, O.; Rea, F.; Sandini, G.; and Sciutti, A. 2016. Robot reading human gaze: Why eye tracking is better than head tracking for human-robot collaboration. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5048–5054. IEEE. Panetta, K.; Rajendran, R.; Ramesh, A.; Rao, S. P.; and Agaian, S. 2021. Tufts Dental Database: A Multimodal Panoramic X-ray Dataset for Benchmarking Diagnostic Systems. IEEE Journal of Biomedical and Health Informatics, 26(4): 1650–1659. Peng, X.; Wang, K.; Zhu, Z.; and You, Y. 2022. Crafting Better Contrastive Views for Siamese Representation Learning. arXiv preprint arXiv:2202.03278. Radford, A.; Kim, J.
W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR. Seibold, C.; Reiß, S.; Sarfraz, M. S.; Stiefelhagen, R.; and Kleesiek, J. 2022. Breaking with fixed set pathology recognition through report-guided contrastive training. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2022: 25th International Conference, Singapore, September 18–22, 2022, Proceedings, Part V, 690–700. Springer. Selvaraju, R. R.; Desai, K.; Johnson, J.; and Naik, N. 2021. Casting your model: Learning to localize improves self-supervised representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11058–11067. Sowrirajan, H.; Yang, J.; Ng, A. Y.; and Rajpurkar, P. 2021. MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models. In Proc. International Conference on Medical Imaging with Deep Learning (MIDL). Uppal, K.; Kim, J.; and Singh, S. 2023. Decoding Attention from Gaze: A Benchmark Dataset and End-to-End Models. In Annual Conference on Neural Information Processing Systems, 219–240. PMLR. Valliappan, N.; Dai, N.; Steinberg, E.; He, J.; Rogers, K.; Ramachandran, V.; Xu, P.; Shojaeizadeh, M.; Guo, L.; Kohlhoff, K.; et al. 2020. Accelerating eye movement research via accurate and affordable smartphone eye tracking. Nature Communications, 11(1): 4553. Voisin, S.; Pinto, F.; Xu, S.; Morin-Ducote, G.; Hudson, K.; and Tourassi, G. D. 2013. Investigating the association of eye gaze pattern and diagnostic error in mammography. In Medical Imaging 2013: Image Perception, Observer Performance, and Technology Assessment, volume 8673, 867302. International Society for Optics and Photonics. Vu, Y. N. T.; Wang, R.; Balachandar, N.; Liu, C.; Ng, A. Y.; and Rajpurkar, P. 2021.
Medaug: Contrastive learning leveraging patient metadata improves representations for chest x-ray interpretation. In Machine Learning for Healthcare Conference, 755–769. PMLR. Wan, Z.; Xiong, C.; Chen, W.; Zhang, H.; and Wu, S. 2021. Pupil-Contour-Based Gaze Estimation With Real Pupil Axes for Head-Mounted Eye Tracking. IEEE Transactions on Industrial Informatics, 18(6): 3640–3650. Wang, S.; Ouyang, X.; Liu, T.; Wang, Q.; and Shen, D. 2022. Follow My Eye: Using Gaze to Supervise Computer-Aided Diagnosis. IEEE Transactions on Medical Imaging. Wedel, M.; and Pieters, R. 2017. A review of eye-tracking research in marketing. Review of Marketing Research, 123–147. Wiehe, A.; Schneider, F.; Blank, S.; Wang, X.; Zorn, H.-P.; and Biemann, C. 2022. Language over Labels: Contrastive Language Supervision Exceeds Purely Label-Supervised Classification Performance on Chest X-Rays. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop, 76–83.
Quad Bayer Joint Demosaicing and Denoising Based on Dual Encoder Network with Joint Residual Learning
Bolun Zheng1*, Haoran Li1*†, Quan Chen1†, Tingyu Wang1,2, Xiaofei Zhou1, Zhenghui Hu3, Chenggang Yan1
1Hangzhou Dianzi University, Xiasha No.2 Street, Hangzhou, 310018, Zhejiang, China.
2Lishui Institute of Hangzhou Dianzi University, Semiconductor Chip Industrial Park, 323000, Zhejiang, China.
3Hangzhou Innovation Institute, Beihang University, Binjiang No.18 Chuanghui Street, 310018, Zhejiang, China.
{blzheng, cgyan, chenquan}@hdu.edu.cn, lhr970315@gmail.com, wongtyu@foxmail.com, zxforchid@outlook.com, zhenghuihu2013@163.com

Abstract
The recent imaging technology of the Quad Bayer color filter array (CFA) brings a great imaging performance improvement over the traditional Bayer CFA, but also serious challenges for demosaicing and denoising in the image signal processing (ISP) pipeline. In this paper, we propose a novel dual encoder network, namely DRNet, to achieve joint demosaicing and denoising for the Quad Bayer CFA. The dual encoders are carefully designed: one is mainly constructed from a joint residual block that jointly estimates the residuals for demosaicing and denoising separately, while the other starts with a pixel modulation block that is specially designed to match the characteristics of the Quad Bayer pattern for better feature extraction. We demonstrate the effectiveness of each proposed component through detailed ablation studies. Comparison results on public benchmarks illustrate that our DRNet achieves a clear performance gain (0.38 dB over the second best) compared with the state of the art and balances performance and efficiency well. Experiments on real-world images show that the proposed method can enhance the reconstruction quality over the native ISP algorithm.

Introduction
The demands on camera capabilities have significantly increased due to the widespread use of smartphones.
However, smartphone cameras inherently suffer from the limited sensor size when aiming at high-quality imaging (Ignatov, Gool, and Timofte 2020). To overcome this limitation, researchers turn to pixel binning with non-Bayer color filter array (CFA) patterns to capture more authentic scene information within a smaller sensor size, which has been proven effective for capturing high-quality images in low-light conditions (Yoo, Im, and Paik 2015). Among these non-Bayer CFAs, the Quad Bayer CFA (shown in Figure 1(a)) is one of the most popular solutions. The Quad Bayer is an extended version of the Bayer, which expands each pixel into four sub-pixels and uses a different color filter arrangement (Kim and Kim 2019).

*These authors contributed equally.
†Corresponding authors: Haoran Li and Quan Chen
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: (a) The Bayer CFA vs. the Quad Bayer CFA. (b) PSNR vs. running time on the Urban100 dataset.

This filter array has the advantage of improving the noise performance of the image sensor by merging adjacent pixels, allowing pixels of the same color to be combined into larger pixels (Agranov et al. 2017). However, such a special structure brings great challenges for accurate texture and color reconstruction, because the gap between neighboring colors is enlarged. Directly applying even a state-of-the-art Bayer demosaicing method to a Quad Bayer image leads to a poor result (Figure 4(b)). Demosaicing and denoising are the two key steps for reconstructing high-quality RGB images from RAW images captured by a Quad Bayer CFA. Directly conducting existing demosaicing designed for the Bayer CFA on a Quad Bayer CFA image would introduce serious visual artifacts (Kim et al. 2019; Tan, Chen, and Hua 2018). Besides, denoising is also taken into consideration in remosaicing methods to further enhance the final reconstructed RGB images (Yang et al.
2022). However, these methods rarely consider a full reconstruction model for the joint demosaicing and denoising task. The additional computational cost brought by remosaicing methods should also be taken seriously due to the limited computational resources of edge devices. Compared to remosaicing algorithms, end-to-end solutions would be more appropriate. (Ignatov, Van Gool, and Timofte 2020) proposed a pyramidal CNN architecture designed for fine-grained image restoration that implicitly learns to perform all ISP steps. (Schwartz, Giryes, and Bronstein 2018) presented an end-to-end deep learning model to tackle the whole image processing pipeline simultaneously. Though these methods achieve impressive performance on Bayer CFA images, ignoring the intrinsic relationship between the Bayer CFA and the Quad Bayer CFA makes them struggle to transfer to Quad Bayer CFA images. In this work, we propose a dual-encoder network with joint residual learning (DRNet) to overcome the above limitations and reconstruct visually pleasing RGB images from RAW images captured by a Quad Bayer CFA. Specifically, our approach leverages dual encoders, a self-adaptive encoder (SAE) and a Quad Bayer encoder (QBE), to thoroughly extract information, thus resolving the difficulty of extracting information from the Quad Bayer CFA. In the SAE, we first decouple the joint demosaicing and denoising processes into two independent branches and involve them within one joint residual block (JRB). We then introduce a self-adaption block to cross the domain gap between the input RAW image and the reconstructed RGB image. In the QBE, we design a dedicated Quad Bayer feature extraction block called the pixel modulation block (PMB) to fully utilize the advantages of the Quad Bayer CFA for noise suppression and color restoration. To achieve a better trade-off between efficiency and performance (shown in Figure
1), we provide a mini version of DRNet called DRNetM, which exhibits competitive performance and efficiency among state-of-the-art methods. The contributions of this work can be summarized as follows:
• We propose a novel dual encoder architecture to construct an end-to-end approach for the Quad Bayer joint demosaicing and denoising task.
• We decompose the joint demosaicing and denoising task into two independent sub-tasks and propose a joint residual learning block that effectively solves these two sub-tasks within one block.
• To handle the special arrangement of the Quad Bayer CFA, we propose a pixel modulation block that achieves effective Quad Bayer feature extraction and improves the overall demosaicing and denoising performance.
• Through extensive experiments on public benchmarks, we demonstrate that our DRNet clearly outperforms the state-of-the-art methods and improves the reconstruction quality over the smartphone's native ISP algorithm.

Related Work
Image demosaicing aims at reconstructing RAW images captured by digital devices into visually pleasing full-pixel RGB images, which is a fundamental step in the ISP pipeline and has been extensively studied. Research on image demosaicing can be broadly classified into two categories: traditional methods (Monno et al. 2015; Hirakawa and Parks 2005; Su 2006; Zhang and Wu 2005) and learning-based methods (Zhang et al. 2019; Kim et al. 2019; Tan, Chen, and Hua 2018; Sharif, Naqvi, and Biswas 2021; Zamir et al. 2022; Xing and Egiazarian 2022). Traditional methods typically use interpolation and filtering techniques for demosaicing. Compared to traditional methods, learning-based methods bring significant improvements in scene adaptation and degradation adaptation (Zheng et al. 2020a; Hengrun et al. 2021). (Zhang et al. 2019) used residual blocks, non-local attention modules, and multi-scale feedback mechanisms to learn the local features of mosaic images. (Zamir et al.
2022) proposed two effective modules that can capture long-range pixel interactions. During the image formation process, noise is inevitable, and merely performing demosaicing on RAW images hardly achieves satisfactory results. To address this issue, researchers have concentrated on methods for joint demosaicing and denoising. Abundant research has demonstrated that such a joint process can effectively eliminate the error accumulation of sequential processing and improve the quality of the final reconstructed images (Liu et al. 2020; Xing and Egiazarian 2021). Joint demosaicing and denoising can be achieved through both traditional methods (Klatzer et al. 2016; Tan et al. 2017a; Condat and Mosaddegh 2012) and learning-based methods (Ehret et al. 2019; A Sharif, Naqvi, and Biswas 2021; Liu et al. 2020; Tan et al. 2017b; Xing and Egiazarian 2021). (Klatzer et al. 2016) achieved image demosaicing and denoising by sequentially minimizing an energy function. Recently, learning-based methods have exhibited great potential in image reconstruction tasks (Zheng et al. 2022; Chen et al. 2023; Zhao et al. 2021). (Ehret et al. 2019) obtained jointly demosaiced and denoised images by using a convolutional neural network with temporal and spatial redundancy information between consecutively captured images, thereby achieving the joint demosaicing and denoising task in an unsupervised way. (Liu et al. 2020) enhanced content awareness and strengthened the utilization of green-channel information through the guidance of the green channel and a density map. (Xing and Egiazarian 2021) proposed an end-to-end network based on residual channel attention blocks, which addresses image demosaicing, denoising, and super-resolution. (Zhang et al. 2022) proposed a color consistency network that can jointly store color information and enhance illumination. In recent years, the Quad Bayer CFA has been widely used in smartphones due to its excellent performance.
However, reconstructing RGB images from the Quad Bayer CFA remains a challenging task (Kim et al. 2019). (A Sharif, Naqvi, and Biswas 2021) proposed a deep neural network model that includes multiple attention mechanisms and combines generative adversarial models with multiple objective functions, achieving state-of-the-art performance on joint demosaicing and denoising tasks. (Zeng et al. 2023) proposed a dual-head joint demosaicing and denoising network to convert a noisy Quad Bayer CFA to a noise-free Bayer CFA. (Wu et al. 2023) addressed the demosaicing and denoising processes separately in a two-stage network structure.

Figure 2: The overall architecture of the proposed dual encoder network (DRNet). The DRNet adopts a multi-scale encoder-decoder architecture. Two encoders, the self-adaptive encoder (SAE) and the Quad Bayer encoder (QBE), formulate the encoding part. Skip connections are introduced to connect the encoding and decoding parts to produce a global residual between the RAW images and the RGB images.

Proposed Method
Reconstructing a RAW image of the Quad Bayer CFA can be approached from two perspectives: one is to remosaic it to a common Bayer CFA image and then use an existing demosaicing method to reconstruct an RGB image, while the other is to regard the whole task as an end-to-end image restoration task. In this work, we propose a new dual encoder architecture that fuses remosaicing and restoration within a unified framework.
The two encoders in the proposed architecture are designed to produce Bayer-CFA-like color-aware features and restored image features, respectively.

Overview of the DRNet
The architecture of the proposed DRNet is shown in Fig. 2. We adopt a widely used multi-scale architecture to formulate the two encoders, the Quad Bayer encoder (QBE) and the self-adaptive encoder (SAE). The QBE starts with a pixel modulation block (PMB) to re-group the RAW pixels and produce a color-aware feature, then introduces four residual-in-residual dense blocks (RRDBs) (Wang et al. 2018) to enhance the features for the following fusion and upsampling. In this way, though the resolution is reduced to 1/4 of the original, most of the color and texture information is accurately extracted, which provides sufficient guidance for the following demosaicing and denoising. The SAE produces restored image features at three scales with joint residual learning and self-adaption. In each scale, three joint residual blocks (JRBs) and one self-adaption block (SAB) are sequentially stacked. The JRB decouples joint demosaicing and denoising into two tasks and separately estimates the noise residuals and mosaic residuals to produce the restored image features. Meanwhile, the SAB is introduced to adapt the restored image features to the target domain for stable training. In the decoding stage, the outputs of the QBE and the top scale of the SAE are first fused, then gradually upsampled through the skip connections, and finally decoded into an image in the sRGB domain.

Pixel Modulation
In common image sensors, the green channel accounts for half of the pixels, while the red and blue channels each account for a quarter. The pattern of Quad Bayer sensors consists of two continuous uniform pixels in both the vertical and horizontal directions.
However, previous work has not simultaneously considered these two characteristics for the joint demosaicing and denoising task. To fully utilize the characteristics of the Quad Bayer pattern, we design a pixel modulation block (PMB) to achieve color-aware feature extraction. As shown in Figure 2, the PMB first splits the input 1-channel RAW map into four sub-maps {F_R, F_G1, F_G2, F_B}. These sub-maps are then respectively sent into the channel-wise-convolution block, outputting the corresponding color-aware feature maps {F'_R, F'_G1, F'_G2, F'_B}, which are then concatenated for the following process. Specifically, to convert a Quad Bayer RAW image into four sub-maps, we first re-group the adjacent 2 × 2 one-color pixels into a set. Assuming the left-top coordinate of a group of adjacent one-color pixels is (2i, 2j), the set of this pixel group can be defined as:

Φ(i, j) = {(m, n) | m = i + R(0, 1) ∨ n = j + R(0, 1)}   (1)

where R(0, 1) returns a random integer between 0 and 1. Then, the whole pixel groups of the RAW image can be divided as:

Φ(i, j) ⊆ R,  if i%2 = 0 ∧ j%4 = 0
Φ(i, j) ⊆ G1, if i%2 = 0 ∧ j%4 = 1
Φ(i, j) ⊆ G2, if i%2 = 1 ∧ j%4 = 0
Φ(i, j) ⊆ B,  if i%2 = 1 ∧ j%4 = 1   (2)

In this way, we can re-group the RAW image into four sub-maps, where each of them contains only one set of colors (red, green, or blue). For each sub-map, we first use a 2 × 2 convolution with a stride of 4 to fuse the adjacent 2 × 2 pixels and convert them to a 1-dimensional vector along the channel axis. Then a 1 × 1 convolution further translates the initial feature to a higher-dimensional feature. The above operation can be expressed as:

F'_c = Conv_{1×1}^{S=1}(ReLU(Conv_{2×2}^{S=4}(F_c)))   (3)

where c ∈ {R, G1, G2, B}.
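To make the re-grouping concrete, the following PyTorch sketch splits a Quad Bayer RAW map into the four sub-maps and applies the channel-wise fusion of Eq. (3). The RGGB tile layout, the compact (H/2 × W/2) sub-map layout, and the stride-2 fusion on that compact layout are simplifying assumptions made for this sketch; the paper applies the 2 × 2 convolution with stride 4 on the original coordinates.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def quad_bayer_submaps(raw):
    """Split a 1-channel Quad Bayer RAW map (B,1,H,W) into four compact
    sub-maps (B,1,H/2,W/2), one per color set {R, G1, G2, B}.
    The RGGB layout of the 4x4 tile (R block top-left) is an assumption."""
    tiles = F.pixel_unshuffle(raw, 4)  # (B,16,H/4,W/4); channel k = tile offset (k//4, k%4)
    idx = {
        "R":  [0, 1, 4, 5],     # rows 0-1, cols 0-1 of each 4x4 tile
        "G1": [2, 3, 6, 7],     # rows 0-1, cols 2-3
        "G2": [8, 9, 12, 13],   # rows 2-3, cols 0-1
        "B":  [10, 11, 14, 15], # rows 2-3, cols 2-3
    }
    # re-assemble each color's 2x2 block into a compact (H/2, W/2) map
    return {c: F.pixel_shuffle(tiles[:, i], 2) for c, i in idx.items()}

class ChannelWiseConvBlock(nn.Module):
    """Eq. (3): fuse each 2x2 one-color block, then lift to `dim` channels.
    On the compact sub-maps above, the 2x2 fusion uses stride 2."""
    def __init__(self, dim=32):
        super().__init__()
        self.fuse = nn.Conv2d(1, dim, kernel_size=2, stride=2)
        self.lift = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, f):
        return self.lift(F.relu(self.fuse(f)))
```

The output resolution is 1/4 of the input in each spatial dimension, matching the downscaling described for the QBE.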
Along the above pipeline, the color information of adjacent pixels is fully preserved, which greatly facilitates the following noise suppression and color restoration. However, the resolution reduction and texture degradation are non-negligible. To alleviate these problems, we introduce four RRDBs after fusing the color-aware feature maps for compensation before sending them to the decoding stage.

Joint Residual Learning
Image noise and color distortion belong to different degradation models (Zheng et al. 2020b). Using classic residual-based blocks to jointly handle these two types of degradation cannot achieve satisfactory performance. To tackle this problem, we separately estimate the mosaic residual and the noise residual and propose the joint residual block (JRB). As shown in Figure 2, the JRB contains two branches, the mosaic removal branch and the noise removal branch. Assuming the input of the JRB is F_JRB^in, a 3 × 3 convolution layer and a channel split layer are first introduced to produce two sub-feature maps F1 and F2, which can be expressed as:

F1, F2 = Split(Conv(F_JRB^in))   (4)

where Conv(·) and Split(·) denote the 3 × 3 convolution and the channel split. Note that F1 and F2 have the same shape. Then we generate a basic color error map S_basic as:

S_basic = P_avg(F1 · F2)   (5)

where P_avg(·) denotes global average pooling along the channel axis. We then obtain the final color error map S by sequentially applying a 1 × 1 convolution with ReLU (Glorot, Bordes, and Bengio 2011) activation and another 1 × 1 convolution with Sigmoid activation. Finally, the mosaic residual R_mosaic is calculated by:

R_mosaic = S · F_JRB^in   (6)

In the noise removal branch, since the noise is additive, we simply use a 1 × 1 convolution, a 3 × 3 convolution, and another 1 × 1 convolution to obtain the noise residual R_noise. Each of these three convolution layers is followed by a Leaky ReLU activation.
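A minimal PyTorch sketch of the JRB described above, covering Eqs. (4)-(6) and the residual combination of Eq. (7); the channel widths of the intermediate convolutions are assumptions, as the paper does not specify them.

```python
import torch
import torch.nn as nn

class JRB(nn.Module):
    """Joint residual block: separately estimates a mosaic residual and a
    noise residual. The width `ch` is an assumption for this sketch."""
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Conv2d(ch, 2 * ch, 3, padding=1)       # Eq. (4): conv, then split
        self.err = nn.Sequential(                             # color error map S
            nn.Conv2d(1, ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 1), nn.Sigmoid())
        self.noise = nn.Sequential(                           # noise removal branch
            nn.Conv2d(ch, ch, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch, 1), nn.LeakyReLU(0.2, inplace=True))

    def forward(self, x):
        f1, f2 = torch.chunk(self.head(x), 2, dim=1)          # Eq. (4)
        s_basic = (f1 * f2).mean(dim=1, keepdim=True)         # Eq. (5): avg-pool over channels
        r_mosaic = self.err(s_basic) * x                      # Eq. (6)
        r_noise = self.noise(x)                               # additive noise residual
        return x + r_mosaic + r_noise                         # Eq. (7)
```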
Then, the output of the JRB is:

F_JRB^out = F_JRB^in + R_mosaic + R_noise   (7)

where F_JRB^out denotes the output of the JRB.

Self Adaption
The source RAW domain and the target sRGB domain are substantially different, so it is necessary to introduce a domain adaption block to cross the gap between them. To this end, we introduce the self-adaption block (SAB) to achieve self-adaptive domain adaption. Assuming the input of the SAB is denoted as F_SAB^in, we first use a global max pooling layer and a global average pooling layer to distill the information from F_SAB^in along the channel axis, which can be expressed as:

F_max = P_max(F_SAB^in)   (8)
F_avg = P_avg(F_SAB^in)   (9)

where P_max(·) denotes global max pooling along the channel axis. Then we concatenate F_max and F_avg along the channel axis and calculate the adaption map A as:

A = Sigmoid(Conv(<F_max, F_avg>))   (10)

where <·, ·> denotes the concatenation operation. We can then achieve the domain adaption and produce the output of the SAB as:

F_SAB^out = A · Conv(F_SAB^in)   (11)

where F_SAB^out denotes the output of the SAB.

Loss Function
To train the proposed DRNet, we adopt a hybrid loss function composed of an adversarial loss, a color reconstruction loss, a perceptual loss, and an L1 loss, defined as:

L = λ_D L_D + L_1 + L_P + L_C   (12)

where L_D denotes the adversarial loss with hyperparameter λ_D = 10⁻⁴, L_1 denotes the L1 loss, L_P denotes the perceptual loss, and L_C denotes the color reconstruction loss. Specifically, the perceptual loss is constructed with the VGG-19 backbone, while L_C measures the color difference (ΔE) defined in CIEDE2000 (Luo, Cui, and Rigg 2001):

L_C(X, Y) = ΔE(X, Y)   (13)

where X and Y denote the output of DRNet and the corresponding ground truth.
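The self-adaption block of Eqs. (8)-(11) reduces to a small spatial-gating module; the kernel sizes of the two convolutions below are assumptions for this sketch, as the paper does not specify them.

```python
import torch
import torch.nn as nn

class SAB(nn.Module):
    """Self-adaption block: a spatial map computed from channel-wise max/avg
    pooling gates a convolved feature (Eqs. 8-11). Kernel sizes assumed."""
    def __init__(self, ch=64):
        super().__init__()
        self.attn = nn.Conv2d(2, 1, 3, padding=1)   # conv over <F_max, F_avg>
        self.proj = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        f_max = x.max(dim=1, keepdim=True).values   # Eq. (8): max over channels
        f_avg = x.mean(dim=1, keepdim=True)         # Eq. (9): mean over channels
        a = torch.sigmoid(self.attn(torch.cat([f_max, f_avg], dim=1)))  # Eq. (10)
        return a * self.proj(x)                     # Eq. (11)
```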
Table 1: Performance comparison of state-of-the-art methods for joint demosaicing and denoising on the Quad Bayer CFA. Each cell reports PSNR/SSIM; in the original typeset table, bold and underline indicate the best and second-best results, respectively.

Method (Params) | σ | Urban100 | BSD100 | MCM | KODAK | Set14 | Average
SGNet (0.912M) | 5 | 34.82/0.9634 | 38.35/0.9786 | 32.42/0.9025 | 35.63/0.9616 | 33.53/0.9301 | 34.95/0.9472
SGNet | 15 | 33.58/0.9466 | 35.41/0.9488 | 30.85/0.8691 | 33.25/0.9262 | 31.65/0.9014 | 32.95/0.9184
SGNet | 25 | 31.05/0.9137 | 32.89/0.9154 | 28.90/0.8242 | 31.19/0.8869 | 30.04/0.8682 | 30.81/0.8816
PIPNet (3.459M) | 5 | 35.66/0.9670 | 39.43/0.9803 | 35.55/0.9470 | 37.00/0.9672 | 35.47/0.9414 | 36.62/0.9605
PIPNet | 15 | 33.85/0.9481 | 36.68/0.9586 | 33.92/0.9234 | 34.68/0.9394 | 33.59/0.9198 | 34.54/0.9378
PIPNet | 25 | 32.28/0.9285 | 34.62/0.9353 | 32.45/0.8990 | 32.53/0.9046 | 32.12/0.8977 | 32.80/0.9129
Uformer (8.203M) | 5 | 37.17/0.9788 | 39.76/0.9825 | 34.61/0.9365 | 36.17/0.9636 | 34.81/0.9385 | 36.50/0.9599
Uformer | 15 | 34.57/0.9547 | 36.33/0.9525 | 33.24/0.9125 | 33.88/0.9286 | 32.90/0.9122 | 34.18/0.9321
Uformer | 25 | 32.27/0.9252 | 34.02/0.9227 | 31.87/0.8824 | 32.02/0.8898 | 31.34/0.8803 | 32.30/0.9001
PyNet (28.937M) | 5 | 35.79/0.9714 | 38.35/0.9786 | 32.42/0.9025 | 35.63/0.9616 | 33.53/0.9301 | 35.14/0.9488
PyNet | 15 | 33.58/0.9466 | 35.41/0.9488 | 30.85/0.8691 | 33.25/0.9262 | 31.65/0.9014 | 32.95/0.9184
PyNet | 25 | 31.05/0.9137 | 32.89/0.9154 | 28.90/0.8242 | 31.19/0.8869 | 30.04/0.8682 | 30.81/0.8816
SAGAN (22.557M) | 5 | 36.53/0.9767 | 39.93/0.9825 | 34.28/0.9358 | 35.67/0.9626 | 34.64/0.9382 | 36.21/0.9591
SAGAN | 15 | 34.31/0.9533 | 36.61/0.9541 | 32.96/0.9087 | 33.69/0.9278 | 32.90/0.9123 | 34.09/0.9312
SAGAN | 25 | 31.95/0.9222 | 33.91/0.9206 | 31.45/0.8746 | 31.73/0.8861 | 31.21/0.8805 | 32.05/0.8967
Restormer (26.097M) | 5 | 37.67/0.9777 | 39.39/0.9760 | 35.88/0.9493 | 36.72/0.9618 | 35.55/0.9421 | 37.04/0.9614
Restormer | 15 | 34.79/0.9556 | 36.18/0.9498 | 34.25/0.9286 | 34.45/0.9339 | 33.59/0.9201 | 34.65/0.9376
Restormer | 25 | 32.84/0.9353 | 34.31/0.9285 | 32.90/0.9078 | 32.79/0.9074 | 32.21/0.8996 | 33.01/0.9157
RSTCA (0.921M) | 5 | 37.11/0.9800 | 38.29/0.9820 | 34.83/0.9387 | 36.19/0.9661 | 34.92/0.9425 | 36.27/0.9619
RSTCA | 15 | 34.67/0.9584 | 35.41/0.9549 | 33.50/0.9165 | 34.13/0.9360 | 33.16/0.9189 | 34.17/0.9368
RSTCA | 25 | 32.57/0.9331 | 33.28/0.9250 | 32.20/0.8921 | 32.40/0.9029 | 31.74/0.8930 | 32.44/0.9092
DRNet-M (Ours, 1.405M) | 5 | 37.35/0.9810 | 39.99/0.9827 | 35.22/0.9455 | 36.39/0.9660 | 35.22/0.9442 | 36.83/0.9638
DRNet-M | 15 | 34.84/0.9580 | 36.77/0.9563 | 33.77/0.9219 | 34.28/0.9366 | 33.41/0.9200 | 34.61/0.9386
DRNet-M | 25 | 32.72/0.9334 | 34.57/0.9309 | 32.32/0.8943 | 32.53/0.9053 | 31.94/0.8948 | 32.82/0.9116
DRNet (Ours, 5.595M) | 5 | 38.05/0.9837 | 40.48/0.9849 | 36.01/0.9520 | 37.05/0.9685 | 35.73/0.9467 | 37.46/0.9670
DRNet | 15 | 35.37/0.9599 | 37.07/0.9581 | 34.41/0.9301 | 34.96/0.9399 | 33.91/0.9268 | 35.14/0.9428
DRNet | 25 | 32.95/0.9355 | 34.86/0.9342 | 32.99/0.9084 | 32.96/0.9110 | 32.32/0.9006 | 33.22/0.9178

Experiments
In this section, we first introduce the implementation details of the model settings, training details, and related datasets. Then, comparisons on public benchmarks between the proposed method and several state-of-the-art methods demonstrate the superiority of our DRNet. Moreover, a set of ablation experiments focusing on the key components and major contributions illustrates the effectiveness and importance of each of them.

Implementation Details
We formulate the proposed DRNet by setting the number of feature-map channels to [64, 128, 256] for the three scales of the SAE. Because the output of the QBE is fused at the top scale, we let it have 128 channels. The mini version of DRNet, namely DRNet-M, uses feature maps of [32, 64, 128] channels for the three scales of the self-adaptive encoder (SAE), while its Quad Bayer encoder (QBE) outputs a 64-channel feature map. To train the proposed method, we re-group the training data into a set of 128 × 128 patches with a batch size of 16. We adopt the Adam optimizer (Kingma and Ba 2014) with an initial learning rate of 3 × 10⁻⁴. The learning rate smoothly decreases to 3 × 10⁻⁵ over a total of 100 epochs.
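The optimizer setup above can be sketched as follows. The paper only says the learning rate "smoothly decreases"; cosine annealing from 3e-4 down to 3e-5 over 100 epochs is one common realization and is an assumption here, as is the placeholder parameter list.

```python
import torch

# Hypothetical parameter list standing in for the DRNet weights.
params = [torch.nn.Parameter(torch.zeros(1))]
opt = torch.optim.Adam(params, lr=3e-4)
# Assumption: cosine annealing realizes the "smooth decrease" to 3e-5.
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100, eta_min=3e-5)

for epoch in range(100):
    # ... one training epoch over 128x128 patches, batch size 16 ...
    opt.step()   # placeholder for the actual per-batch update loop
    sched.step()
```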
All the experiments are conducted with the PyTorch framework on an NVIDIA RTX 3090 GPU server. We use DIV2K (Agustsson and Timofte 2017) and Flickr2K (Timofte et al. 2017) as the training sets, while Urban100 (Cordts et al. 2016), BSD100 (Martin et al. 2001), MCM (Woo et al. 2018), Kodak (Loui et al. 2007), and Set14 are selected as the testing sets. To simulate a noisy Quad Bayer CFA image, we re-sample the RGB image according to the Quad Bayer CFA and add Gaussian noise of σ = 5/15/25 to synthesize the input. Moreover, a real-world dataset, SIDD (Abdelhamed, Lin, and Brown 2018), is introduced to further study the performance of the compared methods in real-world applications; in this case, no additional noise is added when synthesizing the inputs.

Figure 3: The visualized results of all compared methods for joint demosaicing and denoising on the Quad Bayer CFA.

Comparison to State-of-the-Arts
To demonstrate the superiority of our DRNet, we conduct a fair comparison with several state-of-the-art methods, including two demosaicing methods, PyNet (Ignatov, Van Gool, and Timofte 2020) and RSTCA (Xing and Egiazarian 2022); two joint demosaicing and denoising methods, SGNet (Liu et al. 2020) and PIPNet (A Sharif, Naqvi, and Biswas 2021); and two image restoration methods, Uformer (Wang et al. 2022) and Restormer (Zamir et al. 2022). We mainly investigate the PSNR and SSIM (Wang et al. 2004) indexes to illustrate the performance of all compared methods.
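The noisy-input synthesis described in the implementation details (re-sampling RGB onto the Quad Bayer CFA, then adding Gaussian noise) can be sketched as follows; the RGGB tile order and the 0-1 value range are assumptions of this sketch.

```python
import numpy as np

def synthesize_quad_bayer(rgb, sigma, rng=None):
    """Re-sample an RGB image (H, W, 3; floats in [0, 1]) onto a Quad Bayer
    CFA and add Gaussian noise of std `sigma` on the 0-255 scale.
    The RGGB order of the 4x4 tile is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=np.float32)
    # (row offset, col offset, RGB channel) of each 2x2 one-color block
    for r0, c0, ch in [(0, 0, 0), (0, 2, 1), (2, 0, 1), (2, 2, 2)]:
        for dr in (0, 1):
            for dc in (0, 1):
                raw[r0 + dr::4, c0 + dc::4] = rgb[r0 + dr::4, c0 + dc::4, ch]
    noisy = raw + rng.normal(0.0, sigma / 255.0, raw.shape).astype(np.float32)
    return np.clip(noisy, 0.0, 1.0)
```

For the SIDD experiments, the same re-sampling would be applied with `sigma=0`, since the real-world noise is already present in the source images.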
We provide the quantitative comparison results for all compared methods in Table 1. As shown, the proposed DRNet outperforms all the other methods on all datasets and performance indexes, beating the second-best method by 0.42/0.49/0.21 dB in terms of PSNR for σ = 5/15/25. Moreover, the mini model DRNet-M achieves the lowest MACs while keeping competitive performance, obtaining the second-best average PSNR and SSIM. These observations demonstrate the superiority of the proposed method in both performance and efficiency. We also provide the visualized details of all compared methods in Figure 3. Benefiting from the dual-encoder architecture, our DRNet well preserves the color and texture details and successfully suppresses the noise. In contrast, the other methods suffer from blurring effects and noise contamination, especially around areas of strong edges and large color changes.

Experiments on the Edge Device
Table 2 reports the inference time of several methods on the "Snapdragon 8 Gen 2" mobile processor. The transformer-based methods Restormer and RSTCA fail to process images of larger sizes due to the limited memory. Besides, our DRNet-M exhibits a good trade-off between efficiency and effectiveness, obtaining the best performance (except for Restormer) with the best efficiency.

Table 2: Comparison of inference time with r × r patch inputs on the mobile processor, and the average PSNR at σ = 5. A dash (–) denotes out of memory.

Method | r=192 | r=256 | r=384 | r=512 | PSNR (σ=5)
PIPNet | 0.084 | 0.141 | 0.293 | 0.519 | 36.62
SAGAN | 0.183 | 0.301 | 0.599 | 1.073 | 36.21
Restormer | 0.658 | 1.107 | – | – | 37.04
RSTCA | 0.078 | – | – | – | 36.27
DRNet-M | 0.049 | 0.079 | 0.149 | 0.260 | 36.83
DRNet | 0.101 | 0.178 | 0.375 | 0.665 | 37.46

Modules Verification
We conduct the ablation experiments by comparing models constructed and trained with different combinations of the proposed components. As we argued before, each of the proposed components has a clear target in the joint demosaicing and denoising task. To demonstrate these arguments, we take a fully trained DRNet as a baseline and visualize the output of partially removing each component from the complete model. In this way, we can easily distinguish the role that each component plays in finally achieving joint demosaicing and denoising.

Figure 4: The visualized results of partially removing each component from the complete model (σ = 25). (a) input. (b) PIPNet for Bayer CFA. (c) removing the mosaic removal branch. (d) removing the self-adaption block (SAB). (e) removing the noise removal branch. (f) removing the pixel modulation block (PMB). (g) DRNet. (h) GT.

Modules in QBE. Figure 4 shows the results of partially removing the QBE, the noise removal branch and the mosaic removal branch of the JRB, and the SAB. In the QBE, we design the PMB to re-group RAW pixels and generate color-aware and texture-informative features. Removing the PMB (Figure 4 (f)) leads to markedly deteriorated texture recovery compared to our DRNet (Figure 4 (g)).

Table 3: Ablation results for the noise removal branch (NRB) and the mosaic removal branch (MRB) on the Urban100 and BSD100 datasets (σ = 25), reported as PSNR/SSIM/LPIPS.

NRB | MRB | Urban100 | BSD100
– | – | 26.52/0.8405/0.169 | 28.57/0.8302/0.251
✓ | – | 31.24/0.9035/0.066 | 32.84/0.8831/0.116
– | ✓ | 29.57/0.8880/0.111 | 30.29/0.8463/0.224
✓ | ✓ | 32.95/0.9355/0.039 | 34.86/0.9342/0.063

Modules in SAE. In the self-adaptive encoder (SAE), the JRB is the key to joint demosaicing and denoising.
Image denoising is the process of removing additive noise, while demosaicing can be defined as a tone mapping that restores image colors through dot multiplication (Zheng et al. 2021). Removing the SAB makes the reconstructed images unreadable (Figure 4 (d)), which illustrates that the SAB is a key component for domain transfer.

Branches in JRB. There are two branches in the JRB, the mosaic removal branch and the noise removal branch. Removing the mosaic removal branch leads to color distortion (Figure 4 (c)). After removing the noise removal branch (Figure 4 (e)), the noise significantly increases. Besides, the quantitative ablation experiments in Table 3 show that removing either branch of the JRB leads to considerable performance degradation. Thus, all the components work as we argued in the previous sections.

Table 4: Ablation results for the PMB and RRDB on the Urban100 and BSD100 datasets (σ = 25), reported as PSNR/SSIM/LPIPS.

PMB | RRDB | Urban100 | BSD100
– | – | 29.28/0.8713/0.042 | 29.25/0.8916/0.074
✓ | – | 32.52/0.9306/0.045 | 33.90/0.9278/0.067
– | ✓ | 32.68/0.9176/0.042 | 34.17/0.9240/0.073
✓ | ✓ | 32.95/0.9355/0.039 | 34.86/0.9342/0.063

Ablation Investigation
In this subsection, we provide sets of experiments to demonstrate the effectiveness of each component in our DRNet.

Ablation Experiments for QBE. We first investigate the effectiveness of the pixel modulation block (PMB) and the RRDBs in the QBE. As shown in Table 4, removing the RRDBs leads to an inevitable performance reduction, while removing the PMB (replacing it with a pixel-shuffle (Shi et al. 2016) layer) leads to an even worse situation. If we remove both the PMB and the RRDBs, the performance drops sharply. This observation demonstrates that the QBE is indispensable. Within the QBE, the PMB plays a more important role than the RRDBs in Quad Bayer feature extraction, while the feature enhancement provided by the RRDBs should not be ignored either.
Table 5: Ablation results for the JRB and SAB on the Urban100 and BSD100 datasets (σ = 25), reported as PSNR/SSIM/LPIPS.

JRB | SAB | Urban100 | BSD100
– | – | 31.89/0.9163/0.065 | 34.31/0.9220/0.074
✓ | – | 29.62/0.8801/0.094 | 31.13/0.8188/0.279
– | ✓ | 31.24/0.9126/0.047 | 31.22/0.8970/0.120
✓ | ✓ | 32.95/0.9355/0.039 | 34.86/0.9342/0.063

Figure 5: The visualized results of ablation experiments for JRB and SAB. (a) GT. (b) removing SAB and replacing JRBs with resblocks. (c) replacing JRBs with resblocks. (d) removing SAB. (e) reconstructed with DRNet. (f) original.

Figure 6: Visualized results of Quad Bayer joint demosaicing and denoising on the SIDD dataset.

Ablation Experiments for SAE. We then investigate the effectiveness of the components in the SAE, including the joint residual block and the self-adaption block, by training models that partially remove one or both of them. Specifically, to fairly evaluate the effectiveness of the JRB, we replace the JRBs with common residual blocks (Lim et al. 2017) of comparable computation cost. As shown in Table 5 and Figure 5, the network fails to give accurate reconstruction results without the domain adaption provided by the SAB. We hold that the JRB and SAB should work in concert for the best reconstruction: without the domain adaption provided by the SAB, it is difficult for the JRB to learn an accurate residual estimation. Besides, the JRB should be the most valuable part for joint demosaicing and denoising, as it provides an explicit degradation model for jointly estimating the residuals of the noise and mosaic artifacts. By contrast, the common residual blocks cannot well handle the demosaicing and denoising tasks at the same time, though they take similar computation costs. However, somewhat unexpectedly, removing both the JRB and SAB leads to even better results than removing only one of them. This observation again indicates the importance of introducing the JRB and SAB simultaneously, because the SAB provides the necessary domain adaption to ensure the JRB can be well trained.

Figure 7: The visualized results on real data. (a)&(g) Input. (b)&(h) PIPNet. (c)&(i) SAGAN. (d)&(j) RSTCA. (e)&(k) DRNet. (f)&(l) smartphone's native ISP algorithm.

Table 6: Comparison results of Quad Bayer joint demosaicing and denoising on the SIDD dataset.

Method | Time(s) | PSNR | SSIM | LPIPS
PIPNet | 9.48 | 28.95 | 0.7704 | 0.340
SAGAN | 9.92 | 34.31 | 0.8377 | 0.209
RSTCA | 19.86 | 35.25 | 0.8502 | 0.192
DRNet | 12.58 | 35.72 | 0.8621 | 0.177

Generalization to Real Data
To further demonstrate the superiority of our method, we conduct an additional experiment on real-world data involving the SOTA methods listed in Table 6, including PIPNet (A Sharif, Naqvi, and Biswas 2021), SAGAN (Sharif, Naqvi, and Biswas 2021), and RSTCA (Xing and Egiazarian 2022). Considering that real-world noise is quite different from Gaussian noise, we adopt the real-world denoising dataset SIDD (Abdelhamed, Lin, and Brown 2018) to train all compared models for a fair comparison. We adopt the same pipeline described above to construct inputs by converting the noisy RGB images to the Quad Bayer format. The quantitative and qualitative results of the compared methods on the SIDD dataset are available in Table 6 and Figure 6. The performance gaps among the compared methods are further enlarged on real-world noisy images. PIPNet suffers from non-uniformly and irregularly distributed noise along with the demosaicing process. By contrast, decomposing the joint demosaicing and denoising task into two sub-tasks clearly promotes robustness and effectiveness on complex real-world images. We then test these methods, with their models trained on the SIDD dataset, on authentic Quad Bayer images captured by a smartphone.
We adopt the OnePlus 9R smartphone, whose camera sensor uses a Quad Bayer CFA, to capture testing images. Figure 7 shows the results of the compared methods and the smartphone's native ISP algorithm. From the visualized results, reconstructing RGB images from Quad Bayer images remains a great challenge for native ISP algorithms. Among the compared methods, our DRNet exhibits the best performance in suppressing real-world noise and preserving details (Figure 7 (e)&(k)).

Conclusion
In this paper, we propose a novel deep learning approach, DRNet, for joint demosaicing and denoising on the Quad Bayer CFA. The proposed DRNet employs a dual encoder architecture, including a Quad Bayer encoder (QBE) and a self-adaptive encoder (SAE), to jointly fuse the Quad Bayer features and restored image features. In the QBE, we propose a pixel modulation block to mostly preserve the color information and introduce RRDBs to compensate for the lost texture details. In the SAE, we propose a joint residual learning block to accomplish the denoising and demosaicing tasks within one residual block and introduce a self-adaption block to cross the domain gap between RAW inputs and sRGB outputs. We demonstrate the effectiveness and superiority of our DRNet through sufficient experiments on multiple benchmarks and real-world evaluations.

Acknowledgements
This work is supported by the National Nature Science Foundation of China No. 62001146, U21B2024, and the Key R&D Program of Zhejiang under Grant No. 2023C01044. This work is also supported by the Fundamental Research Funds for the Provincial Universities of Zhejiang under Grants No. GK239909299001-013 and GK229909299001-009.

References
A Sharif, S.; Naqvi, R. A.; and Biswas, M. 2021. Beyond joint demosaicking and denoising: An image processing pipeline for a pixel-bin image sensor. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 233–242.
Abdelhamed, A.; Lin, S.; and Brown, M. S. 2018. A high-quality denoising dataset for smartphone cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1692–1700. Agranov, G. A.; Molgaard, C.; Bahukhandi, A.; Lee, C.; and Li, X. 2017. Pixel binning in an image sensor. US Patent, 9: B2. Agustsson, E.; and Timofte, R. 2017. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 126–135. Chen, R.; Zheng, B.; Zhang, H.; Chen, Q.; Yan, C.; Slabaugh, G.; and Yuan, S. 2023. Improving dynamic HDR imaging with fusion transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 340–349. Chung, J.; Gulcehre, C.; Cho, K.; and Bengio, Y. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Condat, L.; and Mosaddegh, S. 2012. Joint demosaicking and denoising by total variation minimization. In 2012 19th IEEE International Conference on Image Processing, 2781–2784. IEEE. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3213–3223. Ehret, T.; Davy, A.; Arias, P.; and Facciolo, G. 2019. Joint demosaicking and denoising by fine-tuning of bursts of raw images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8868–8877. Glorot, X.; Bordes, A.; and Bengio, Y. 2011. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323. JMLR Workshop and Conference Proceedings. Hirakawa, K.; and Parks, T. W. 2005. Adaptive homogeneity-directed demosaicing algorithm. IEEE Transactions on Image Processing, 14(3): 360–369. Ignatov, A.; Van Gool, L.; and Timofte, R. 2020. Replacing mobile camera ISP with a single deep learning model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 536–537. Kim, I.; Song, S.; Chang, S.; Lim, S.; and Guo, K. 2019. Deep image demosaicing for submicron image sensors. Journal of Imaging Science and Technology, 63(6): 60410-1. Kim, Y.; and Kim, Y. 2019. High-Sensitivity Pixels with a Quad-WRGB Color Filter and Spatial Deep-Trench Isolation. Sensors, 19(21): 4653. Kingma, D.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. Klatzer, T.; Hammernik, K.; Knobelreiter, P.; and Pock, T. 2016. Learning joint demosaicing and denoising based on sequential energy minimization. In 2016 IEEE International Conference on Computational Photography (ICCP), 1–11. IEEE. Lim, B.; Son, S.; Kim, H.; Nah, S.; and Mu Lee, K. 2017. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 136–144. Liu, L.; Jia, X.; Liu, J.; and Tian, Q. 2020. Joint demosaicing and denoising with self guidance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2240–2249. Loui, A.; Luo, J.; Chang, S.-F.; Ellis, D.; Jiang, W.; Kennedy, L.; Lee, K.; and Yanagawa, A. 2007. Kodak's consumer video benchmark data set: concept definition and annotation. In Proceedings of the International Workshop on Multimedia Information Retrieval, 245–254. Luo, M. R.; Cui, G.; and Rigg, B. 2001.
The development of the CIE 2000 colour-difference formula: CIEDE2000. Color Research & Application: Endorsed by Inter-Society Color Council, The Colour Group (Great Britain), Canadian Society for Color, Color Science Association of Japan, Dutch Society for the Study of Color, The Swedish Colour Centre Foundation, Colour Society of Australia, Centre Français de la Couleur, 26(5): 340–350. Martin, D.; Fowlkes, C.; Tal, D.; and Malik, J. 2001. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, volume 2, 416–423. IEEE. Monno, Y.; Kiku, D.; Tanaka, M.; and Okutomi, M. 2015. Adaptive residual interpolation for color image demosaicking. In 2015 IEEE International Conference on Image Processing (ICIP), 3861–3865. IEEE. Schwartz, E.; Giryes, R.; and Bronstein, A. M. 2018. DeepISP: Toward learning an end-to-end image processing pipeline. IEEE Transactions on Image Processing, 28(2): 912–923. Sharif, S.; Naqvi, R. A.; and Biswas, M. 2021. SAGAN: Adversarial spatial-asymmetric attention for noisy Nona-Bayer reconstruction. arXiv preprint arXiv:2110.08619. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A. P.; Bishop, R.; Rueckert, D.; and Wang, Z. 2016. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1874–1883. Su, C.-Y. 2006. Highly effective iterative demosaicing using weighted-edge and color-difference interpolations. IEEE Transactions on Consumer Electronics, 52(2): 639–645. Tan, D. S.; Chen, W.-Y.; and Hua, K.-L. 2018. DeepDemosaicking: Adaptive image demosaicking via multiple deep fully convolutional networks. IEEE Transactions on Image Processing, 27(5): 2408–2419.
Tan, H.; Zeng, X.; Lai, S.; Liu, Y.; and Zhang, M. 2017a. Joint demosaicing and denoising of noisy Bayer images with ADMM. In 2017 IEEE International Conference on Image Processing (ICIP), 2951–2955. IEEE. Tan, R.; Zhang, K.; Zuo, W.; and Zhang, L. 2017b. Color image demosaicking via deep residual learning. In Proc. IEEE Int. Conf. Multimedia Expo (ICME), volume 2, 6. Timofte, R.; Agustsson, E.; Van Gool, L.; Yang, M.-H.; and Zhang, L. 2017. Ntire 2017 challenge on single image super-resolution: Methods and results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 114–125. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; and Change Loy, C. 2018. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops. Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600–612. Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; and Li, H. 2022. Uformer: A general U-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17683–17693. Woo, S.; Park, J.; Lee, J.-Y.; and Kweon, I. S. 2018. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), 3–19. Wu, X.; Fan, Z.; Zheng, J.; Wu, Y.; and Zhang, F. 2023. Learning to Joint Remosaic and Denoise in Quad Bayer CFA via Universal Multi-scale Channel Attention Network. In Karlinsky, L.; Michaeli, T.; and Nishino, K., eds., Computer Vision – ECCV 2022 Workshops, 147–160. Cham: Springer Nature Switzerland. ISBN 978-3-031-25072-9. Xing, W.; and Egiazarian, K. 2021. End-to-end learning for joint image demosaicing, denoising and super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3507–3516.
Xing, W.; and Egiazarian, K. 2022. Residual swin transformer channel attention network for image demosaicing. In 2022 10th European Workshop on Visual Information Processing (EUVIP), 1–6. IEEE. Yang, Q.; Yang, G.; Jiang, J.; Li, C.; Feng, R.; Zhou, S.; Sun, W.; Zhu, Q.; Loy, C. C.; and Gu, J. 2022. MIPI 2022 Challenge on Quad-Bayer Re-mosaic: Dataset and Report. In ECCV Workshop. Yoo, Y.; Im, J.; and Paik, J. 2015. Low-light image enhancement using adaptive digital pixel binning. Sensors, 15(7): 14917–14931. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; and Yang, M.-H. 2022. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5728–5739. Zeng, H.; Feng, K.; Cao, J.; Huang, S.; Zhao, Y.; Luong, H.; Aelterman, J.; and Philips, W. 2023. Inheriting Bayer’s Legacy-Joint Remosaicing and Denoising for Quad Bayer Image Sensor. Zhang, L.; and Wu, X. 2005. Color demosaicking via directional linear minimum mean square-error estimation. IEEE Transactions on Image Processing, 14(12): 2167–2178. Zhang, Y.; Li, K.; Li, K.; Zhong, B.; and Fu, Y. 2019. Residual non-local attention networks for image restoration. arXiv preprint arXiv:1903.10082. Zhang, Z.; Zheng, H.; Hong, R.; Xu, M.; Yan, S.; and Wang, M. 2022. Deep Color Consistent Network for Low-Light Image Enhancement. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1889– 1898. Zhao, H.; Zheng, B.; Yuan, S.; Zhang, H.; Yan, C.; Li, L.; and Slabaugh, G. 2021. CBREN: Convolutional neural networks for constant bit rate video quality enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 32(7): 4138–4149. Zheng, B.; Chen, Q.; Yuan, S.; Zhou, X.; Zhang, H.; Zhang, J.; Yan, C.; and Slabaugh, G. 2022. Constrained Predictive Filters for Single Image Bokeh Rendering. IEEE Transactions on Computational Imaging, 8: 346–357. 
Zheng, B.; Chen, Y.; Tian, X.; Zhou, F.; and Liu, X. 2020a. Implicit dual-domain convolutional network for robust color image compression artifact reduction. IEEE Transactions on Circuits and Systems for Video Technology, 30(11): 3982– 3994. Zheng, B.; Yuan, S.; Slabaugh, G.; and Leonardis, A. 2020b. Image demoireing with learnable bandpass filters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3636–3645. Zheng, B.; Yuan, S.; Yan, C.; Tian, X.; Zhang, J.; Sun, Y.; Liu, L.; Leonardis, A.; and Slabaugh, G. 2021. Learning Frequency Domain Priors for Image Demoireing. IEEE Transactions on Pattern Analysis and Machine Intelligence. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7561
2024
839
18,672
Omnipotent Distillation with LLMs for Weakly-Supervised Natural Language Video Localization: When Divergence Meets Consistency

Peijun Bao*1, Zihao Shao2, Wenhan Yang3, Boon Poh Ng1, Meng Hwa Er1, Alex C. Kot1
1Nanyang Technological University 2Peking University 3Peng Cheng Laboratory
peijun001@e.ntu.edu.sg, zh.s@pku.edu.cn, yangwh@pcl.ac.cn, {ebpng, emher, eackot}@ntu.edu.sg

Abstract

Natural language video localization plays a pivotal role in video understanding, and leveraging weakly-labeled data is considered a promising approach to circumvent the labor-intensive process of manual annotation. However, this approach encounters two significant challenges: 1) a limited input distribution, namely that the limited writing styles of the language queries, annotated by human annotators, hinder the model's generalization to real-world scenarios with diverse vocabularies and sentence structures; 2) incomplete ground truth, whose supervision guidance is insufficient. To overcome these challenges, we propose an omnipotent distillation algorithm with large language models (LLMs). The distribution of the input samples is enriched to obtain diverse multi-view versions, and a consistency loss then regularizes the agreement among their localization results for distillation. Specifically, we first train our teacher model with the proposed intra-model agreement, where multiple sub-models supervise each other. Then, we leverage the LLM to paraphrase the language query and distill the teacher model into a lightweight student model by enforcing consistency between the localization results of the paraphrased sentence and the original one. In addition, to assess the generalization of the model across different dimensions of language variation, we create extensive datasets by building upon existing ones. Our experiments demonstrate substantial performance improvements across diverse kinds of language queries.

Introduction

Natural language video localization (Gao et al.
2017) is an important yet challenging task with a wide spectrum of applications in video understanding and analysis (Sreenu and Durai 2019; Qi et al. 2021; Zhu et al. 2021; Bao et al. 2023). The goal of this task is to temporally localize the video segment (i.e., its start and end time) that best corresponds to a query sentence in an untrimmed video. Despite achieving impressive results, fully-supervised natural language video localization (Liu et al. 2018; Zhang et al. 2019a,b, 2020a; Wang, Ma, and Jiang 2020; Bao and Mu 2022) requires laborious manual annotation of temporal moment boundaries, which is unscalable to real-world settings.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Existing datasets suffer from limited sentence structures and vocabulary in their natural language queries; e.g., queries on the Charades-STA dataset often have the structure "sub + pred + obj". By exploiting temporal boundary consistency, we propose a manual-annotation-free omnipotent distillation algorithm with an LLM, adapting the localization ability from a teacher model focusing on the vanilla language style to a student model whose input sentence queries carry rich language variations. (The figure shows a query such as "A person picks up his phone and talks on it" paraphrased by the LLM into variants like "The phone is picked up by a person and he is talking on it" and "An individual takes hold of his mobile phone and engages in a chat", with the teacher's and student's predictions tied together by temporal boundary consistency.)

Due to this, the weakly-supervised setting has attracted increasing attention in recent years (Gao et al. 2019; Mithun, Paul, and Roy-Chowdhury 2019; Lin et al. 2020; Chen et al. 2020; Tan et al. 2021; Zheng et al. 2022a,b), where only video-level descriptions are available during training.
However, the performance of existing weakly-supervised methods is still unsatisfactory and lags behind the fully-supervised methods, because the incomplete annotations do not provide sufficient supervisory signals for training. Moreover, the language queries provided by human annotators often suffer from limitations in writing style, such as restricted vocabulary and sentence structures. As a result, this hinders the model's capability to generalize to real-world applications, whose language queries showcase notable variations in writing styles as well as cultural nuances. To address these issues, we propose an Omnipotent Distillation algorithm (OmniD) with the large language model (LLM), as shown in Fig. 1. Specifically, we first devise a bootstrapping learning framework to train a sophisticated teacher model. The teacher model is composed of multiple sub-models, each serving as an auxiliary model that addresses the shortage of effective training guidance for the other sub-models by providing pseudo-labels of temporal boundaries. An intra-model consistency distillation is designed to explicitly compensate for the localization errors of each sub-model. Such a design alleviates the deficient-supervision problem through additional computation during training rather than extra manual annotations. Subsequently, we capitalize on the generation capability of the large language model (Brown et al. 2020; Touvron et al. 2023; OpenAI 2023) to paraphrase each sentence query, generating diverse rephrased sentences with varied vocabulary and sentence structures while maintaining the original meaning. Given the semantic equivariance of paraphrasing, the video moments described by the primary sentence query and its paraphrased version should also exhibit equivariance.
To this end, we devise a semantic equivariance distillation loss to distill the teacher model into a smaller student model by encouraging consistency between the temporal boundary predictions for the paraphrased sentence and for the original one. In this way, we transfer the localization ability from a teacher model, which emphasizes a vanilla language style, to a student model that engages with input sentences featuring diverse and intricate language variations. In addition, existing datasets such as Charades-STA (Gao et al. 2017) and ActivityNet-Captions (Krishna et al. 2017) suffer from restricted writing styles in their sentence queries. For instance, the majority of sentence queries in the Charades-STA dataset share a similar sentence structure, following the "subject-verb-object" pattern, with passive-voice constructions being rare occurrences. To evaluate the model's generalization across various queries, we create a total of six variants of these datasets with rich vocabulary and sentence structures.

Our contributions are summarized as follows: 1) To the best of our knowledge, we are the first to exploit the large language model for the task of natural language video localization. 2) We propose an omnipotent distillation algorithm to tackle the challenges of ineffective supervision and query multiformity; intra-model consistency distillation and semantic equivariance distillation are crafted to tackle these challenges, respectively. 3) To evaluate the generalization capability of the model across different dimensions of language variation, we construct comprehensive datasets by expanding upon existing ones. 4) Extensive experiments verify the superiority of the proposed methods in both performance and adaptability compared to state-of-the-art approaches.

Related Works

Natural Language Video Localization. The task of natural language video localization is initially introduced by Gao et al.
(2017) with the goal of identifying the start and end time points of a video moment based on a natural language query and an untrimmed video. Gao et al. (2017) propose a language-video localizer to identify the temporal boundaries of candidate video clips. A semantic matching reinforcement learning framework is devised by Wang, Huang, and Wang (2019) to reduce the large visual-semantic discrepancy between video and language. A cross-modal attention network is proposed in (Liu et al. 2018) to highlight the essential parts of visual features or query contents. Bao, Zheng, and Mu (2021) devise an event propagation network to localize video moments that are semantically related and temporally coordinated. While achieving promising localization accuracy, the fully-supervised methods rely on manual annotations of temporal boundaries, which are labor-intensive and subjective to label. To solve this issue, weakly-supervised natural language video localization has recently gained growing attention (Zheng et al. 2022a,b; Bao et al. 2024; Tan et al. 2021; Chen et al. 2020; Lin et al. 2020), where only the sentence query and the paired video are required for training. Early works (Mithun, Paul, and Roy-Chowdhury 2019; Tan et al. 2021) explore joint visual-semantic embedding and text-guided attention to avoid laborious temporal boundary annotations. A latent graph co-attention network (Tan et al. 2021) is proposed to use fine-grained frame-by-word interactions to reason about correspondences between possible pairs of frames. Chen et al. (2020) devise a two-stage model to tackle weakly-supervised natural language video localization in a coarse-to-fine manner, where more precise start and end timestamps of the retrieval results are obtained during the fine stage.
To the best of our knowledge, we are the first in the natural language video localization literature to use a large language model to boost the adaptability of the localization model to diversified language queries.

Knowledge Distillation. Knowledge distillation, initially introduced in (Hinton, Vinyals, and Dean 2015), serves the purpose of compressing and accelerating models. It achieves this by transferring the knowledge amassed by a larger, intricate model to a smaller, more efficient counterpart. In recent years, knowledge distillation has seen expanded applications in various domains, including zero-shot learning (Nayak et al. 2019; Micaelli and Storkey 2019), domain adaptation (Deng, Luo, and Zhu 2019; Chen et al. 2019), and multimodal learning (Gupta, Hoffman, and Malik 2015; Wang et al. 2020; Yu, Liu, and Chan 2021). Different from these works applied in fully-/semi-supervised scenarios, we capitalize on knowledge distillation to overcome the obstacles of insufficient supervision and lacking query multiformity in the weakly-supervised setting.

Large Language Model. Large language models (LLMs) are transformer-based language models that contain hundreds of billions or more parameters trained on massive text data, such as GPT-3 (Brown et al. 2020), GPT-4 (OpenAI 2023), PaLM (Chowdhery et al. 2022), and LLaMA (Touvron et al. 2023). LLMs not only show significant performance advancements but also exhibit strong capacities for in-context learning (Brown et al. 2020) that are not present in small-scale language models, e.g., BERT (Devlin et al. 2019). A milestone utilization of LLMs is ChatGPT (https://chat.openai.com/), which harnesses the LLMs of the GPT series for dialogue, showcasing an impressive conversational ability. We employ the remarkable paraphrasing capability of LLMs to enhance the adaptability of natural language video localization models when dealing with a domain shift of sentence queries. We highlight that our approach eliminates additional annotation efforts to label temporal boundaries, thanks to the proposed bootstrapping distillation.

Figure 2: An overview of the proposed omnipotent distillation (OmniD) with LLM. Our method consists of 1) an LLM paraphrasing module, 2) a teacher model via bootstrapping learning with intra-model consistency distillation, and 3) a student model via semantic equivariance distillation with the LLM. In the teacher model, intra-model supervision signals are utilized to mutually reduce the problem of ineffective training guidance and explicitly compensate for the errors of each sub-model. The student model then improves the generalization ability across diversified language queries using the equivariance property of the moment proposal before/after sentence paraphrasing. The proposal candidates highlighted in dark green correspond to those identified as positive predictions.
Omnipotent Distillation with LLM

Method Overview

Given a sentence query Q and an untrimmed video V, natural language video localization aims to localize the temporal boundary b = (s, e) of the video moment described by the query Q, where s and e denote the start and end time points of the moment, respectively. In the weakly-supervised setting, the model requires only sentence-video pairs for training, without relying on annotations of temporal boundaries. However, a weakly-supervised model is often constrained by the insufficient-supervision dilemma caused by incomplete annotation. Moreover, the language queries in existing datasets lack diversity in sentence structures and vocabulary. For instance, the sentence queries in the Charades-STA dataset often follow the structure "sub + pred + obj". This limitation hampers the model's capacity to generalize robustly to real-world scenarios characterized by a diverse range of language variations. To overcome these difficulties, as shown in Fig. 2, we propose Omnipotent Distillation (OmniD) with the large language model, exploiting two sorts of consistency among the temporal boundary predictions. 1) Intra-model consistency: we first devise a bootstrapping learning algorithm with intra-model consistency distillation to train a teacher model. This teacher model is composed of multiple sub-models, each serving as an assistive model that collaboratively provides training guidance to the other sub-models, alleviating the ineffective-supervision constraint. 2) Semantic equivariance consistency: we capitalize on the LLM (OpenAI 2023) to paraphrase the sentence query, formulating a wide range of rephrased sentences with diverse vocabulary and sentence structures while keeping the semantic meaning unchanged. The localization capability of the teacher model is then transferred into a smaller, more efficient student model.
This is achieved by encouraging the temporal boundary predictions of the student model for paraphrased sentences to be aligned with the predictions of the teacher model for the unaltered sentence queries.

Sub-Model Construction

We use the CPL model (Zheng et al. 2022b) as the basic network architecture for each sub-model, which comprises a proposal generator and a sentence reconstructor. Here we give a brief overview of the network architecture; more details can be found in (Zheng et al. 2022b). Specifically, the sentence query Q and the untrimmed video V are first encoded into feature representations q ∈ R^{L_Q×d} and v ∈ R^{L_V×d} by stacks of multi-head attention layers, respectively. Then a stack of cross-modal attention layers regresses K proposal candidates {b_k = (s_k, e_k)} (k = 1 ... K) from q and v. We randomly mask M words {w_i} in the sentence and enforce the sentence reconstructor to reconstruct the masked words from the k-th proposal candidate as \hat{w}_i^k. The cross-entropy loss then evaluates the reconstruction correctness as

L_{rec}[k] = \sum_{i=1}^{M} L_{ce}(w_i, \hat{w}_i^k)    (1)

Assume that the k*-th proposal candidate is selected as the positive proposal that matches the sentence query. A ranking loss L_{rank} is further applied as in (Zheng et al. 2022b) to guarantee that the positive candidate k* has a reconstruction loss smaller than those of the hard negative candidates by a specific margin. The final loss function is written as:

L_{sub} = L_{rec}[k^*] + L_{rank}[k^*]    (2)

In the weakly-supervised setting, the annotations of temporal boundaries are not available, and thus the ground-truth positive proposal is unknown during training. Previous works (Lin et al. 2020; Zheng et al.
2022a,b) heuristically select the proposal candidate with the smallest reconstruction loss as the positive proposal k*, formulated as

k^* = \arg\min_{k=1...K} L_{rec}[k]    (3)

However, such a selection is inherently prone to errors due to the shortage of adequate training guidance. To cope with this barrier, we propose bootstrapping learning with intra-model consensus supervisory signals, as illustrated in the following subsection.

Teacher Model via Bootstrapping Learning with Intra-Model Consistency Distillation

The teacher model T is composed of multiple sub-models, each serving as an assistive model that mutually provides training guidance to the other sub-models and alleviates the inadequate-supervision issue. Specifically, the teacher model consists of N sub-models. For the i-th sub-model, each of the other sub-models is considered a reference model that serves as a source of pseudo-labels for the temporal boundaries. We then leverage the predictions of the other (N − 1) sub-models to create a consensus-based pseudo-label for the i-th sub-model, enhancing its accuracy and reliability. Assume that the i-th sub-model predicts P proposal candidates {b_ij} (j = 1 ... P) for the sentence query, and let \hat{b}_k denote the proposal chosen by the k-th sub-model as its prediction. The consensus score c_ij for the j-th proposal candidate of the i-th sub-model is then determined as the summed intersection over union (IoU) with the other (N − 1) sub-models' predictions, written as:

c_{ij} = \sum_{k=1, k≠i}^{N} \sigma(b_{ij}, \hat{b}_k)    (4)

where σ is the IoU operator. Instead of the heuristic choice of Eq. 3 as the pseudo-label for training, which is prone to errors, we exploit the consensus scores from the other (N − 1) sub-models to determine the pseudo-label for the i-th model.
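As a toy plain-Python sketch (the boundary values, in seconds, are made up for illustration), the IoU operator σ and the consensus score of Eq. 4 can be written as:

```python
def temporal_iou(a, b):
    """IoU operator sigma between two temporal boundaries (start, end), in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def consensus_scores(candidates_i, peer_predictions):
    """Eq. 4: c_ij = sum over the other sub-models' chosen proposals b_hat_k
    of IoU(b_ij, b_hat_k), for each candidate j of sub-model i."""
    return [sum(temporal_iou(b, bk) for bk in peer_predictions) for b in candidates_i]

# Sub-model i proposes three candidates; its two peers predicted (3, 9) and (4, 11).
scores = consensus_scores([(0, 5), (4, 10), (8, 12)], [(3, 9), (4, 11)])
best_j = max(range(len(scores)), key=scores.__getitem__)  # candidate (4, 10) agrees most
```

Picking the candidate with the largest consensus score, as done here via `best_j`, is what makes the pseudo-label robust to a single sub-model's error: a candidate only scores highly if several peers overlap it.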
In more detail, the final pseudo-label for the i-th sub-model is the proposal candidate with the largest consensus score among its P proposal candidates:

p_i = \arg\max_j (c_{ij})    (5)

i.e., the p_i-th proposal candidate of the i-th model is chosen as the positive one. The loss function of intra-model consistency distillation for bootstrapping learning can then be formulated as

L^{T}_{boot} = \sum_{i=1}^{N} (L_{rec}[p_i] + L_{rank}[p_i])    (6)

In this way, intra-model supervision signals are leveraged to jointly relieve the problem of ineffective supervision and explicitly compensate for the errors of each sub-model. The final prediction b_T of the teacher model is obtained by selecting, among the N sub-models, the prediction with the largest consensus score:

s_i = \sum_{k=1, k≠i}^{N} \sigma(\hat{b}_i, \hat{b}_k),    b_T = \hat{b}[\arg\max_{i=1...N} s_i]    (7)

where σ represents the IoU operator, and \hat{b}[k] denotes the entry of \hat{b} with the k-th index.

Semantic Equivariance Distillation with LLM

Given a natural language query Q, we request the large language model GPT-3.5 (OpenAI 2023) to paraphrase Q as Q'. The paraphrased Q' maintains the intended semantic meaning while enjoying rich diversity in writing style, such as vocabulary and sentence structure. The prompts that guide the large language model, with context information for paraphrasing, are included in the supplementary materials (available in the arXiv version of this paper). Given the semantic equivariance of paraphrasing, the video moments described by the primary sentence query and its paraphrased version should also exhibit equivariance. While ground-truth temporal boundaries are absent in the weakly-supervised setting, we exploit this equivariance property to transfer the knowledge of the teacher model to a lightweight student model S, further enabling its versatile ability in language understanding.
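One way to operationalize this transfer, sketched here in plain Python with toy boundaries (this is only the proposal-matching step, not the full training loss), is to pick, among the student's proposal candidates for the paraphrased query, the one that best overlaps the teacher's prediction for the original query and treat it as the positive proposal:

```python
def temporal_iou(a, b):
    """IoU between two temporal boundaries (start, end), in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def align_to_teacher(student_candidates, teacher_prediction):
    """Index of the student candidate with the highest IoU to the teacher's prediction."""
    ious = [temporal_iou(b, teacher_prediction) for b in student_candidates]
    return max(range(len(ious)), key=ious.__getitem__)

# Teacher localized the original query at (3, 9); the student proposed three
# candidates for the paraphrased query, of which (2, 8) overlaps it best.
positive = align_to_teacher([(0, 4), (2, 8), (10, 15)], (3, 9))
```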
Assume that the student model predicts K proposal candidates {b'_k} for the paraphrased query Q'. The localization result of the student model on Q' is encouraged to accord with that of the teacher model on the unaltered query Q. The semantic equivariance distillation loss for the student model S is then formulated as

L^{S} = L_{rec}[p_S] + L_{rank}[p_S]    (8)

where the pseudo-label p_S is determined by choosing the proposal from {b'_k} of Q' with the highest IoU to the prediction of the teacher model:

s_i = \sigma(b'_i, b_T),    p_S = \arg\max_i s_i    (9)

During inference, the teacher model is dropped and only the student model is utilized to localize the video moment for a given language query.

Dataset Expansion

Charades-STA (Gao et al. 2017) and ActivityNet-Captions (Krishna et al. 2017) are two widely-used datasets for natural language video localization. These datasets suffer from restricted writing styles in their sentence queries. For instance, the majority of the sentence queries in the Charades-STA dataset exhibit a consistent sentence structure, adhering to the "subject-verb-object" pattern. Additionally, passive-voice constructions are infrequent in both Charades-STA and ActivityNet-Captions. In a real-world setting, however, it is common to use assorted expressions to convey the same semantics of a video moment. To assess the model's generalization capability across various types of queries, we extend the testing data of Charades-STA and ActivityNet-Captions by creating three variants that encompass various writing styles, as follows. i) Sentence structure variant (SS): this variant changes sentence structures while largely maintaining the original vocabulary. ii) Vocabulary variant (VO): in this variant, we modify only the vocabulary used in the sentences while endeavoring to keep the sentence structures intact.
iii) Hybrid variant (HY): both the vocabulary and the sentence structures are modified in this variant to introduce a combined effect. To create these variant datasets, we first design a list of different prompts (available in the supplementary materials) and request the large language model (OpenAI 2023) to paraphrase the natural language sentences. We then manually check the semantic equivariance between each original sentence and its paraphrased counterpart in the testing dataset. These variations allow us to examine the model's performance across different dimensions of language variation, providing a comprehensive assessment of its generalization capabilities.

Experiments

Datasets

We validate the effectiveness of the proposed approach against state-of-the-art methods on two benchmark datasets, namely Charades-STA (Gao et al. 2017) and ActivityNet-Captions (Krishna et al. 2017). Furthermore, we introduce six variant datasets built upon the vanilla Charades-STA and ActivityNet-Captions datasets, which enables us to evaluate the model's localization accuracy on diverse sentences with varying structures and vocabulary. The details of the datasets are given as follows. 1) Charades-STA (Gao et al. 2017) comprises 9,848 videos showcasing daily indoor activities. The sentence queries in the original dataset have an average length of 8.6 words, and the videos have an average duration of 29.8 seconds. This dataset was initially designed for action recognition/localization (Sigurdsson et al. 2016) and subsequently extended by Gao et al. (2017) with language descriptions for natural language video localization. As illustrated in the "Dataset Expansion" subsection, we introduce three variant datasets: Charades-STA SS, Charades-STA VO, and Charades-STA HY. In each variant dataset, we paraphrase each original sentence query three times, generating distinct paraphrased sentences according to the specified criteria.
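The variant-construction loop just described can be sketched as follows. Note that the prompt templates and the `paraphrase` callable below are hypothetical stand-ins: the actual prompts are given only in the paper's supplementary materials, and a real run would call an LLM API instead of the toy lambda.

```python
# Hypothetical prompt templates for the three variant types (SS / VO / HY);
# the actual prompts used in the paper are in its supplementary materials.
VARIANT_PROMPTS = {
    "SS": "Rewrite with a different sentence structure, keeping the vocabulary: {q}",
    "VO": "Rewrite with different vocabulary, keeping the sentence structure: {q}",
    "HY": "Rewrite with both different vocabulary and sentence structure: {q}",
}

def build_variant_queries(queries, paraphrase, n_paraphrases=3):
    """Paraphrase each query n_paraphrases times per variant type.

    `paraphrase(prompt)` is a placeholder for an LLM call.
    """
    variants = {name: [] for name in VARIANT_PROMPTS}
    for name, template in VARIANT_PROMPTS.items():
        for q in queries:
            for _ in range(n_paraphrases):
                variants[name].append(paraphrase(template.format(q=q)))
    return variants

# Toy stand-in for the LLM: simply echoes the prompt in upper case.
toy = build_variant_queries(["A person reads a book."], lambda p: p.upper())
```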
As a result, the total number of sentences in the variant datasets increases by a factor of 9 compared to the original one. Detailed statistics of the variant datasets are included in the supplementary materials of our arXiv version. 2) ActivityNet-Captions (Krishna et al. 2017) comprises 19,290 untrimmed videos encompassing a wide range of diverse and open visual content. In the original dataset, the average video duration is 117.74 seconds and the average description length is 13.16 words. Each video contains 2.4 annotated moments with an average duration of 8.2 seconds. Similar to the Charades-STA variants, three variant datasets are introduced, i.e., ActivityNet-Captions SS, ActivityNet-Captions VO, and ActivityNet-Captions HY, with rich language multiformity. The total number of sentences is nine times that of the vanilla dataset, and we provide more details of the variants in the supplementary materials.

Evaluation Metrics

Following previous works (Gao et al. 2017; Lin et al. 2020; Zheng et al. 2022b), we adopt the metric "R@m" for evaluation. Specifically, we calculate the intersection over union (IoU) between the localized temporal moment and the corresponding ground truth. "R@m" is defined as the percentage of language queries with correct localization results, where a localization is correct if its IoU is larger than m. As in prior works, we report results with m = {0.3, 0.5, 0.7} on Charades-STA and its variants, and m = {0.1, 0.3, 0.5} on ActivityNet-Captions and its variants.

Implementation Details

Following prevailing works, we use the I3D network (Carreira and Zisserman 2017) and the C3D network (Tran et al. 2015) to extract video features for Charades-STA and ActivityNet-Captions, respectively. We employ pre-trained GloVe word2vec embeddings (Pennington, Socher, and Manning 2014) to extract sentence features. We set the maximum description length to 20 on both datasets.
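For concreteness, the "R@m" metric described under Evaluation Metrics can be sketched in plain Python (the predictions and ground truths below are made-up toy values):

```python
def temporal_iou(a, b):
    """IoU between a localized moment and a ground-truth moment, each (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_m(predictions, ground_truths, m):
    """R@m: fraction of queries whose localized moment has IoU larger than m."""
    correct = sum(temporal_iou(p, g) > m for p, g in zip(predictions, ground_truths))
    return correct / len(predictions)

# Toy evaluation over three queries: the first and third are correct at m = 0.5.
preds = [(2, 8), (0, 5), (10, 20)]
gts = [(3, 9), (6, 12), (11, 19)]
r_at_05 = recall_at_m(preds, gts, 0.5)
```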
The dimensions of the hidden states for both language and visual features are set to 256. The number of video snippets is resampled to 200 on both datasets. We use the Adam optimizer (Kingma and Ba 2014) for training with a batch size of 32. The number of sub-models N in the teacher model is set to 3. We first train the teacher model for 15 epochs, and subsequently distill it into the student model for another 15 epochs. The learning rate is set to 0.0005 for the Charades-STA dataset and 0.00035 for the ActivityNet-Captions dataset, respectively.

Performance Comparisons

1) Charades-STA variant datasets. Table 1 summarizes the localization accuracy of the proposed OmniD against state-of-the-art approaches on the three variant datasets of Charades-STA, i.e., Charades-STA SS, Charades-STA VO, and Charades-STA HY. In addition to the standard versions, we further enhance the previous methods with LLM paraphrasing for a fair comparison (indicated by a check mark in the respective column). To achieve this, we carefully reimplement each of these methods using their officially released code and incorporate training with the additional natural language queries paraphrased by the LLM. Generally, LLM paraphrasing can boost the generalization capabilities of these methods when dealing with sentences exhibiting diverse variations. However, such enhancements are modest and occasionally unstable for these methods; for instance, LLM paraphrasing results in a mere one-point improvement for SCN on the Charades-STA HY dataset. In contrast to these approaches, the localization accuracy of our OmniD consistently and markedly improves with LLM paraphrasing. This improvement is attributed to the enforced consistency of semantic equivariance distillation, which sets our method apart.

Method                    LLM   Charades-STA SS        Charades-STA VO        Charades-STA HY
                                R@0.3  R@0.5  R@0.7    R@0.3  R@0.5  R@0.7    R@0.3  R@0.5  R@0.7
SCN (Lin et al. 2020)      ✗    55.51  22.26   6.51    53.24  21.45   5.70    54.27  22.42   6.27
CPL (Zheng et al. 2022b)   ✗    53.74  38.78  17.54    46.69  33.85  15.47    48.64  35.32  15.67
OmniD (Ours)               ✗    56.91  41.41  18.64    49.78  35.66  16.47    51.80  36.57  16.81
SCN (Lin et al. 2020)      ✓    57.81  24.93   7.05    58.97  25.71   8.13    56.64  23.04   6.83
CPL (Zheng et al. 2022b)   ✓    60.30  44.74  20.70    58.23  42.95  19.60    59.44  43.96  20.32
OmniD (Ours)               ✓    65.66  49.09  22.93    64.78  48.63  22.30    65.09  49.04  22.83

Table 1: Comparisons with state-of-the-art methods on three Charades-STA variation datasets.

2) ActivityNet-Captions variant datasets. Table 2 shows the performance comparison between OmniD and the existing methods on ActivityNet-Captions SS, ActivityNet-Captions VO, and ActivityNet-Captions HY. Notably, OmniD consistently outperforms the state-of-the-art methods by a significant margin. For instance, with LLM paraphrasing, our OmniD method surpasses CPL by more than 3 points on the ActivityNet-Captions VO dataset in terms of "R@0.5".

Method                    LLM   ActivityNet-Captions SS   ActivityNet-Captions VO   ActivityNet-Captions HY
                                R@0.1  R@0.3  R@0.5       R@0.1  R@0.3  R@0.5       R@0.1  R@0.3  R@0.5
SCN (Lin et al. 2020)      ✗    74.29  46.46  26.92       74.41  46.44  26.96       74.36  46.51  27.01
CPL (Zheng et al. 2022b)   ✗    73.03  47.90  26.86       73.58  47.87  27.13       73.19  47.98  26.94
OmniD (Ours)               ✗    72.79  48.48  27.86       74.95  48.55  28.83       72.60  47.98  28.30
SCN (Lin et al. 2020)      ✓    74.87  46.95  28.05       74.78  47.14  28.27       74.87  47.14  28.09
CPL (Zheng et al. 2022b)   ✓    74.71  47.45  26.47       74.64  47.26  26.52       74.75  47.37  26.57
OmniD (Ours)               ✓    78.07  51.06  28.50       81.73  52.85  29.53       79.62  51.84  28.99

Table 2: Comparisons with state-of-the-art methods on three ActivityNet-Captions variation datasets.
Moreover, the enhancement in performance achieved by incorporating the LLM into OmniD is notably more consistent and pronounced than the effects observed when adding it to SCN and CPL. This demonstrates the efficacy of the proposed omnipotent distillation technique and the benefit of exploiting semantic equivariance for the distillation.

3) Vanilla Charades-STA and ActivityNet-Captions datasets. We compare the localization accuracy on the original Charades-STA and ActivityNet-Captions datasets in Table 3. To ensure a fair comparison with prior research, where metrics are reported without the use of LLMs, we discard the LLM paraphrasing from our OmniD models and only use the original sentences for the distillation. Remarkably, both the OmniD Teacher and Student models surpass state-of-the-art methods by a clear margin on both datasets. For instance, the OmniD student model achieves more than an 8% enhancement over the leading previous method CPL on the Charades-STA dataset in terms of R@0.7, and our teacher model is more than 5 points higher than CPL on the ActivityNet-Captions dataset in terms of R@0.3.

Ablation Studies

To investigate the effectiveness of the proposed algorithms, we conduct ablation studies on the Charades-STA dataset and its variations.

1) The benefit of semantic equivariance distillation. Table 4 showcases the advantages of semantic equivariance distillation across three variant datasets of Charades-STA. We investigate the impact of degrading semantic equivariance distillation in two aspects: 1) Whether to use the LLM: if we choose not to utilize the sentence queries paraphrased by the LLM, we directly feed the original, unaltered sentences to the student model. 2) Whether to use the distillation loss in Eq. 8: when Eq. 8 is not used, we replace it with the heuristic loss in Eq. 2.
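The two training signals compared in this ablation can be sketched as follows. This is a minimal illustration only: the paper's actual Eq. 8 and Eq. 2 are not reproduced in this excerpt, so `equivariance_distill_loss` (soft teacher targets on the original query vs. student predictions on the paraphrase) and `heuristic_loss` (hard teacher argmax target) are assumed stand-ins for them.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def equivariance_distill_loss(teacher_logits, student_logits):
    """KL(teacher || student) over temporal-proposal scores: the student,
    fed a paraphrased query, is pulled toward the teacher's full
    distribution on the original query (soft targets)."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return float(np.sum(p * (np.log(p) - np.log(q))))

def heuristic_loss(teacher_logits, student_logits):
    """Hard-label alternative: cross-entropy against the teacher's argmax,
    discarding the rest of the teacher's distribution."""
    target = int(np.argmax(teacher_logits))
    q = softmax(student_logits)
    return float(-np.log(q[target]))
```

The soft-target variant transfers the teacher's confidence over all proposals, which is what the equivariance constraint exploits; the hard-label variant only transfers the top prediction.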
The results indicate that omitting either of them (marked by a cross symbol in the respective column) noticeably degrades localization capability across all three variant datasets.

Table 3: Comparisons with state-of-the-art methods on the vanilla datasets.

                                  | Charades-STA        | ActivityNet Captions
Method                            | R@0.3 R@0.5 R@0.7   | R@0.1 R@0.3 R@0.5
SCN (Lin et al. 2020)             | 42.96 23.58  9.97   | 71.48 47.23 29.22
BAR (Wu et al. 2020)              | 44.97 27.04 12.23   |   −   49.03 30.73
MARN (Song et al. 2020)           | 48.55 31.94 14.81   |   −   47.01 29.95
RTBPN (Zhang et al. 2020b)        | 60.04 32.36 13.24   | 73.73 49.77 29.63
CCL (Zhang et al. 2020c)          |   −   33.21 15.68   |   −   50.12 31.07
LCNet (Yang et al. 2021)          | 59.60 39.19 18.87   | 78.58 48.49 26.33
VCA (Wang, Chen, and Jiang 2021)  | 58.58 38.13 19.57   | 67.96 50.45 31.00
WSTAN (Wang et al. 2022)          | 43.39 29.35 12.28   | 79.78 52.45 30.01
CPL (Zheng et al. 2022b)          | 66.40 49.24 22.39   | 79.86 53.67 31.24
OmniD-Teacher (Ours)              | 69.13 53.77 24.70   | 83.41 59.15 32.34
OmniD-Student (Ours)              | 68.30 52.31 24.35   | 83.24 57.34 31.60

Table 4: Ablation studies of equivariance distillation on Charades-STA variation datasets.

LLM | Distill | Charades-STA SS     | Charades-STA VO     | Charades-STA HY
    |         | R@0.3 R@0.5 R@0.7   | R@0.3 R@0.5 R@0.7   | R@0.3 R@0.5 R@0.7
 ✓  |    ✓    | 65.66 49.09 22.93   | 64.78 48.63 22.30   | 65.09 49.04 22.83
 ✓  |    ✗    | 61.18 44.65 19.77   | 59.07 43.15 19.53   | 60.75 44.55 20.23
 ✗  |    ✓    | 56.91 41.41 18.64   | 49.78 35.66 16.47   | 51.80 36.57 16.81
 ✗  |    ✗    | 54.43 39.98 18.80   | 47.25 34.53 16.42   | 49.28 35.92 16.80

2) The advantage of intra-model consistency distillation learning. In this subsection, we delve into the benefits of bootstrapping learning through ablation studies. The ablation results for the teacher model's localization accuracy on the original Charades-STA dataset are summarized in Table 5. We refer to the complete teacher model as "full", the teacher model without intra-model consistency distillation as "full w/o icd", and the teacher model reduced
to a single sub-model without bootstrapping learning as "full w/o bootstrap".

Table 5: Ablation studies on intra-model consistency.

Method              | R@0.3 | R@0.5 | R@0.7
full                | 69.13 | 53.77 | 24.70
full w/o icd        | 67.48 | 51.68 | 23.50
full w/o bootstrap  | 66.50 | 50.44 | 22.86

Upon removing the intra-model consistency distillation loss, i.e., "full w/o icd", we observe a decline of approximately one to two points in each evaluation metric. This underscores that the enhancement of the teacher model stems not only from the ensemble effect, but also from the mutual learning among the sub-models. Moreover, downgrading the bootstrapping learning to a single sub-model results in a drop of around 3 points. This also verifies that the consensus of the N sub-models' predictions can provide useful training guidance for the single sub-model.

3) The influence of hyperparameter N in bootstrapping learning. The teacher model T contains N sub-models, and the hyperparameter N plays a crucial role in bootstrapping learning. Fig. 3 presents the impact of N on T's localization accuracy on vanilla Charades-STA. As N increases, the average of "R@m", where m = {0.3, 0.5, 0.7}, gradually becomes larger up to N = 3. Beyond N = 3, however, the model's accuracy reaches saturation. This can be attributed to the fact that little additional complementary information is provided once the number of sub-models is sufficient.

Figure 3: Ablation studies on the sub-model number N (average R@m for N = 1, 2, 3, 4, 5: 46.60, 48.34, 49.20, 49.25, 49.22).

Conclusion

This paper for the first time leverages the large language model (LLM) for weakly-supervised natural language video localization. We propose omnipotent distillation with LLM to resolve the key obstacles of this task.
Firstly, a bootstrapping learning framework with intra-model consistency is devised to alleviate the limitation of insufficient supervision. Secondly, we capitalize on the LLM to paraphrase the language query and then distill the teacher model into an efficient student model using the semantic equivariance property of paraphrasing. To assess the generalization of the model across diverse queries, we create extensive datasets with different types of variations. Experiments demonstrate that our method achieves state-of-the-art results in both adaptability and performance.

Acknowledgements

This work was carried out at the Rapid-Rich Object Search (ROSE) Lab, School of EEE, NTU, Singapore. The research is supported in part by the NTU-PKU Joint Research Institute (a collaboration between the Nanyang Technological University and Peking University that is sponsored by a donation from the Ng Teng Fong Charitable Foundation).

References

Bao, P.; and Mu, Y. 2022. Learning Sample Importance for Cross-Scenario Video Temporal Grounding. In ICMR.
Bao, P.; Xia, Y.; Yang, W.; Ng, B. P.; Er, M. H.; and Kot, A. C. 2024. Local-Global Multi-Modal Distillation for Weakly-Supervised Temporal Video Grounding. In AAAI.
Bao, P.; Yang, W.; Ng, B. P.; Er, M. H.; and Kot, A. C. 2023. Cross-modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization. In AAAI.
Bao, P.; Zheng, Q.; and Mu, Y. 2021. Dense Events Grounding in Video. In AAAI.
Brown, T. B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; and et al. 2020. Language Models are Few-Shot Learners. In NeurIPS.
Carreira, J.; and Zisserman, A. 2017. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In CVPR.
Chen, Y.-C.; Lin, Y.-Y.; Yang, M.-H.; and Huang, J.-B. 2019. CrDoCo: Pixel-Level Domain Transfer With Cross-Domain Consistency. In CVPR.
Chen, Z.; Ma, L.; Luo, W.; Tang, P.; and Wong, K.-Y. K. 2020.
Look Closer to Ground Better: Weakly-Supervised Temporal Grounding of Sentence in Video. ArXiv.
Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; and et al. 2022. PaLM: Scaling Language Modeling with Pathways. ArXiv.
Deng, Z.; Luo, Y.; and Zhu, J. 2019. Cluster Alignment With a Teacher for Unsupervised Domain Adaptation. In ICCV.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL.
Gao, J.; Sun, C.; Yang, Z.; and Nevatia, R. 2017. TALL: Temporal Activity Localization via Language Query. In ICCV.
Gao, M.; Davis, L. S.; Socher, R.; and Xiong, C. 2019. WSLLN: Weakly Supervised Natural Language Localization Networks. In EMNLP.
Gupta, S.; Hoffman, J.; and Malik, J. 2015. Cross Modal Distillation for Supervision Transfer. In CVPR.
Hinton, G. E.; Vinyals, O.; and Dean, J. 2015. Distilling the Knowledge in a Neural Network. ArXiv, abs/1503.02531.
Kingma, D. P.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. In ICLR.
Krishna, R.; Hata, K.; Ren, F.; Fei-Fei, L.; and Carlos Niebles, J. 2017. Dense-Captioning Events in Videos. In ICCV.
Lin, Z.; Zhao, Z.; Zhang, Z.; Wang, Q.; and Liu, H. 2020. Weakly-Supervised Video Moment Retrieval via Semantic Completion Network. In AAAI.
Liu, M.; Wang, X.; Nie, L.; Tian, Q.; Chen, B.; and Chua, T.-S. 2018. Cross-Modal Moment Localization in Videos. In ACM MM.
Micaelli, P.; and Storkey, A. J. 2019. Zero-Shot Knowledge Transfer via Adversarial Belief Matching. In NeurIPS.
Mithun, N. C.; Paul, S.; and Roy-Chowdhury, A. K. 2019. Weakly Supervised Video Moment Retrieval From Text Queries. In CVPR.
Nayak, G. K.; Mopuri, K. R.; Shaj, V.; Babu, R. V.; and Chakraborty, A. 2019. Zero-Shot Knowledge Distillation in Deep Networks. In ICCV.
OpenAI. 2023. GPT-4 Technical Report. ArXiv.
Pennington, J.; Socher, R.; and Manning, C. D. 2014. GloVe: Global Vectors for Word Representation. In EMNLP.
Qi, M.; Qin, J.; Yang, Y.; Wang, Y.; and Luo, J. 2021. Semantics-Aware Spatial-Temporal Binaries for Cross-Modal Video Retrieval. TIP.
Sigurdsson, G. A.; Varol, G.; Wang, X.; Farhadi, A.; Laptev, I.; and Gupta, A. K. 2016. Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding. In ECCV.
Song, Y.; Wang, J.; Ma, L.; Yu, Z.; and Yu, J. 2020. Weakly-Supervised Multi-Level Attentional Reconstruction Network for Grounding Textual Queries in Videos. ArXiv:2003.07048.
Sreenu, G.; and Durai, M. A. S. 2019. Intelligent Video Surveillance: A Review Through Deep Learning Techniques for Crowd Analysis. Journal of Big Data.
Tan, R.; Xu, H.; Saenko, K.; and Plummer, B. A. 2021. LoGAN: Latent Graph Co-Attention Network for Weakly-Supervised Video Moment Retrieval. In WACV.
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; and et al. 2023. LLaMA: Open and Efficient Foundation Language Models. ArXiv.
Tran, D.; Bourdev, L. D.; Fergus, R.; Torresani, L.; and Paluri, M. 2015. Learning Spatiotemporal Features with 3D Convolutional Networks. In ICCV.
Wang, J.; Ma, L.; and Jiang, W. 2020. Temporally Grounding Language Queries in Videos by Contextual Boundary-aware Prediction. In AAAI.
Wang, Q.; Zhan, L.; Thompson, P. M.; and Zhou, J. 2020. Multimodal Learning with Incomplete Modalities by Knowledge Distillation. In KDD.
Wang, W.; Huang, Y.; and Wang, L. 2019. Language-driven Temporal Activity Localization: A Semantic Matching Reinforcement Learning Model. In CVPR.
Wang, Y.; Deng, J.; Zhou, W.; and Li, H. 2022. Weakly Supervised Temporal Adjacent Network for Language Grounding. TMM.
Wang, Z.; Chen, J.; and Jiang, Y.-G. 2021. Visual Co-Occurrence Alignment Learning for Weakly-Supervised Video Moment Retrieval. In ACM MM.
Wu, J.; Li, G.; Han, X.; and Lin, L. 2020. Reinforcement Learning for Weakly Supervised Temporal Grounding of Natural Language in Untrimmed Videos. In ACM MM.
Yang, W.; Zhang, T.; Zhang, Y.; and Wu, F. 2021. Local Correspondence Network for Weakly Supervised Temporal Sentence Grounding. TIP.
Yu, B. X. B.; Liu, Y.; and Chan, K. C. C. 2021. Multimodal Fusion via Teacher-Student Network for Indoor Action Recognition. In AAAI.
Zhang, D.; Dai, X.; Wang, X.; Wang, Y.-F.; and Davis, L. S. 2019a. MAN: Moment Alignment Network for Natural Language Moment Retrieval via Iterative Graph Adjustment. In CVPR.
Zhang, S.; Peng, H.; Fu, J.; and Luo, J. 2020a. Learning 2D Temporal Adjacent Networks for Moment Localization with Natural Language. In AAAI.
Zhang, Z.; Lin, Z.; Zhao, Z.; and Xiao, Z. 2019b. Cross-Modal Interaction Networks for Query-Based Moment Retrieval in Videos. In ACM SIGIR.
Zhang, Z.; Lin, Z.; Zhao, Z.; Zhu, J.; and He, X. 2020b. Regularized Two-Branch Proposal Networks for Weakly-Supervised Moment Retrieval in Videos. In ACM MM.
Zhang, Z.; Zhao, Z.; Lin, Z.; Zhu, J.; and He, X. 2020c. Counterfactual Contrastive Learning for Weakly-Supervised Vision-Language Grounding. In NeurIPS.
Zheng, M.; Huang, Y.; Chen, Q.; and Liu, Y. 2022a. Weakly Supervised Video Moment Localization with Contrastive Negative Sample Mining. In AAAI.
Zheng, M.; Huang, Y.; Chen, Q.; Peng, Y.; and Liu, Y. 2022b. Weakly Supervised Temporal Sentence Grounding with Gaussian-based Contrastive Proposal Learning. In CVPR.
Zhu, W.; Lu, J.; Li, J.; and Zhou, J. 2021. DSNet: A Flexible Detect-to-Summarize Network for Video Summarization. TIP.
End-to-End RGB-D Image Compression via Exploiting Channel-Modality Redundancy

Huiming Zheng1, 2, Wei Gao1, 2*
1 School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, China
2 Peng Cheng Laboratory, China
hmzheng@stu.pku.edu.cn, gaowei262@pku.edu.cn

Abstract

As a kind of 3D data, RGB-D images have been extensively used in object tracking, 3D reconstruction, remote sensing mapping, and other tasks. In the realm of computer vision, the significance of RGB-D images is progressively growing. However, existing learning-based image compression methods usually process RGB images and depth images separately, which cannot fully exploit the redundant information between the modalities, limiting further improvement of the rate-distortion performance. To overcome this defect, in this paper we propose a learning-based dual-branch RGB-D image compression framework. Compared with the traditional RGB-domain compression scheme, a YUV-domain compression scheme is presented for spatial redundancy removal. In addition, Intra-Modality Attention (IMA) and Cross-Modality Attention (CMA) are introduced for modal redundancy removal. To benefit from cross-modal prior information, a Context Prediction Module (CPM) and a Context Fusion Module (CFM) are introduced in the conditional entropy model, which makes the context probability prediction more accurate. The experimental results demonstrate that our method outperforms existing image compression methods on two RGB-D image datasets. Compared with BPG, our proposed framework can achieve up to 15% bit rate saving for RGB images.

Introduction

RGB-D images are an important 3D data format. They have been widely used in 3D scene reconstruction (Zollhöfer et al. 2018), salient object detection (Liao et al. 2020; Gao et al. 2021), robotics and autonomous navigation, medical imaging and health monitoring, environmental monitoring, and other fields.
Unlike RGB images, depth images contain information about the distance from the viewpoint to the surface of scene objects, which provides depth information about 3D scenes. Therefore, RGB-D joint analysis methods are popular in computer vision tasks. However, these methods (Bozic et al. 2020; Ji et al. 2020; Liu, Zhang, and Han 2020; Shi et al. 2022; Tian et al. 2020; Xiang et al. 2021) use an additional modality, which brings extra storage and transmission costs. Therefore, designing an efficient RGB-D image compression method is an important and challenging task.

*Corresponding author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Deep learning-based image compression has been developing for several years. Numerous works (Minnen and Singh 2020; Gao et al. 2020; Wu et al. 2021; Lee et al. 2022; Wu and Gao 2022; Zhu et al. 2022; Tao et al. 2023; Jiang et al. 2023) have been put forth to improve rate-distortion performance and optimize the coding framework. In addition, some open-source algorithm libraries (Bégaint et al. 2020; Gao et al. 2023) also efficiently promote the prosperity of the field. However, existing methods focus on single-image compression, ignoring the direct interaction between the RGB and depth modalities. Modality redundancy is not adequately considered, limiting rate-distortion performance improvement. Besides, knowledge-guided compression is one of the most relevant topics: the coding framework can use additional information from the data source itself, or analyze additional information from its own modules, to better eliminate redundancy. A stereo image compression framework (Deng et al. 2021) adopts homography transformation to remove view redundancy. A light field image compression framework (Zhao et al. 2018) leverages the inherent similarity of light field images to remove the redundancy of different perspectives. A 360° image compression framework (Li et al.
2022) utilizes a latitude-adaptive coding scheme to allocate variant numbers of bits for different regions. Although these methods explore modality redundancy removal to some extent, they cannot achieve a higher compression ratio for RGB-D image compression owing to the significant difference in distribution between RGB images and depth images. Therefore, it is necessary to develop a compression framework dedicated to RGB-D images.

In this paper, we propose an efficient learning-based RGB-D image compression network that exploits the redundant information between modalities and channels. Most learning-based methods sample and compress images in the RGB domain, while our method samples images in the YUV domain in order to remove spatial redundancy in the transform domain for depth images. In addition, we design Intra-Modality Attention (IMA) in the feature extraction module and Cross-Modality Attention (CMA) in the main encoder module to eliminate channel redundancy and modality redundancy separately. We adopt a Context Prediction Module (CPM) and a Context Fusion Module (CFM) in the conditional entropy model to sufficiently excavate the coherence between the two modalities and utilize cross-modality prior information, which provides more accurate probability predictions for the entropy coder. Experimental results prove that our proposed network attains better rate-distortion performance on several widely used RGB-D datasets compared with single-image compression methods. The contributions of our proposed method can be summarized as follows:

• We propose a learning-based RGB-D image compression framework that exploits the redundant information between channels and modalities. The framework operates in the YUV domain rather than the RGB domain, which is conducive to the elimination of spatial redundancy for depth images.
• Intra-Modality Attention and Cross-Modality Attention are designed to remove cross-channel redundancy and cross-modality redundancy for a higher compression ratio. To be specific, multi-head self attention and multi-head cross attention are integrated into the modules for more efficient cross-channel and cross-modality information interaction.

• A Conditional Context-based Entropy Model is adopted to reveal the dependency between the modalities. In addition, a Context Prediction Module and a Context Fusion Module are elaborately designed for efficient probability prediction.

• According to the experimental results, our proposed framework achieves SOTA performance when compared with existing image compression methods on two RGB-D image datasets.

Related Work

Deep Learning-based Image Compression

Over recent years, with the development of deep learning, numerous works on learning-based image compression have been presented (Ballé, Laparra, and Simoncelli 2016; Ballé et al. 2018; Minnen, Ballé, and Toderici 2018; Toderici et al. 2015). Ballé et al. (Ballé, Laparra, and Simoncelli 2016) proposed the most widely used lossy image compression framework. The model is constructed based on an autoencoder, and its final rate-distortion (RD) performance surpassed JPEG and JPEG-2000. On this basis, Ballé et al. (Ballé et al. 2018) then introduced a hyperprior model, which captured spatial redundancy between feature maps. Minnen et al. (Minnen, Ballé, and Toderici 2018) utilized a single PixelCNN (Van Den Oord, Kalchbrenner, and Kavukcuoglu 2016) layer to model autoregressive priors and combined the autoregressive priors with the hyperprior to achieve better symbol probability prediction. The above frameworks are based on Convolutional Neural Networks (CNNs). In addition, Toderici et al. (Toderici et al. 2015) proposed a variable-rate compression framework based on Recurrent Neural Networks (RNNs).
Although it is feasible to extend the above methods to RGB-D image compression frameworks, most of these methods are devoted to single-modality redundancy removal, so further improvement of compression efficiency is limited. It is urgent to design a novel compression framework for RGB-D images.

Stereo Image Compression

With the continuous improvement of 3D vision technology, stereo image compression has become one of the hot topics in the image compression field. A great deal of works (Liu, Wang, and Urtasun 2019; Huang et al. 2021; Wödlinger et al. 2022; Deng et al. 2021) have sprung up in recent years. Liu et al. (Liu, Wang, and Urtasun 2019) first proposed a deep learning-based stereo image compression network (DSIC). In this work, a parameter skip function is proposed to exploit the dependencies between two perspectives. Wödlinger et al. (Wödlinger et al. 2022) proposed a scheme to compress images by exploiting the similarity of the stereo pair: the right image utilizes latent shifting information from the encoded left image for extreme bit rate savings. Deng et al. (Deng et al. 2021) introduced a homography-estimation-based stereo image compression network, called HESIC. To map the left image to the right image, a homography matrix is adopted to achieve the homographic transformation, and the residual information from different views is encoded. The above approaches take full advantage of the similarities between the left and right views. However, for RGB-D images, similar features between modalities are more difficult to extract. In addition, some transformation methods, such as homographic and affine transformations, are probably not suitable for RGB-D images. Therefore, targeted works are still urgently needed in the learning-based RGB-D compression field.

Image Compression with Attention Mechanism

Attention mechanisms have been introduced to image compression for a long time.
In the self attention mechanism, each input pixel interacts with the others to calculate its dependencies with them, and these dependencies are then used to allocate different weights to each position. Cheng et al. (Cheng et al. 2020) first used a simplified non-local attention module and integrated it into the network architecture to improve performance. Chen et al. (Chen et al. 2021) embedded a more efficient non-local attention module into the whole framework to reduce time and space complexity, and used the attention mechanism to generate implicit masks for adaptive bit allocation. Zou et al. (Zou, Song, and Zhang 2022) introduced a simpler and more efficient window-based local attention block, which can fully take advantage of global structures and local textures in the transformer-based structure. The above methods typically use attention mechanisms within a single modality, so the redundant information between modalities is not fully explored. Therefore, it is necessary to design new attention modules to eliminate the redundancy between the RGB and depth modalities.

Methodology

Overview

The overall architecture of our RGB-D image compression framework is presented in Fig. 1. The network is based on the transformer architecture (Liu et al. 2021).

Figure 1: The overall network architecture of the proposed method. The input depth image and input RGB image are split into four channels. The framework consists of the feature extraction module, encoder, entropy model, decoder, and feature reconstruction module. Here, AE denotes arithmetic encoding, AD denotes arithmetic decoding, Q denotes the quantizer, "↑2" represents upsampling by a factor of two, and "↓2" represents downsampling by a factor of two.

Figure 2: The architecture of Intra-Modality Attention (IMA).

The input RGB
The weight and height of U and V channels are half of Y’s weight and height in RGB images. The depth images only retain Y channel information. We donate y, u, v, d as the input channels. First, the input channels are fed into feature extraction module to eliminate channel redundancy. The feature maps yex, uex, vex, dex are obtained from y, u, v, d respectively after feature extraction. Then we concat yex, uex, vex for the next stage input yuvex. In the encoder stage (analysis transform), a dual-branch network is presented for the input yuvex and dex. The proposed Cross-Modality Attention allows the latent representations to learn cross-modality information from each other. After the encoder stage, the latent representations yuva and da are sent to quantizer. The quantized latent representations [ yuva and c da are then sent into the conditional entropy model for accurate symbol probability prediction. In the decoder side (synthesis transform), [ yuva and c da are fed into the dualbranch decoder framework for feature restoration and upsampling. Feature maps yuvs and ds are obtained after the decoding process. At last, in the feature reconstruction module, the feature map yuvs are splited into Y,U,V channels yre, ure, vre. Detail restoration and reconstruction are conducted in the feature reconstruction module. We donate the feature extraction module, encoder, quantizer, decoder, feature reconstruction module as E(·), ga(·), Q(·), gs(·), R(·), respectively. The main encoding-decoding process except hyperprior can be formulated as: iex = E(i), ia = ga(iex), bia = Q(ia), is = gs(bia), ire = R(is), (1) where i represents one of the input y, u, v, d. Intra-Modality Attention In our proposed framework, we use Intra-Modality Attention in the feature extraction module and feature reconstruction module to reduce the channel redundancy. The framework of IMA is shown in Fig. 2. The main framework is based on two successive Swin Transformer Blocks (Liu et al. 2021). 
Given an input feature map with dimensions $H \times W \times C$, the window-based attention first reshapes the feature map to the size of $\frac{HW}{M^2} \times M^2 \times C$, where $M$ represents the window size. $\frac{HW}{M^2}$ windows are obtained from this operation. Then, self-attention is applied to each window. Three learnable weight matrices $W^Q, W^K, W^V$, shared across windows, are multiplied with the local feature map $F$ to obtain the query $Q$, key $K$, and value $V$, respectively. The process can be described as:

$$\{Q, K, V\} = \{FW^Q, FW^K, FW^V\}. \tag{2}$$

Then, the attention function calculates the dot-product of the query with each of the keys, and a relative position bias is added to the result. A softmax operator normalizes the result into attention scores. The above process can be defined as:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d}} + B\right)V, \tag{3}$$

where $d$ is the dimension and $B$ is the relative position bias. The main process of Intra-Modality Attention can be formulated as:

$$\begin{aligned} \hat{z}^l &= \text{W-MSA}\big(\mathrm{LN}(z^{l-1})(Q, K, V)\big) + z^{l-1}(Q), \\ z^l &= \mathrm{MLP}\big(\mathrm{LN}(\hat{z}^l)\big) + \hat{z}^l, \\ \hat{z}^{l+1} &= \text{SW-MSA}\big(\mathrm{LN}(z^l)(Q, K, V)\big) + z^l(Q), \\ z^{l+1} &= \mathrm{MLP}\big(\mathrm{LN}(\hat{z}^{l+1})\big) + \hat{z}^{l+1}, \end{aligned} \tag{4}$$

where $\hat{z}^l$ and $z^l$ are the output features of the (S)W-MSA and MLP blocks, respectively, $z^{l-1}$ is the input feature map, $\mathrm{LN}(\cdot)$ is the LayerNorm function, W-MSA(·) represents window-based multi-head self attention, and SW-MSA(·) represents shifted-window-based multi-head self attention.

Figure 3: The architecture of Cross-Modality Attention (CMA).
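As a toy illustration of Eqs. (2)-(3), the window partition and a single-head version of the windowed attention can be written in NumPy as follows. The shifted windows, multiple heads, MLP/LayerNorm, and learned relative position bias of the real Swin blocks are omitted, and all function names are illustrative:

```python
import numpy as np

def window_partition(x, M):
    # Split an H×W×C feature map into (HW/M^2) windows of M^2 × C tokens.
    H, W, C = x.shape
    x = x.reshape(H // M, M, W // M, M, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, M * M, C)

def window_self_attention(windows, Wq, Wk, Wv, B):
    # Eq. (3) per window: softmax(Q K^T / sqrt(d) + B) V, single head.
    out = []
    for F in windows:                 # F: (M^2, C) tokens of one window
        Q, K, V = F @ Wq, F @ Wk, F @ Wv   # Eq. (2)
        d = Q.shape[-1]
        logits = Q @ K.T / np.sqrt(d) + B  # B: (M^2, M^2) position bias
        A = np.exp(logits - logits.max(axis=-1, keepdims=True))
        A = A / A.sum(axis=-1, keepdims=True)  # row-wise softmax
        out.append(A @ V)
    return np.stack(out)
```

Restricting attention to M×M windows is what keeps the cost linear in the number of pixels rather than quadratic.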
The frameworks of IMA and CMA are similar; the main difference is that CMA adopts multi-head cross attention rather than multi-head self attention to achieve information interaction between modalities. Given the input RGB feature map $z^{l-1}_r$ and depth feature map $z^{l-1}_d$ in local windows, the complete process of the Cross-Modality Attention applied to $z^{l-1}_r$ can be defined as:

$$\begin{aligned} \hat{z}^l_r &= \text{W-MCA}\big(\mathrm{LN}(z^{l-1}_r)(Q_r, K_d, V_d)\big) + z^{l-1}_r(Q_r), \\ z^l_r &= \mathrm{MLP}\big(\mathrm{LN}(\hat{z}^l_r)\big) + \hat{z}^l_r, \\ \hat{z}^{l+1}_r &= \text{SW-MCA}\big(\mathrm{LN}(z^l_r)(Q_r, K_d, V_d)\big) + z^l_r(Q_r), \\ z^{l+1}_r &= \mathrm{MLP}\big(\mathrm{LN}(\hat{z}^{l+1}_r)\big) + \hat{z}^{l+1}_r, \end{aligned} \tag{5}$$

while the complete process of the Cross-Modality Attention applied to $z^{l-1}_d$ can be described as:

$$\begin{aligned} \hat{z}^l_d &= \text{W-MCA}\big(\mathrm{LN}(z^{l-1}_d)(Q_d, K_r, V_r)\big) + z^{l-1}_d(Q_d), \\ z^l_d &= \mathrm{MLP}\big(\mathrm{LN}(\hat{z}^l_d)\big) + \hat{z}^l_d, \\ \hat{z}^{l+1}_d &= \text{SW-MCA}\big(\mathrm{LN}(z^l_d)(Q_d, K_r, V_r)\big) + z^l_d(Q_d), \\ z^{l+1}_d &= \mathrm{MLP}\big(\mathrm{LN}(\hat{z}^{l+1}_d)\big) + \hat{z}^{l+1}_d, \end{aligned} \tag{6}$$

where $\hat{z}^l_r$ and $\hat{z}^l_d$ represent the output RGB feature map and depth feature map of (S)W-MCA, $z^l_r$ and $z^l_d$ are the output features of the MLP blocks, W-MCA(·) represents window-based multi-head cross attention, and SW-MCA(·) represents shifted-window-based multi-head cross attention.

Conditional Context-based Entropy Model

Traditional single-image compression methods (Ballé et al. 2018; Cheng et al. 2020; Chen et al. 2021) usually utilize hyperprior information as the conditional prior: the probability density at one spatial location can be estimated from the known probability density at other locations. But for RGB-D images with cross-modality information, the hyperprior alone cannot provide enough additional information. In our proposed method, we adopt a Conditional Context-based Entropy Model for more accurate probability estimation. The architecture of the Conditional Context-based Entropy Model is shown in Fig. 4.

Figure 4: The architecture of Conditional Context-based Entropy Model.
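A minimal single-head sketch of the cross-attention exchange in Eqs. (5)-(6): queries come from one branch while keys and values come from the other, with a residual connection per branch. The MLP, LayerNorm, shifted windows, and multi-head logic are omitted, and sharing one set of projection matrices across both directions is an assumption of this sketch, not the paper's design:

```python
import numpy as np

def cross_attention(Fq, Fkv, Wq, Wk, Wv):
    # Queries from one modality, keys/values from the other (W-MCA core).
    Q, K, V = Fq @ Wq, Fkv @ Wk, Fkv @ Wv
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)
    A = np.exp(logits - logits.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)  # row-wise softmax
    return A @ V

def cma_block(z_r, z_d, Wq, Wk, Wv):
    # One symmetric exchange with residuals: the RGB branch attends to
    # depth tokens and the depth branch attends to RGB tokens.
    z_r_new = cross_attention(z_r, z_d, Wq, Wk, Wv) + z_r
    z_d_new = cross_attention(z_d, z_r, Wq, Wk, Wv) + z_d
    return z_r_new, z_d_new
```

Because each branch keeps its own residual stream, the exchange injects complementary-modality information without overwriting the branch's own features.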
After the encoder stage, the latent representation is sent to the hyper encoder and hyper decoder for spatial distribution information. Besides, it is also fed to the Context Prediction Module (CPM) for context prior information. The output feature maps of the CPM are then sent to the Context Fusion Module (CFM) for cross-modality information aggregation. For the depth latent representation, we estimate the entropy parameters using context and spatial priors. For the more complex RGB latent representation, in addition to the former, we use supplementary cross-modality information to improve probability prediction accuracy. To be specific, we denote the likelihoods of the depth and RGB latent representations $\tilde{y}_d$ and $\tilde{y}_r$ as $q_{\tilde{y}_d|\tilde{z}_d}$ and $q_{\tilde{y}_r|\tilde{y}_d,\tilde{z}_r}$, where $\tilde{y}^i_d$ and $\tilde{y}^i_r$ represent the $i$-th elements of $\tilde{y}_d$ and $\tilde{y}_r$. The estimated probability mass functions (PMFs) are shown in Eq. (7):

$$\begin{aligned} q_{\tilde{y}_d|\tilde{z}_d}(\tilde{y}_d \mid \tilde{z}_d) &= \prod_i q_{\tilde{y}^i_d|\tilde{y}^{<i}_d,\tilde{z}_d}\big(\tilde{y}^i_d \mid \tilde{y}^{<i}_d, \tilde{z}_d\big), \\ q_{\tilde{y}_r|\tilde{y}_d,\tilde{z}_r}(\tilde{y}_r \mid \tilde{y}_d, \tilde{z}_r) &= \prod_i q_{\tilde{y}^i_r|\tilde{y}^{<i}_r,\tilde{y}_d,\tilde{z}_r}\big(\tilde{y}^i_r \mid \tilde{y}^{<i}_r, \tilde{y}_d, \tilde{z}_r\big). \end{aligned} \tag{7}$$

Context Prediction Module and Context Fusion Module

In order to further model the PMFs, the Context Prediction Module is adopted to accurately estimate context prior information. Mask Scaled Cosine Attention (MSCA) is adopted in the Context Prediction Module. In addition, we propose the Context Fusion Module, instead of a concatenation operation, to better aggregate cross-modality information. Mask Scaled Cross Cosine Attention (MSCCA) is integrated into the Context Fusion Module in order to achieve information interaction between modalities. To ensure the serial encoding-decoding order, we use a look-ahead mask mechanism (Alcorn and Nguyen 2021) in the transformer architecture. Rather than scaled dot-product self attention, we adopt scaled cosine attention, which makes the training of the model more stable.
In addition, log-space continuous relative position bias is used instead of linear-space relative position bias for better reconstruction quality on high-resolution images.

Loss Function
During the training phase, the loss function $L$ is described as follows:

$$L = R_r + R_d + \lambda (D_r + D_d), \tag{8}$$

where $D_r$ and $D_d$ are the weighted mean-squared errors (MSE) of the YUV channels and the depth channel. They can be formulated as:

$$D_d = \text{MSE}_d, \qquad D_r = (4\,\text{MSE}_y + \text{MSE}_u + \text{MSE}_v)/6. \tag{9}$$

$R_r$ and $R_d$ denote the bit rate cost, which can be calculated from the likelihoods of the latent representations. According to the configuration of the YUV420 color domain, the weighting ratio between Y, U, and V is 4:1:1.

Experiments
Datasets
SUN-RGBD The SUN-RGBD dataset (Song, Lichtenberg, and Xiao 2015) is a widely used computer vision dataset for indoor scene understanding and depth perception tasks. It provides RGB images, depth images, semantic segmentation labels, and other data for indoor environments, and is suitable for many different computer vision tasks. The dataset contains 10,000 RGB-D images. For training, 8,000 image pairs were randomly selected, while 1,000 image pairs were chosen for validation and an additional 1,000 image pairs were reserved for testing.

NYU-Depth V2 The NYU-Depth V2 dataset (Chodosh, Wang, and Lucey 2019) comprises video sequences capturing diverse indoor scenes recorded by the RGB and depth cameras of the Microsoft Kinect. It includes 1,449 annotated RGB images and depth images, drawn from 464 scenes in three cities. We divide the entire dataset into three parts: 1,159 image pairs for training, 145 image pairs for validation, and 145 image pairs for testing.

Experimental Details
Training Strategy We train the whole network jointly. The proposed network is implemented with CUDA-enabled PyTorch. We set different values of the hyperparameter $\lambda$ to control the bit rate; the $\lambda$ configuration follows CompressAI (Bégaint et al. 2020).
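The rate-distortion objective of Eqs. (8)–(9), with the 4:1:1 Y:U:V weighting controlled by $\lambda$, can be sketched as follows (the numeric inputs are made up for illustration):

```python
def rd_loss(bits_r, bits_d, mse_y, mse_u, mse_v, mse_depth, lam):
    # Rate-distortion objective of Eq. (8): L = R_r + R_d + lambda*(D_r + D_d),
    # with the RGB distortion weighted 4:1:1 over Y, U, V (Eq. (9)),
    # matching the YUV420 chroma-subsampling configuration.
    d_r = (4 * mse_y + mse_u + mse_v) / 6
    d_d = mse_depth
    return bits_r + bits_d + lam * (d_r + d_d)

# Hypothetical per-image rates (in bits) and channel-wise MSEs.
loss = rd_loss(bits_r=100.0, bits_d=40.0,
               mse_y=3.0, mse_u=1.0, mse_v=2.0,
               mse_depth=2.0, lam=0.01)
# d_r = (12 + 1 + 2) / 6 = 2.5, so loss = 140 + 0.01 * 4.5 = 140.045
```

Larger $\lambda$ values penalize distortion more heavily and therefore push the model toward higher bit rates.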
The Adam optimizer (Kingma and Ba 2014) is adopted in the training process. We initialize the learning rate to $10^{-4}$; it gradually decreases as the model is updated during training and eventually falls to $10^{-5}$. The batch size is set to 4. We train about 1000 epochs for each model, which takes about ten days on a Tesla V100. The input training data is cropped to a size of 256×256 for convenient model inference. The training data is mainly based on the SUN-RGBD dataset. When the model is tested on the NYU-Depth V2 dataset, we fine-tune the pretrained model on the NYU-Depth V2 training set for about 100 epochs.

Evaluation Metric We adopt PSNR as the evaluation metric. PSNR is an objective metric for evaluating image quality, reflecting the signal fidelity of an image. Additionally, we compare the Bjontegaard delta rate (BD-Rate) (Bjontegaard 2001) in order to obtain quantitative rate-distortion performance. Note that the PSNR and BD-Rate metrics are evaluated in the YUV420 domain.

Figure 5: Rate-distortion curves for RGB images (left) and depth images (right) tested on the SUN-RGBD dataset. (The curves plot YUV-PSNR against bits per pixel for Factorized, Hyperprior, Mbt2018, Cheng2020, BPG, WBA, TIC, Coarse2fine, SASIC, and the proposed method.)
Methods     | SUN-RGBD RGB | SUN-RGBD Depth | NYU-Depth V2 RGB | NYU-Depth V2 Depth
Factorized  |  76.871      |  29.656        |  67.362          |  21.290
Hyperprior  |  46.893      |  15.706        |  38.214          |   9.816
Mbt2018     |  41.027      |   9.762        |  28.963          |   6.372
Cheng2020   |   8.753      |  -1.727        |   9.242          |   0.004
WBA         |  -4.623      |  -7.962        |  -6.352          |  -9.251
TIC         |  -1.555      | -16.809        |  -3.375          | -12.378
Coarse2fine |   5.306      | -18.567        |   3.957          | -19.619
SASIC       |  -9.310      | -25.918        |  -8.118          | -30.462
Proposed    | -15.717      | -36.244        | -12.376          | -38.971

Table 1: BD-Rate (%) comparisons on the SUN-RGBD dataset and the NYU-Depth V2 dataset against BPG. The bold numbers represent the optimal results.

Baseline We compare our method against several well-performing single image methods (WBA (Zou, Song, and Zhang 2022), TIC (Lu et al. 2021), Coarse2Fine (Hu, Yang, and Liu 2020)), a stereo image compression method (SASIC (Wödlinger et al. 2022)), and some classic learning-based methods (Factorized (Ballé, Laparra, and Simoncelli 2016), Hyperprior (Ballé et al. 2018), Mbt2018 (Minnen, Ballé, and Toderici 2018), Cheng2020attention (Cheng et al. 2020)). In addition, the traditional single-modality image compression method BPG (Bellard 2018) is also compared with our proposed framework.

Experiment Results
Quantitative Results Table 1 presents the coding performance of the methods against BPG on the two datasets. A negative BD-Rate value indicates that the coding performance of an algorithm is better than that of the benchmark algorithm; otherwise, it is worse. To ensure a fair comparison, we employ the same training dataset and training methods as our model to train the other learning-based methods. It is evident that our proposed method attains the best RD performance.

Model                                              | BD-Rate (%)
Baseline                                           | 0 (reference)
Baseline + Conditional Context-based Entropy Model | -7.56
Baseline + Proposed CPM                            | -4.18
Baseline + Proposed CFM                            | -2.35

Table 2: Ablation study of each component in the conditional entropy model. Our entropy model is based on Mbt2018.

In comparison
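The BD-Rate numbers in Table 1 follow the standard Bjontegaard computation: fit log-rate as a polynomial in PSNR for each codec, integrate over the overlapping PSNR range, and report the average rate difference as a percentage. A NumPy sketch of that metric (the textbook procedure, not necessarily the authors' exact script; the sample curves are hypothetical):

```python
import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
    # Bjontegaard delta rate (Bjontegaard 2001): cubic fit of log-rate
    # vs. PSNR for each codec, integrated over the shared PSNR range.
    lr_ref, lr_test = np.log(rate_ref), np.log(rate_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    # Negative result: the test codec needs less rate for the same quality.
    return (np.exp(avg_diff) - 1) * 100

ref = np.array([0.2, 0.4, 0.8, 1.2])      # bpp of the anchor (e.g., BPG)
psnr = np.array([32.0, 35.0, 38.0, 40.0]) # corresponding PSNR values
zero = bd_rate(ref, psnr, ref, psnr)      # identical curves -> ~0 %
```

A codec that achieves the same PSNR at 10% less rate at every point yields a BD-Rate of about -10%.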
with the single image compression methods, the RD performance of our proposed method is significantly improved. To be specific, our approach offers over 10% gains against BPG on the BD-Rate metric for RGB images in both datasets. In addition, we plot the RD curves to further visualize the performance gap between the various methods. Fig. 5 shows the YUV-PSNR results for RGB images and depth images on the SUN-RGBD dataset. It indicates that our proposed framework surpasses the other frameworks, showcasing the best RD performance. Besides, from Fig. 5, it is obvious that the compression effect of the model on depth images is clearly better than that on RGB images.

Qualitative Results In order to show the compression effect of each model more intuitively, we visualize the compressed image of each model in Fig. 6. It is important to note that, for the sake of fairness, we try to keep all models compressing at the same bit rate. As depicted in Fig. 6, our method exhibits superior subjective visual quality while using a lower bit rate. Even after local details are enlarged, our method still retains the semantic information (such as the letters in the figure) of the original image.

Figure 6: Visual quality comparison for the RGB image compression results. (Panels: Original image; WBA, bpp=0.189; Hyperprior, bpp=0.197; TIC, bpp=0.171; Mbt2018, bpp=0.192; Coarse2fine, bpp=0.184; Cheng2020, bpp=0.179; SASIC, bpp=0.183; BPG, bpp=0.176; Proposed, bpp=0.163.)

Running Time and Complexity The number of parameters of our proposed model is 69.03 M. For an RGB-D image pair with an input resolution of 256×256, the FLOPs reach 6.93 Mil/pixel. When we test our proposed model on both datasets on a Tesla V100, the average encoding and decoding times are 11.696 s and 8.582 s, respectively. Compared with other learning-based models, our method introduces additional computational cost, but obtains a significant rate-distortion performance gain.

Ablation Study and Analysis
Case 1: Effectiveness of the conditional entropy model. As illustrated in Table 2, we verify the validity of each module in the entropy model through substitution; the study is conducted on the SUN-RGBD dataset. We find that each module contributes to enhancing the overall coding performance. In addition, we notice that the conditional context-based entropy model contributes most to the RD performance.

Case 2: Effectiveness of YUV domain compression. To verify that compression in the YUV domain is more efficient for RGB-D images, we design, as a comparison to the proposed framework, a framework whose original inputs are RGB images and depth images instead of four channels. To ensure the fairness of the comparison, we retain both IMA and CMA. The ablation experiments show that YUV domain compression has an obvious performance gain over the RGB domain compression algorithm when tested in the YUV domain.

Case 3: Effectiveness of IMA and CMA. We assess the efficacy of IMA and CMA, and the results are presented in Table 3. Each module improves the overall RD performance. It is noteworthy that the results are better when CMA is used alone than when IMA is used alone. These results imply the importance of cross-modality information interaction and cross-modality redundancy removal in RGB-D image compression.

Model                | BD-Rate (%)
Baseline             | 0 (reference)
Baseline + IMA       | -3.98
Baseline + CMA       | -5.16
Baseline + IMA + CMA | -6.71

Table 3: Ablation study of IMA and CMA. Note that after removing the CMA, the upper and lower branches no longer have information interaction in the encoder and decoder; the dual-branch structure of the model can therefore be changed to a single-branch structure.
Conclusion
In this paper, we propose a novel learning-based RGB-D image compression framework, which significantly improves the compression efficiency of RGB-D images. First, we convert input image pairs from the RGB domain to the YUV420 domain to eliminate spatial redundancy. Intra-Modality Attention (IMA) is designed into the feature extraction and feature reconstruction stages to reduce cross-channel redundancy. Then, Cross-Modality Attention (CMA) is adopted in the encoder and decoder to remove cross-modality redundancy. To effectively leverage the prior information between modalities, a Conditional Context-based Entropy Model is adopted for better symbol probability estimation. In the entropy model, we equip the Context Prediction Module (CPM) with Mask Scaled Cosine Attention (MSCA). A Context Fusion Module (CFM) is also proposed to aggregate cross-modality information. The comparative experiment results and the ablation study confirm the effectiveness of the proposed method.

Acknowledgments
This work was supported by the Natural Science Foundation of China (62271013, 62031013), the Shenzhen Fundamental Research Program (GXWD2020123116580700720200806163656003), the Shenzhen Science and Technology Program (JCYJ20230807120808017), and the CAAI-MindSpore Open Fund, developed on the OpenI Community (CAAI-XSJLJJ-2023-MindSpore07).

References
Alcorn, M. A.; and Nguyen, A. 2021. baller2vec++: A look-ahead multi-entity transformer for modeling coordinated agents. arXiv preprint arXiv:2104.11980.
Ballé, J.; Laparra, V.; and Simoncelli, E. P. 2016. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704.
Ballé, J.; Minnen, D.; Singh, S.; Hwang, S. J.; and Johnston, N. 2018. Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436.
Bégaint, J.; Racapé, F.; Feltman, S.; and Pushparaja, A. 2020.
CompressAI: a PyTorch library and evaluation platform for end-to-end compression research. arXiv preprint arXiv:2011.03029.
Bellard, F. 2018. BPG image format. http://bellard.org/bpg/. Accessed: 2023-08-01.
Bjontegaard, G. 2001. Calculation of average PSNR differences between RD-curves. VCEG-M33, Austin, TX, USA.
Bozic, A.; Zollhofer, M.; Theobalt, C.; and Nießner, M. 2020. DeepDeform: Learning non-rigid RGB-D reconstruction with semi-supervised data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7002–7012.
Chen, T.; Liu, H.; Ma, Z.; Shen, Q.; Cao, X.; and Wang, Y. 2021. End-to-end learnt image compression via non-local attention optimization and improved context modeling. IEEE Transactions on Image Processing, 30: 3179–3191.
Cheng, Z.; Sun, H.; Takeuchi, M.; and Katto, J. 2020. Learned image compression with discretized Gaussian mixture likelihoods and attention modules. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7939–7948.
Chodosh, N.; Wang, C.; and Lucey, S. 2019. Deep convolutional compressed sensing for LiDAR depth completion. In Computer Vision–ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers, Part I 14, 499–513. Springer.
Deng, X.; Yang, W.; Yang, R.; Xu, M.; Liu, E.; Feng, Q.; and Timofte, R. 2021. Deep homography for efficient stereo image compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1492–1501.
Gao, W.; Liao, G.; Ma, S.; Li, G.; Liang, Y.; and Lin, W. 2021. Unified information fusion network for multi-modal RGB-D and RGB-T salient object detection. IEEE Transactions on Circuits and Systems for Video Technology, 32(4): 2091–2106.
Gao, W.; Sun, S.; Zheng, H.; Wu, Y.; Ye, H.; and Zhang, Y. 2023. OpenDMC: An Open-Source Library and Performance Evaluation for Deep-learning-based Multi-frame Compression.
In Proceedings of the 31st ACM International Conference on Multimedia, 9685–9688.
Gao, W.; Tao, L.; Zhou, L.; Yang, D.; Zhang, X.; and Guo, Z. 2020. Low-rate image compression with super-resolution learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 154–155.
Hu, Y.; Yang, W.; and Liu, J. 2020. Coarse-to-fine hyperprior modeling for learned image compression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 11013–11020.
Huang, Z.; Sun, Z.; Duan, F.; Cichocki, A.; Ruan, P.; and Li, C. 2021. L3C-Stereo: Lossless compression for stereo images. arXiv preprint arXiv:2108.09422.
Ji, W.; Li, J.; Zhang, M.; Piao, Y.; and Lu, H. 2020. Accurate RGB-D salient object detection via collaborative learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVIII 16, 52–69. Springer.
Jiang, W.; Yang, J.; Zhai, Y.; Ning, P.; Gao, F.; and Wang, R. 2023. MLIC: Multi-Reference Entropy Model for Learned Image Compression. In Proceedings of the 31st ACM International Conference on Multimedia, 7618–7627.
Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Lee, J.-H.; Jeon, S.; Choi, K. P.; Park, Y.; and Kim, C.-S. 2022. DPICT: Deep progressive image compression using trit-planes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16113–16122.
Li, M.; Li, J.; Gu, S.; Wu, F.; and Zhang, D. 2022. End-to-End Optimized 360° Image Compression. IEEE Transactions on Image Processing, 31: 6267–6281.
Liao, G.; Gao, W.; Jiang, Q.; Wang, R.; and Li, G. 2020. MMNet: Multi-stage and multi-scale fusion network for RGB-D salient object detection. In Proceedings of the 28th ACM International Conference on Multimedia, 2436–2444.
Liu, J.; Wang, S.; and Urtasun, R. 2019. DSIC: Deep stereo image compression.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3136–3145.
Liu, N.; Zhang, N.; and Han, J. 2020. Learning selective self-mutual attention for RGB-D saliency detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13756–13765.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10012–10022.
Lu, M.; Guo, P.; Shi, H.; Cao, C.; and Ma, Z. 2021. Transformer-based image compression. arXiv preprint arXiv:2111.06707.
Minnen, D.; Ballé, J.; and Toderici, G. D. 2018. Joint autoregressive and hierarchical priors for learned image compression. Advances in Neural Information Processing Systems, 31.
Minnen, D.; and Singh, S. 2020. Channel-wise autoregressive entropy models for learned image compression. In 2020 IEEE International Conference on Image Processing (ICIP), 3339–3343. IEEE.
Shi, Y.; Xu, X.; Xi, J.; Hu, X.; Hu, D.; and Xu, K. 2022. Learning to detect 3D symmetry from single-view RGB-D images with weak supervision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4): 4882–4896.
Song, S.; Lichtenberg, S. P.; and Xiao, J. 2015. SUN RGB-D: A RGB-D scene understanding benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 567–576.
Tao, L.; Gao, W.; Li, G.; and Zhang, C. 2023. AdaNIC: Towards Practical Neural Image Compression via Dynamic Transform Routing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 16879–16888.
Tian, M.; Pan, L.; Ang, M. H.; and Lee, G. H. 2020. Robust 6D object pose estimation by learning RGB-D features. In 2020 IEEE International Conference on Robotics and Automation (ICRA), 6218–6224. IEEE.
Toderici, G.; O’Malley, S. M.; Hwang, S.
J.; Vincent, D.; Minnen, D.; Baluja, S.; Covell, M.; and Sukthankar, R. 2015. Variable rate image compression with recurrent neural networks. arXiv preprint arXiv:1511.06085.
Van Den Oord, A.; Kalchbrenner, N.; and Kavukcuoglu, K. 2016. Pixel recurrent neural networks. In International Conference on Machine Learning, 1747–1756. PMLR.
Wödlinger, M.; Kotera, J.; Xu, J.; and Sablatnig, R. 2022. SASIC: Stereo image compression with latent shifts and stereo attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 661–670.
Wu, Y.; and Gao, W. 2022. End-to-end lossless compression of high precision depth maps guided by pseudo-residual. In 2022 Data Compression Conference (DCC), 489–489. IEEE.
Wu, Y.; Qi, Z.; Zheng, H.; Tao, L.; and Gao, W. 2021. Deep image compression with latent optimization and piecewise quantization approximation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1926–1930.
Xiang, Y.; Xie, C.; Mousavian, A.; and Fox, D. 2021. Learning RGB-D feature embeddings for unseen object instance segmentation. In Conference on Robot Learning, 461–470. PMLR.
Zhao, Z.; Wang, S.; Jia, C.; Zhang, X.; Ma, S.; and Yang, J. 2018. Light field image compression based on deep learning. In 2018 IEEE International Conference on Multimedia and Expo (ICME), 1–6. IEEE.
Zhu, X.; Song, J.; Gao, L.; Zheng, F.; and Shen, H. T. 2022. Unified multivariate Gaussian mixture for efficient neural image compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17612–17621.
Zollhöfer, M.; Stotko, P.; Görlitz, A.; Theobalt, C.; Nießner, M.; Klein, R.; and Kolb, A. 2018. State of the art on 3D reconstruction with RGB-D cameras. In Computer Graphics Forum, volume 37, 625–652. Wiley Online Library.
Zou, R.; Song, C.; and Zhang, Z. 2022. The devil is in the details: Window-based attention for image compression.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17492–17501.
Any-Size-Diffusion: Toward Efficient Text-Driven Synthesis for Any-Size HD Images
Qingping Zheng1, 2*, Yuanfan Guo2*, Jiankang Deng2, Jianhua Han2, Ying Li1†, Songcen Xu2, Hang Xu2†
1Northwestern Polytechnical University 2Huawei Noah’s Ark Lab
zhengqingping2018@mail.nwpu.edu.cn, jiankangdeng@gmail.com, lybyp@nwpu.edu.cn, {guoyuanfan1, hanjianhua4, xusongcen}@huawei.com, chromexbjxh@gmail.com

Abstract
Stable diffusion, a generative model used in text-to-image synthesis, frequently encounters resolution-induced composition problems when generating images of varying sizes. This issue primarily stems from the model being trained on pairs of single-scale images and their corresponding text descriptions. Moreover, direct training on images of unlimited sizes is unfeasible, as it would require an immense number of text-image pairs and entail substantial computational expenses. To overcome these challenges, we propose a two-stage pipeline named Any-Size-Diffusion (ASD), designed to efficiently generate well-composed HD images of any size, while minimizing the need for high-memory GPU resources. Specifically, the initial stage, dubbed Any Ratio Adaptability Diffusion (ARAD), leverages a selected set of images with a restricted range of ratios to optimize the text-conditional diffusion model, thereby improving its ability to adjust composition to accommodate diverse image sizes. To support the creation of images at any desired size, we further introduce a technique called Fast Seamless Tiled Diffusion (FSTD) at the subsequent stage. This method allows for the rapid enlargement of the ASD output to any high-resolution size, avoiding seaming artifacts or memory overloads. Experimental results on the LAION-COCO and MM-CelebA-HQ benchmarks show that ASD can produce well-structured images of arbitrary sizes, cutting down the inference time by 2× compared to the traditional tiled algorithm. The source code is available at https://github.com/ProAirVerse/Any-Size-Diffusion.
Introduction
In text-to-image synthesis, Stable Diffusion (SD) (Rombach et al. 2022) has emerged as a significant advancement. Existing SD models (Ruiz et al. 2023; Meng et al. 2023) transform text aligned with image components into high-quality images, typically sized at 512 × 512 pixels. Despite having the ability to handle varying sizes, these models noticeably struggle with resolution changes, resulting in poor composition (e.g., improper cropping and unnatural appearance), a problem demonstrated in Figure 1(a). The root of this issue lies in the models being trained mainly on pairs of text and images of a uniform size, overlooking the complexities of handling images at multiple resolutions. Consequently, this leads to observed deficiencies in image composition.

*These authors contributed equally. †Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Resolution-induced Poor Composition. Given the text "A cute teddy bear in front of a plain white wall. The teddy bear has a warm, brown fur that looks soft and fluffy, sitting on the brown wooden tabletop.", (a) SD2.1 and (b) MD2.1, a MultiDiffusion model, raise poor composition issues in red boxes when synthesizing images of varying sizes (900 × 1024, 1024 × 512, 1024 × 1024), as opposed to (c) our ASD.

In pursuit of generating well-structured images at arbitrary aspect ratios, guided by textual descriptions, the MultiDiffusion methodology (Bar-Tal et al. 2023) leverages a pretrained text-conditional diffusion model (e.g., Stable Diffusion) as a reference model and controls image synthesis through several reference diffusion processes. Remarkably, the entire process is realized without requiring further training or fine-tuning. While efficient, it does not completely resolve the limitations associated with the reference model's handling of multi-resolution images.
As a result, the produced images may exhibit suboptimal compositional quality. The underlying reason is also tied to the reference model's training on images constrained to a single-scale size, as illustrated in Figure 1(b).

A direct and appealing solution to the problem is to train the SD model to cope with every possible image size. Yet this approach encounters an immediate and significant barrier: the infinite diversity of image ratios, which makes it practically unfeasible. Furthermore, it is challenging to gather an extensive collection of high-resolution images and corresponding text pairs. Even with a plethora of high-quality datasets available, the intrinsic pixel-based nature of SD requires substantial computational resources, particularly when dealing with high-resolution images of various sizes. The problem is further aggravated when considering the use of megapixel images for SD training, as this involves extensive repeated function evaluations and gradient computations in the high-dimensional space of RGB images (Ho, Jain, and Abbeel 2020). Even when a trained model is ready, the inference step is also time-consuming and memory-intensive. Through empirical observation, we have found that attempts to generate 4K HD images using the SD model trigger out-of-memory errors when executed on a GPU with a 32 GB capacity.

The key insight of this paper is to introduce a pioneering Any-Size-Diffusion (ASD) model, executed in two stages, which can synthesize high-resolution images of arbitrary sizes from text prompts. In its dual-phase approach, our ASD not only efficiently handles resolution-induced poor composition but also successfully circumvents out-of-memory challenges. At the outset, we are faced with the complexity of accommodating all conceivable image sizes, a challenge that might seem intractable.
To address this, in the first stage, we introduce a multi-aspect ratio training strategy that operates within a well-defined, manageable range of ratios. This strategy is used to optimize our proposed Any Ratio Adaptability Diffusion (ARAD) model. As a result, it enables the production of well-composed images that are adaptable to any size within a specified range, while also ensuring a reduced consumption of GPU memory. To yield images that can fit any size, in the second stage, we propose an additional method called Fast Seamless Tiled Diffusion (FSTD) to magnify the image output originating from the preceding ARAD. Contrary to the existing tiled diffusion methods (Álvaro Barbero Jiménez 2023), which address the seaming issue but compromise on the speed of inference, our proposed FSTD designs an implicit overlap within the tiled sampling diffusion process. This innovation manages to boost inference speed without the typical seaming problems, achieving high-fidelity image magnification. To sum up, the contributions of this paper are as follows:
• We are the first to develop the Any-Size-Diffusion (ASD) model, a two-phase pipeline that synthesizes high-resolution images of any size from text, addressing both composition and memory challenges.
• We introduce a multi-aspect ratio training strategy, implemented within a defined range of ratios, to optimize ARAD, allowing it to generate well-composed images adaptable to any size within a specified range.
• We propose an implicit overlap in FSTD to enlarge images to arbitrary sizes, effectively mitigating the seaming problem and simultaneously accelerating the inference time by 2× compared to the traditional tiled algorithm.

Related Work
Stable Diffusion. Building upon the foundations laid by the Latent Diffusion Model (LDM) (Rombach et al. 2022), diffusion models (Ho, Jain, and Abbeel 2020; Song et al.
2021) have achieved substantial success across various domains, including text-to-image generation (Nichol et al. 2022; Ramesh et al. 2022; Saharia et al. 2022), image-to-image translation (Dhariwal and Nichol 2021; Nichol and Dhariwal 2021), and multi-modal generation (Ruan et al. 2023). Owing to their robust ability to capture complex distributions and create diverse, high-quality samples, diffusion models excel over other generative methods (Goodfellow et al. 2014). In the field, Stable Diffusion (SD) (Rombach et al. 2022) has emerged as a leading model for generating photo-realistic images from text. While adept at producing naturalistic images at certain dimensions (e.g., 512×512), it often yields unnatural outputs at sizes beyond this threshold. This constraint principally originates from the fact that existing stable diffusion models are exclusively trained on images of a fixed size, leading to a deficiency in composition quality at other sizes.

Diffusion-based Image Super-Resolution. The objective of image super-resolution is to infer a high-resolution image from a corresponding low-resolution counterpart. The utilization of generative models to magnify images often omits specific assumptions about degradation, leading to challenging situations in real-world applications. Recently, diffusion-based methods (Sahak et al. 2023; Saharia et al. 2023; Li et al. 2023; Ma et al. 2023) have shown notable success in real-world SR by exploiting the generative priors within these models. Though effective, these approaches introduce considerable computational complexity during training, with computational demands increasing quadratically as the latent space size grows. An optimized method, known as StableSR (Wang et al. 2023), was developed to enhance performance while reducing GPU memory consumption. However, this method can become time-inefficient when processing images divided into numerous overlapping regions.
Method
To resolve the issue of resolution-induced poor composition when creating high-fidelity images of various sizes from any text prompt, we propose a straightforward yet efficient approach called Any-Size-Diffusion (ASD). This approach simplifies the process of text-to-image synthesis by breaking it down into two stages (see Figure 2).
• Stage-I, termed Any Ratio Adaptability Diffusion (ARAD), trains on images of multiple aspect ratios and generates an image conditioned on a textual description and noise size, avoiding poor composition issues.
• Stage-II, known as Fast Seamless Tiled Diffusion (FSTD), magnifies the image from Stage-I to a predetermined larger size, ultimately producing a high-resolution synthesized image, adjustable to any specified size.

Figure 2: The Any-Size-Diffusion (ASD) pipeline, including: 1) Stage-I, which translates text into images, adapting to various aspect ratios, and 2) Stage-II, which is responsible for transforming low-resolution images from Stage-I into high-resolution versions of any specified size. For procedure (c), the implicit overlap in tiled sampling, only the solid green line region is sent to the UNetModel for the current denoising step; the dashed green arrow represents regions that are directly copied from the preceding denoised latents, potentially enhancing efficiency and consistency within the overall process.
Pipeline
Given a user-defined text prompt (e.g., "Gandalf from the lord of the rings") and a noise size, ARAD employs the pretrained text encoder (Cherti et al. 2023) to generate a contextual embedding $\tau_\theta(y)$ and initializes random noise $\epsilon$ at the base resolution. The noisy input, conditioned on the textual embedding $p(\epsilon|y)$, is progressively denoised by the UNetModel (Cherti et al. 2023). This process is iterated $T$ times, leveraging the DDPM algorithm (Song, Meng, and Ermon 2020) to continuously remove noise and restore the latent representation $z$. Ultimately, a decoder $\mathcal{D}$ converts the denoised latent back into an image $I \in \mathbb{R}^{H \times W \times 3}$, where $H$ and $W$ denote the image's height and width, respectively.

Afterward, FSTD takes the resulting image as input and performs inference based on image-conditional diffusion (Wang et al. 2023). In detail, the image is first magnified to a specified size. A pretrained visual encoder $\mathcal{E}$ is employed to map the magnified image $I' \in \mathbb{R}^{H' \times W' \times 3}$ into a latent representation $z = \mathcal{E}(I')$. Noise drawn from a normal distribution $\epsilon \sim \mathcal{N}(0, 1)$ is then added to it, yielding the noisy latent variable $z' = \mathcal{A}(z)$. The noisy latent, conditioned on the image itself $p(z'|z)$, undergoes progressive iterations by the UNetModel, using our proposed tiled sampling, for $T$ cycles. Lastly, the decoder $\mathcal{D}$ is employed to project the denoised latent variable into the final output, effectively transforming the latent space back into the image domain.

Any Ratio Adaptability Diffusion (ARAD)
In this stage, ARAD is proposed to give the model the capability of generating images adjustable to varying aspect ratios, resolving the issue of resolution-induced poor composition. This is mainly achieved by our designed multi-aspect ratio training strategy.

Multi-aspect ratio training. Instead of directly training on the original image and text pairs, we employ our aspect-ratio strategy to map the original image into an image with a specific ratio.
To be more precise, we define a set of ratios {r_1, r_2, ..., r_n}, each corresponding to a specific size in {s_1, s_2, ..., s_n}, where n denotes the number of predefined aspect ratios. For each training image x ∈ R^{H×W×3}, we calculate the image ratio r = H/W. This ratio is then compared with each predefined ratio, and the one with the smallest distance is selected as the reference. The index m is determined by

m = arg min_i f(i), where f(i) = |r_i − r|, i = 1, ..., n, (1)

i.e., f(i) measures the distance between the current ratio and the i-th predefined ratio. Therefore, if the image ratio is closest to the m-th predefined ratio, the training image is resized to the corresponding size s_m.

Figure 3: Comparison of various tiling strategies: (a) without overlapping, (b) with explicit overlapping, and (c) with implicit overlapping. Green tiles are explicit overlaps, and the orange tile is our implicit overlap at step t−1.

Forward ARAD process. During training, a pair consisting of an image and its corresponding text (x, y) is processed, where x represents an image in the RGB space R^{H×W×3} and y denotes the associated text. A fixed visual encoder E transforms the resized image into a spatial latent code z. Meanwhile, the corresponding text is converted into a textual representation τ_θ(y) via OpenCLIP (Cherti et al. 2023). For the total steps T, conditional distributions of the form p(z_t|y), t = 1···T, can be modeled using a denoising autoencoder ε_θ(z_t, t, y). Consequently, the proposed ARAD can be learned using the objective

L_ARAD = E_{E(x), y, ε∼N(0,1), t} [ ‖ε − ε_θ(z_t, t, τ_θ(y))‖²₂ ]. (2)
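Returning to the multi-aspect ratio training above: the matching of Eq. (1) is a nearest-neighbour lookup over the predefined ratio set. A minimal sketch (the bucket list here is illustrative, not the paper's actual set of predefined sizes):

```python
def match_aspect_ratio(h, w, buckets):
    """Return the bucket size s_m whose ratio r_m = H/W is closest to r = h/w (Eq. 1)."""
    r = h / w
    # arg min_i |r_i - r| over the predefined ratios
    m = min(range(len(buckets)), key=lambda i: abs(buckets[i][0] / buckets[i][1] - r))
    return buckets[m]

# hypothetical (height, width) buckets; the paper predefines 9 such ratios
BUCKETS = [(512, 512), (512, 1024), (1024, 512), (576, 768), (768, 576)]
print(match_aspect_ratio(720, 1280, BUCKETS))  # a 9:16-ish image maps to (512, 1024)
```

Every training image is thus resized to the bucket whose aspect ratio it most nearly matches, so the model only ever sees a small, fixed set of resolutions during fine-tuning.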
Fast Seamless Tiled Diffusion (FSTD)
In the second stage, we propose FSTD, a training-free approach built on StableSR (Wang et al. 2023) that amplifies the ARAD-generated image to any preferred size. To achieve efficient image super-resolution without heavy computational demands during inference, we devise an implicit overlap technique within the tiled sampling method.
Tiled sampling. For clarity, consider an upscaled image I ∈ R^{H′×W′×3}, partitioned into M small tiles {P_i ∈ R^{h×w×3} | 1 ≤ i ≤ M}, where w and h denote the width and height of each tile. We first encode each tile P_i with an encoder E and add random noise, generating a set of noisy latent representations Z = {Z_i = E(P_i) + ε_i | ε_i ∼ N(0, 1), 1 ≤ i ≤ M}. Each noisy tile is then processed by the UNetModel, conditioned on the original tile, for T steps, resulting in a set of denoised latents Z′ = {Z′_i | 1 ≤ i ≤ M}. Finally, a decoder f_D converts them back into image space, yielding the reconstructed image

I′ = {P′_i ∈ R^{h×w×3} | P′_i = f_D(Z′_i), 1 ≤ i ≤ M}. (3)

Herein, P′_i represents the i-th tile decoded from its corresponding denoised latent tile. However, a seaming problem emerges when any two tiles in the set are disjoint, as depicted in Figure 3(a). To tackle this, we introduce overlaps between neighboring tiles that share common pixels (Figure 3(b)). While increasing the explicit overlap can effectively mitigate this issue, it substantially prolongs denoising: the inference time grows quadratically with the number of overlapping patches. It is therefore of practical importance to strike a balance between inference time and the amount of overlap.
Implicit overlap in tiled sampling. To speed up inference while avoiding the seaming problem, we propose an implicit overlap in tiled sampling.
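The tile partition, and the cost of explicit overlap, can be illustrated with a toy partitioner (a sketch; the tile and stride sizes are arbitrary choices, not the paper's settings):

```python
import numpy as np

def split_tiles(latent, th, tw, stride_h=None, stride_w=None):
    """Partition a (H, W, C) latent into tiles of size (th, tw).
    With stride < tile size, neighbouring tiles overlap explicitly."""
    sh, sw = stride_h or th, stride_w or tw
    H, W, _ = latent.shape
    return [latent[y:y + th, x:x + tw]
            for y in range(0, H - th + 1, sh)
            for x in range(0, W - tw + 1, sw)]

z = np.zeros((128, 128, 4))
print(len(split_tiles(z, 32, 32)))          # 16 disjoint tiles (4 x 4 grid)
print(len(split_tiles(z, 32, 32, 16, 16)))  # 49 tiles with 16-pixel overlap (7 x 7 grid)
```

Halving the stride roughly doubles the tile count along each axis, so the UNet's per-step workload grows quadratically with the overlap — the cost that motivates the implicit scheme below.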
As depicted in Figure 3(c), the magnified image is divided into L non-overlapping tiles, and we keep the number of disjoint noisy latent variables constant during the reverse sampling process. Before each sampling step, we apply a random offset to each tile, effectively splitting Z into two components: Z^s (the shifted region with tiling) and Z^c (the constant region without tiling), i.e., Z = Z^s ∪ Z^c. Note that at the initial time step, Z^c = Ø. At each sampling step, the shifted part Z^s is a collection of L disjoint tiles, Z^s = {Z^s_i | 1 ≤ i ≤ L}, where each Z^s_i denotes a shifted tile that changes dynamically throughout the sampling process. Each such tile is obtained as Z^s_i = Z_{(x_i+Δx_i, y_i+Δy_i)} for 1 ≤ i ≤ L, where Δx_i and Δy_i denote the random offsets applied to tile Z^s_i in the preceding step. As for the constant region Z^c, its values are copied from the corresponding latent variables of the previous sampling step. Note that after the first time step, Z^c is non-empty, i.e., Z^c ≠ Ø. This approach ensures implicit overlap during tiled sampling, effectively solving the seaming issue.
Experiments
Experimental Settings
Datasets. The ARAD of our ASD is trained on a subset of LAION-Aesthetic (Schuhmann 2022) with 90k text-image pairs in different aspect ratios. It is evaluated on MA-LAION-COCO with 21,000 images across 21 ratios (selected from LAION-COCO (Schuhmann et al. 2022)), and MA-COCO built from MS-COCO (Lin et al. 2014), containing 2,100 images for those ratios. A test split of MM-CelebA-HQ (Xia et al. 2021), consisting of 2,824 face image pairs in both low and high resolutions, is employed to evaluate our FSTD and the whole pipeline.
Implementation Details. Our proposed method is implemented in PyTorch (Paszke et al. 2019).
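The implicit-overlap bookkeeping described above can be sketched with boolean masks, assuming the shifted region Z^s is the largest offset-aligned tile grid and everything else is carried over (this masking is one reading of the scheme, not the authors' code):

```python
import numpy as np

def shifted_partition(H, W, tile, dx, dy):
    """Boolean masks (Zs, Zc): Zs is the offset-aligned region that is re-tiled and
    denoised at this step; Zc is the remainder, copied from the previous step's latent."""
    zs = np.zeros((H, W), dtype=bool)
    ny, nx = (H - dy) // tile, (W - dx) // tile   # largest tile grid after the shift
    zs[dy:dy + ny * tile, dx:dx + nx * tile] = True
    return zs, ~zs

# step t = T: no offset has been applied yet, so Zc is empty
zs0, zc0 = shifted_partition(128, 128, 32, 0, 0)
print(int(zc0.sum()))  # 0

# a later step: a random offset leaves a border region Zc that is simply carried over
rng = np.random.default_rng(0)
dx, dy = (int(v) for v in rng.integers(1, 32, size=2))
zs, zc = shifted_partition(128, 128, 32, dx, dy)
print(bool(zc.sum() > 0), bool(zs.sum() + zc.sum() == 128 * 128))  # True True
```

Because successive steps use different offsets, every seam location eventually falls inside some tile's interior, which is how the scheme removes seams without ever denoising the same pixel twice in one step.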
A multi-aspect ratio training method is leveraged to finetune ARAD (using LoRA (Hu et al. 2021)) for 10,000 steps with a batch size of 8.

Figure 4: Qualitative comparison of our proposed ASD method with other baselines, including (a) SR-Plus, (b) SR-Tile, (c) SR-Tile-Plus, (d) AR-Plus, (e) AR-Tile and (f) our proposed ASD. The yellow box indicates the resolution-induced poor composition. The orange box indicates better composition. The red solid line box is the zoom-in of the red dashed line box, aiming to inspect if there are any seaming issues. Our ASD outperforms others in both composition quality and inference time.

Exp. | Stage-I Ratio | Stage-II Tile | Overlap | Composition | Max Resolution | Seam | FID↓ | IS↑ | CLIP↑
(a) | S | ✗ | ✗ | Poor | 2048² | N | 118.83 | 2.11 | 27.22
(b) | S | ✓ | ✗ | Poor | 18432² | Y | 111.96 (-6.87) | 2.46 (+0.35) | 27.46 (+0.24)
(c) | S | ✓ | ✓ | Poor | 18432² | N | 111.06 (-7.77) | 2.53 (+0.42) | 27.55 (+0.33)
(d) | A | ✗ | ✗ | Excellent | 2048² | N | 92.80 (-26.03) | 3.97 (+1.86) | 29.15 (+1.93)
(e) | A | ✓ | ✗ | Excellent | 18432² | Y | 85.66 (-33.17) | 3.98 (+1.87) | 29.17 (+1.95)
(f) | A | ✓ | ✓ | Excellent | 18432² | N | 85.34 (-33.49) | 4.04 (+1.93) | 29.23 (+2.01)

Table 1: Quantitative evaluation against baselines: (a) SR-Plus, (b) SR-Tile, (c) SR-Tile-Plus, (d) AR-Plus, (e) AR-Tile and (f) our ASD. ‘S’ and ‘A’ denote single and arbitrary ratios, respectively. All tests run on a 32G GPU. Notably, under the same GPU memory, our ASD achieves at least 9× higher resolution than the original SD model.
We use Adam (Kingma and Ba 2014) as the optimizer with a learning rate of 1.0e-4. Our FSTD (the second-stage model) is training-free and is built upon StableSR (Wang et al. 2023). During inference, a 50-step DDIM sampler (Song, Meng, and Ermon 2020) is adopted in ARAD to generate the image according to the user-defined aspect ratio. In the second stage, we follow StableSR and use a 200-step DDPM sampler (Ho, Jain, and Abbeel 2020) for FSTD.
Evaluation metrics. For benchmarks, we employ common perceptual metrics to assess generative text-to-image models, including FID (Heusel et al. 2017), IS (Salimans et al. 2016) and the CLIP score (Radford et al. 2021). IS correlates with human judgment, and it is important to evaluate it on a sufficiently large number of samples. FID captures the disturbance level well and is more consistent with the noise level than IS. The CLIP score measures the cosine similarity between the text prompt and the image embeddings. In addition, extra metrics (PSNR, SSIM (Wang et al. 2004) and LPIPS (Zhang et al. 2018)) are employed to assess the super-resolution ability of the second stage of our ASD. PSNR and SSIM are evaluated on the luminance channel in the YCbCr color space. LPIPS quantifies the perceptual differences between images.
Baseline Comparisons
• SR-Plus: employs SD 2.1 (Rombach et al. 2022) for the direct synthesis of text-guided images of varying sizes.
• SR-Tile: utilizes SD 2.1 for initial image generation, magnified using StableSR (Wang et al. 2023) with non-overlapping tiled sampling (Álvaro Barbero Jiménez 2023).
• SR-Tile-Plus: a two-stage method that initiates with SD 2.1 and refines the output using our proposed FSTD, facilitating the synthesis of images of arbitrary dimensions.
• AR-Plus: deploys our proposed ARAD model for direct, text-driven image synthesis across a spectrum of sizes.
• AR-Tile: commences with our ARAD model for initial image generation, followed by magnification via StableSR employing non-overlapping tiled sampling.
• ASD: our proposed framework, integrating ARAD in Stage-I and FSTD in Stage-II, designed to synthesize images with customizable dimensions.
Quantitative evaluation. As reported in Table 1, our proposed ASD method consistently outperforms the baseline methods. Specifically, our ASD model shows a 33.49 reduction in FID score compared to (a) SR-Plus, and increases of 1.93 and 2.01 in IS and CLIP scores, respectively. On a 32GB GPU, SR-Plus fails to synthesize images exceeding 2048² resolution. In contrast, our ASD effectively mitigates this constraint, achieving at least 9× higher resolution than SR-Plus under identical hardware conditions. Additionally, we have the following observations: (i) Utilizing multi-aspect ratio training results in notable improvements across various comparisons, specifically reducing FID scores from 118.83 to 92.80 in (a)-(d), 111.96 to 85.66 in (b)-(e), and 111.06 to 85.34 in (c)-(f).

Method | MA-LAION-COCO FID↓ / IS↑ / CLIP↑ | MA-COCO FID↓ / IS↑ / CLIP↑
SD2.1 | 14.32 / 31.25 / 31.92 | 42.50 / 30.20 / 31.63
MD2.1 | 14.57 / 28.95 / 32.11 | 43.25 / 28.92 / 30.92
ARAD | 13.98 / 34.03 / 32.60 | 40.28 / 29.77 / 31.87

Table 2: Comparison of our ARAD and other diffusion-based approaches. We compare their compositional ability in synthesizing images across 21 different sizes.

Figure 5: Comparison of visual results. Composition quality of text-to-image synthesis using (a) SD2.1, Stable Diffusion 2.1, (b) MD2.1, a MultiDiffusion based on SD 2.1, and (c) our ARAD. Color boxes indicate poor composition.

(ii) Introducing a tiled algorithm at
the second stage enables the generation of images with unlimited resolution, while simultaneously enhancing performance, e.g., FID scores improve from 118.83 to 111.96 and from 92.80 to 85.66 when comparing (a)-(b) and (d)-(e). (iii) Implementing overlap in tiled sampling effectively addresses the seaming issue, as evidenced by the comparisons between (b)-(c) and (e)-(f).
Qualitative comparison. As depicted in Fig. 4, the images synthesized by ASD exhibit superior composition quality (e.g., proper layout) when compared to other baseline methods. Additionally, ASD can generate 4K HD images that are not only well-composed but also free from seaming artifacts. Specifically, when guided by a text description, the AR-Plus method generates a more complete castle than SR-Plus, as demonstrated in Fig. 4(a) vs. Fig. 4(d). Compared with SR-Plus, AR-Tile can produce realistic images but is hindered by seaming issues (see Fig. 4(e)). In contrast, Fig. 4(f) shows that our ASD successfully eliminates seaming artifacts and ensures well-composed images, while minimizing GPU memory usage.
ARAD Analysis
Impact of ARAD. Table 2 highlights the performance of ARAD, which achieves 13.98, 34.03, and 32.60 in FID, IS, and CLIP, respectively, on MA-LAION-COCO, outperforming the original SD 2.1 and MultiDiffusion (Bar-Tal et al. 2023) (MD2.1). This superiority is further illustrated in Fig. 5. While SD2.1 and MD2.1 exhibit composition problems, our ASD produces images that are consistent with user-defined textual descriptions. For example, MD2.1 incorrectly generates two overlapped blue suits from a prompt for a white suit, a mistake not present in our ASD’s results.
Influence of the number of aspect ratios. Table 3 reveals the model’s performance across various aspect ratios. The data show that increasing the number of aspect ratios in the training dataset improves performance, with FID scores falling from 14.36 to 13.98. A comparison between 3 and 5 aspect ratios highlights a clear improvement, as the FID score drops from 14.36 to 14.10. Further increasing the aspect ratios continues this trend, reducing the FID score to 13.98. This pattern emphasizes the importance of aspect ratios in enhancing model performance.

Types | MA-LAION-COCO FID↓ / IS↑ / CLIP↑ | MA-COCO FID↓ / IS↑ / CLIP↑
3 | 14.36 / 32.53 / 32.38 | 41.28 / 29.58 / 31.71
5 | 14.10 / 33.61 / 32.58 | 40.25 / 29.63 / 31.80
All | 13.98 / 34.03 / 32.60 | 40.28 / 29.77 / 31.87

Table 3: Performance of ARAD trained on various numbers of aspect ratios. “All” denotes the 9 aspect ratios.

Overlap & Offset | PSNR↑ | SSIM↑ | LPIPS↓ | FID↓ | Time (1/fps)
w/o overlap | 26.89 | 0.76 | 0.09 | 22.80 | 75.08s
explicit | 27.49 | 0.76 | 0.09 | 24.15 | 166.8s
implicit & fixed | 26.83 | 0.75 | 0.08 | 21.37 | 75.01s
implicit & random | 27.53 | 0.76 | 0.08 | 22.25 | 75.19s

Table 4: The versatility of tiled sampling in FSTD, evaluated on MM-CelebA-HQ. “w/o”, “explicit”, and “implicit” denote non-overlapping, explicit, and implicit overlap in tiled sampling, respectively; “fixed” and “random” refer to different tile offset strategies. Here, the overlap of two adjacent tiles is 32×32.
FSTD Analysis
Importance of tiles with overlap. The first two lines of Table 4 compare the perceptual performance of explicit overlap and non-overlap tiled sampling. The explicit overlap exhibits superior performance (e.g., 27.49 vs. 26.89 PSNR). However, non-overlap tiled sampling offers approximately 2× faster inference than explicit overlap. Despite this speed advantage, Fig. 6(b) exposes the seaming problem associated with non-overlap tiled sampling, highlighting the trade-off between performance and efficiency.
Implicit vs. explicit overlap. Table 4 and Fig. 6(c)-(d) confirm that the use of implicit overlap in tiled sampling yields the best performance across both perceptual metrics and visual representation.
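As stated in the metrics section, the PSNR values in Table 4 are computed on the luminance channel; a generic sketch of that computation (BT.601 luma weights assumed, not taken from the paper):

```python
import numpy as np

def psnr_y(img1, img2, max_val=255.0):
    """PSNR on the luminance (Y) channel, derived from RGB via BT.601 weights."""
    w = np.array([0.299, 0.587, 0.114])
    y1, y2 = img1 @ w, img2 @ w
    mse = np.mean((y1 - y2) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

a = np.full((8, 8, 3), 100.0)
b = np.full((8, 8, 3), 110.0)   # uniform error of 10 on every channel
print(round(psnr_y(a, b), 2))   # 28.13 dB, since the MSE on Y is 10^2
```

Higher is better: an identical pair yields infinite PSNR, and each 10× reduction in luminance MSE adds 10 dB.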
Further examination of the last column in Table 4 demonstrates that the inference time for implicit overlap in tiled sampling is nearly equivalent to that of tiling without overlap. Moreover, implicit overlap reduces the inference time from 166.8s to approximately 75.0s. This validates our FSTD’s ability to balance quality and inference time.

Figure 6: The ×4 super-resolved results of different methods, including (a) zoomed LR (bicubic), tiled diffusion with (b) non-overlap and (c) explicit overlap tiles, and (d) our FSTD, which uses implicit overlap in tiled sampling.

Effect of various offset strategies. The last two lines of Table 4 demonstrate the advantage of using a random offset in implicit-overlap tiled sampling. Specifically, when comparing the fixed and random offset methods, the random offset yields a PSNR of 27.53, outperforming the fixed offset at 26.83. The results for the other perceptual metrics and visual indicators are nearly identical, further supporting the preference for a random offset.
Conclusion
In this study, we address the challenge of resolution-induced poor composition in creating high-fidelity images from any text prompt. We propose Any Size Diffusion (ASD), a method consisting of ARAD and FSTD. Trained with multi-aspect ratio images, ARAD generates well-composed images at specific sizes. FSTD, utilizing implicit overlap in tiled sampling, enlarges the previous-stage output to any size while reducing GPU memory consumption. Our ASD is validated both quantitatively and qualitatively on real-world scenes.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (62271400), and the Shaanxi Provincial Key R&D Program, China (2023-GHZD-02).
References
Bar-Tal, O.; Yariv, L.; Lipman, Y.; and Dekel, T. 2023. MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation. In ICML.
Cherti, M.; Beaumont, R.; Wightman, R.; Wortsman, M.; Ilharco, G.; Gordon, C.; Schuhmann, C.; Schmidt, L.; and Jitsev, J. 2023. Reproducible Scaling Laws for Contrastive Language-Image Learning. In CVPR, 2818–2829.
Dhariwal, P.; and Nichol, A. 2021. Diffusion Models Beat GANs on Image Synthesis. NIPS, 34: 8780–8794.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative Adversarial Nets. In NIPS, volume 27.
Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. NIPS, 30.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion Probabilistic Models. NIPS, 33: 6840–6851.
Hu, E. J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; and Chen, W. 2021. LoRA: Low-Rank Adaptation of Large Language Models. arXiv preprint arXiv:2106.09685.
Kingma, D. P.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.
Li, R.; Zhou, Q.; Guo, S.; Zhang, J.; Guo, J.; Jiang, X.; Shen, Y.; and Han, Z. 2023. Dissecting Arbitrary-scale Super-resolution Capability from Pre-trained Diffusion Generative Models. arXiv preprint arXiv:2306.00714.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. In ECCV, 740–755.
Ma, Y.; Yang, H.; Yang, W.; Fu, J.; and Liu, J. 2023. Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image Super-Resolution. arXiv.
Meng, C.; Rombach, R.; Gao, R.; Kingma, D.; Ermon, S.; Ho, J.; and Salimans, T. 2023. On Distillation of Guided Diffusion Models. In CVPR, 14297–14306.
Nichol, A. Q.; and Dhariwal, P.
2021. Improved Denoising Diffusion Probabilistic Models. In ICML, 8162–8171.
Nichol, A. Q.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2022. GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. In ICML, 16784–16804.
Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. NIPS, 32.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; Krueger, G.; and Sutskever, I. 2021. Learning Transferable Visual Models From Natural Language Supervision. In ICML, 8821–8831.
Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv preprint arXiv:2204.06125.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution Image Synthesis with Latent Diffusion Models. In CVPR, 10684–10695.
Ruan, L.; Ma, Y.; Yang, H.; He, H.; Liu, B.; Fu, J.; Yuan, N. J.; Jin, Q.; and Guo, B. 2023. MM-Diffusion: Learning Multi-modal Diffusion Models for Joint Audio and Video Generation. In CVPR, 10219–10228.
Ruiz, N.; Li, Y.; Jampani, V.; Pritch, Y.; Rubinstein, M.; and Aberman, K. 2023. DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation. In CVPR, 22500–22510.
Sahak, H.; Watson, D.; Saharia, C.; and Fleet, D. 2023. Denoising Diffusion Probabilistic Models for Robust Image Super-Resolution in the Wild. arXiv preprint arXiv:2302.07864.
Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, K.; Gontijo Lopes, R.; Karagol Ayan, B.; Salimans, T.; et al. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. NIPS, 35: 36479–36494.
Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D. J.; and Norouzi, M. 2023.
Image Super-Resolution via Iterative Refinement. TPAMI, 45(4): 4713–4726.
Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; and Chen, X. 2016. Improved Techniques for Training GANs. NIPS, 29.
Schuhmann, C. 2022. LAION-AESTHETICS. https://laion.ai/blog/laion-aesthetics/. Accessed: 2022-8-16.
Schuhmann, C.; Köpf, A.; Vencu, R.; Coombes, T.; and Beaumont, R. 2022. LAION COCO: 600M Synthetic Captions from LAION2B-EN. https://laion.ai/blog/laion-coco/. Accessed: 2022-9-15.
Song, J.; Meng, C.; and Ermon, S. 2020. Denoising Diffusion Implicit Models. In ICLR.
Song, Y.; Sohl-Dickstein, J.; Kingma, D. P.; Kumar, A.; Ermon, S.; and Poole, B. 2021. Score-Based Generative Modeling through Stochastic Differential Equations. In ICLR.
Wang, J.; Yue, Z.; Zhou, S.; Chan, K. C.; and Loy, C. C. 2023. Exploiting Diffusion Prior for Real-World Image Super-Resolution. arXiv preprint arXiv:2305.07015.
Wang, Z.; Bovik, A.; Sheikh, H.; and Simoncelli, E. 2004. Image Quality Assessment: from Error Visibility to Structural Similarity. TIP, 13(4): 600–612.
Xia, W.; Yang, Y.; Xue, J.-H.; and Wu, B. 2021. TediGAN: Text-Guided Diverse Face Image Generation and Manipulation. In CVPR, 2256–2265.
Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In CVPR, 586–595.
Álvaro Barbero Jiménez. 2023. Mixture of Diffusers for Scene Composition and High Resolution Image Generation. arXiv preprint arXiv:2302.02412.
Spatio-Temporal Fusion for Human Action Recognition via Joint Trajectory Graph Yaolin Zheng1, Hongbo Huang1, 2, Xiuying Wang1, 2, Xiaoxu Yan1, Longfei Xu1 1Computer School, Beijing Information Science & Technology University, Beijing, China 2Institute of Computing Intelligence, Beijing Information Science & Technology University, Beijing, China zhengyaolin574@bistu.edu.cn, hhb@bistu.edu.cn, wxiuying520@126.com, 2021020568@bistu.edu.cn, 2022020601@bistu.edu.cn Abstract Graph Convolutional Networks (GCNs) and Transformers have been widely applied to skeleton-based human action recognition, with each offering unique advantages in capturing spatial relationships and long-range dependencies. However, for most GCN methods, the construction of topological structures relies solely on the spatial information of human joints, limiting their ability to directly capture richer spatiotemporal dependencies. Additionally, the self-attention modules of many Transformer methods lack topological structure information, restricting the robustness and generalization of the models. To address these issues, we propose a Joint Trajectory Graph (JTG) that integrates spatio-temporal information into a uniform graph structure. We also present a Joint Trajectory GraphFormer (JT-GraphFormer), which directly captures the spatio-temporal relationships among all joint trajectories for human action recognition. To better integrate topological information into spatio-temporal relationships, we introduce a Spatio-Temporal Dijkstra Attention (STDA) mechanism to calculate relationship scores for all the joints in the JTG. Furthermore, we incorporate the Koopman operator into the classification stage to enhance the model’s representation ability and classification performance. Experiments demonstrate that JT-GraphFormer achieves outstanding performance in human action recognition tasks, outperforming state-of-the-art methods on the NTU RGB+D, NTU RGB+D 120, and N-UCLA datasets. 
Introduction
Human action recognition aims to accurately identify and classify different human actions from input videos or sequence data. As an important task in computer vision, human action recognition has been extensively researched and widely applied in fields such as human-computer interaction, intelligent surveillance, and motion reconstruction. In particular, skeleton-based methods are less affected by changes in lighting conditions, background clutter, and occlusions. This robustness enhances the model’s ability to focus on motion-related information, making skeleton-based methods increasingly popular.
Currently, two popular deep learning models, Graph Convolutional Networks (GCNs) and Transformers, have demonstrated strong performance in skeleton-based human action recognition tasks. GCNs treat human joint data as graph structures and employ convolutional operations based on adjacency matrices to capture spatial dependencies between different body parts, thereby improving the accuracy of action recognition (Yan, Xiong, and Lin 2018; Shi et al. 2019b; Ye et al. 2020).
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: Visualization and Heatmaps. (a) Visualization depicting the Joint Trajectory Graph (JTG) with frame size N. (b) Visualization of Dijkstra weights within the Spatio-Temporal Dijkstra Attention (STDA). The thicker curves denote higher correlation weights among interconnected joints. For illustrative purposes, lower correlation weights are depicted using dotted lines and are selectively sampled. (c) Heatmaps of the Dijkstra matrices for the original human joint graph and the proposed JTG.
Additionally, Transformers have demonstrated remarkable success due to their strong self-attention mechanisms and capability to capture long-range dependencies (Vaswani et al. 2017; Dosovitskiy et al. 2020).
Despite the promising performance of GCNs and Transformers, accurate and robust skeleton-based action recognition remains a challenging task, primarily due to several factors. Firstly, conventional GCN methods do not directly utilize spatio-temporal topology to capture more comprehensive spatio-temporal dependencies. They aggregate information from neighboring nodes in the graph to update node representations, which is effective for capturing spatial dependencies; however, simply extending the spatial graph is not sufficient for effectively capturing temporal dynamic correlations. Secondly, in joint coordinate sequences, the density of information may vary between the spatial and temporal dimensions, with greater redundancy in the temporal dimension. Finally, although self-attention mechanisms can adaptively compute correlation scores for sequence elements, they may not capture the hidden topological information of each sequence element, which negatively impacts the model’s robustness and generalization.
Motivated by these observations, we propose a Joint Trajectory GraphFormer (JT-GraphFormer) model with a Joint Trajectory Graph (JTG). The JTG introduces a temporal dimension on top of the original spatial graph structure, enabling it to better encapsulate the complex discriminative details associated with joint trajectories. Unlike ST-GCN (Yan, Xiong, and Lin 2018), the proposed JTG focuses on constructing the topological structure between joints within a certain spatio-temporal period. This enhancement greatly enriches its potential for action recognition.
Specifically, we construct a dynamic trajectory topology for all joints within a certain frame sequence, as shown in Fig. 1 (a). For action recognition tasks, an action is usually described as a temporal sequence, characterized by temporal order and dynamic evolution. To more effectively capture the intricate spatio-temporal inter-dependencies, JTG extends connections to nodes in neighboring frames. This strategy serves to reduce redundant temporal information and utilize a uniform graph structure to capture the inherent dependencies within the spatio-temporal dimension, facilitating aggregation of features across both spatial and temporal domains. To extract features more effectively, we utilize an improved Transformer structure. When JTG is used as input data, a node within a single frame will concurrently compute the relationship scores for all nodes within neighboring frames, imposing a strong requirement on the model to handle complex spatio-temporal associations. Inspired by the spatial encoding in Graphormer (Ying et al. 2021), we propose a Spatio-Temporal Dijkstra Attention (STDA) mechanism, which adds the distances between joints in JTG as spatio-temporal topology information to the calculation of attention scores. This enables each node to learn to pay more attention to neighbor nodes that are more relevant to the action. Unlike Graphormer’s method of encoding discrete data, our method is more suitable for processing continuous data such as human action recognition. STDA combines the global attention score and shortest path weights and shows stronger expressive power by adding prior information present in the joint trajectory. The correlation weights of a node to its neighbors are shown in Fig. 1 (b) and the heatmaps of Dijkstra matrices are shown in Fig. 1 (c). Furthermore, we introduce the Koopman operator into the classification stage. 
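A minimal sketch of the idea behind STDA: compute all-pairs shortest-path distances on the JTG and inject them as a bias into the attention logits. The linear bias −α·D and the Floyd–Warshall solver are illustrative choices; the paper's actual distance encoding is learned:

```python
import numpy as np

def floyd_warshall(A):
    """All-pairs shortest-path lengths on an unweighted graph with adjacency A."""
    n = len(A)
    D = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D

def stda_scores(Q, K, D, alpha=0.1):
    """Attention with a shortest-path bias: joints that are closer in the JTG
    receive larger logits before the softmax."""
    logits = Q @ K.T / np.sqrt(Q.shape[-1]) - alpha * D
    logits = np.where(np.isinf(D), -1e9, logits)   # mask unreachable pairs
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])    # a 3-node path graph
D = floyd_warshall(A)
Q = K = np.eye(3)
P = stda_scores(Q, K, D)
print(bool(P[0, 1] > P[0, 2]))   # True: the nearer neighbour gets more attention
```

With identical queries and keys, the topological bias alone breaks the tie, which is the mechanism by which STDA injects graph-structure priors that plain self-attention lacks.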
The Koopman operator is a linear operator that describes a nonlinear dynamical system by mapping it into an infinite-dimensional Hilbert space. In our approach, the Koopman operator is employed to classify extracted features by learning the feature evolution of the different categories. Specifically, the Koopman operator serves to linearize extracted features within either the temporal or spatial dimension and characterizes the dynamic shifts inherent to different action categories, effectively capturing the dynamic interrelationships of trajectories. This enhances the robustness and generalization ability of the model.
Our contributions can be succinctly summarized as follows:
• Introduction of JTG as an input data representation, leveraging trajectory information to enrich feature aggregation capabilities for nodes and their interactions across frames.
• Proposal of STDA, augmenting feature aggregation among neighboring nodes via the integration of shortest-path concepts between joints.
• Incorporation of the Koopman operator for classification, facilitating an encompassing perspective and superior classification performance.
• Rigorous evaluation of our proposed model across three diverse datasets (NTU RGB+D, NTU RGB+D 120, and N-UCLA), revealing its superiority over existing state-of-the-art (SOTA) methods and underscoring its potential as a promising solution for action recognition tasks.
Related Work
GCNs for Action Recognition. The skeleton sequences extracted from videos exhibit non-Euclidean characteristics, and the inherent interconnections among individual joints can be succinctly represented using a graph structure. Representing skeleton data as simple vectors does not fully capture the complex configurations and correlations; the topological graph representation is more suitable for this purpose. As a result, several GCN-based approaches have emerged to process skeleton data as graphs.
For instance, ST-GCN introduces a spatio-temporal GCN for skeleton-based human action recognition (Yan, Xiong, and Lin 2018), automatically learning spatial and temporal patterns from skeleton data. Specifically, it estimates pose information from input videos and achieves action representation with strong generalization capability through graphs. However, it focuses solely on the relationships between physically adjacent joints, neglecting implicit joint co-occurrence correlations. Numerous methods have been proposed in an effort to address this limitation. Actional-Structural GCN (AS-GCN) (Li et al. 2019) combines action links and structure links into a generalized skeleton graph, with a greater emphasis on dependencies between non-physically adjacent joints. A dual-stream adaptive GCN, named 2s-AGCN (Shi et al. 2019b), explores the fusion of diverse input modalities. It utilizes both bone vectors and joint coordinates as inputs, inspiring further research on data with multiple joint modalities (Shi et al. 2019a; Ye et al. 2020; Zhang et al. 2020). In recent years, Channel-wise Topology Refinement GCN (CTR-GCN) has demonstrated promising outcomes in the context of dynamic topology and multi-channel feature modeling (Chen et al. 2021a).
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7580
CTR-GCN employs a shared topology matrix as a universal prior for channels and subsequently refines it through the inference of channel-specific correlations. Furthermore, InfoGCN, based on information bottleneck learning, utilizes attention-based graph convolutions to deduce contextually relevant skeleton topology (Chi et al. 2022). This approach guides the model in acquiring potential representations that are information-rich yet structurally compact.

Transformers for Action Recognition. The capability of Transformers to model long-range dependencies results in their successful application in modeling and classifying human action sequence data.
The Spatio-Temporal Transformer network (ST-TR) utilizes spatial and temporal self-attention modules to learn intra-frame joint interactions and motion dynamics, leading to enhanced outcomes through integrating the two input streams (Plizzari, Cannici, and Matteucci 2021). The Spatio-Temporal Tuple Transformer (STTFormer) investigates interdependencies among distinct sequence segments (Qiu et al. 2022). It divides a skeleton sequence into non-overlapping units and subsequently captures multi-joint dependencies between neighboring frames via spatio-temporal self-attention modules, followed by a feature aggregation module for sub-action fusion. To handle variable-length skeleton inputs without requiring additional preprocessing, a Sparse Transformer-based Action Recognition (STAR) model is proposed (Shi et al. 2021). STAR consists of two modules. The first is a sparse self-attention module that learns spatial relationships using sparse matrix multiplication. The second is a segment-wise linear self-attention module that models temporal co-dependencies. Additionally, to incorporate more comprehensive skeleton information, the Intra-Inter-Part Transformer (IIP-Transformer) utilizes part-level skeleton data encoding for action recognition (Wang et al. 2021), considering the human body joints as five distinct parts to capture dependencies within and between them. It can be observed that the majority of research efforts are inclined towards handling sequences or modeling spatial topological structures, while the exploration of spatio-temporal topological structures remains relatively scarce. The JT-GraphFormer complements the research in this area, utilizing the trajectory topology and a unified graphical structure known as JTG to unravel the inherent dependencies within trajectories across the spatio-temporal dimension.

Method

Undoubtedly, spatial and temporal information are both crucial for representing human actions.
Structures that integrate these two types of features can capture more accurate and richer spatio-temporal dependencies. Therefore, we propose the JTG, which models the spatio-temporal topological structures of the joint trajectories to capture potential spatio-temporal dependencies. Moreover, we propose the STDA mechanism and a simple sequential aggregation module named TCN to augment feature aggregation among neighboring nodes and model the temporal dependencies. These two modules form a JT-GraphFormer block for feature extraction. Furthermore, we introduce the Koopman operator in the classification stage to globally linearize the feature functions and fit the dynamic changes of various action categories. The overall process is illustrated in Figure 2.

Joint Trajectory Graph

We divide the action sequences into several groups. Each group has N frames and describes the joint trajectories with a graph structure, named Joint Trajectory Graph $G_{JT} = (G_t, G_{t+1}, \ldots, G_{t+N-1}, E_T) = (V_{JT}, E_{JT})$, where $G_t$ is a spatial graph of joints in a frame, $E_T$ is the corresponding set of edges, denoting the joint trajectories of the nodes across the N frames, and $V_{JT}$, $E_{JT}$ denote the sets of nodes and edges in the JTG, respectively. To understand JTG more clearly, we represent its adjacency matrix as Eq. (1):

$$A_{JT} = \begin{pmatrix} A & A+I & & & \\ A+I & A & A+I & & \\ & A+I & A & \ddots & \\ & & \ddots & \ddots & A+I \\ & & & A+I & A \end{pmatrix}, \quad (1)$$

where $A_{JT}$ is the adjacency matrix of a JTG, $A$ denotes the physical connectivity of all joints in a frame, and $I$ is an identity matrix indicating the connectivity of the same joints in neighboring frames.

JT-GraphFormer

Positional Encoding. In previous work, a human skeleton graph sequence is usually represented as a joint feature vector $X \in \mathbb{R}^{C \times T \times V}$, where T is the number of frames, C is the number of channels, and V is the number of joints. In this paper, the vector X of JTGs with N frames is denoted as $X \in \mathbb{R}^{C \times T/N \times V \cdot N}$.
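To make the block structure of Eq. (1) concrete, the adjacency $A_{JT}$ can be assembled from a per-frame adjacency A as in the following numpy sketch; this is our own illustrative code (function and variable names are not from the paper's implementation), and it also shows the regrouping of joint features into JTG inputs:

```python
import numpy as np

def build_jtg_adjacency(A: np.ndarray, N: int) -> np.ndarray:
    """Assemble the JTG adjacency of Eq. (1): the per-frame adjacency A on
    the diagonal blocks, and A + I on the blocks linking neighboring frames
    (I encodes the trajectory edge of each joint to itself)."""
    V = A.shape[0]
    I = np.eye(V)
    A_JT = np.zeros((N * V, N * V))
    for t in range(N):
        A_JT[t * V:(t + 1) * V, t * V:(t + 1) * V] = A   # intra-frame edges
        if t + 1 < N:                                    # neighboring frames
            A_JT[t * V:(t + 1) * V, (t + 1) * V:(t + 2) * V] = A + I
            A_JT[(t + 1) * V:(t + 2) * V, t * V:(t + 1) * V] = A + I
    return A_JT

# Regrouping joint features (C, T, V) into JTG inputs (C, T/N, V*N):
C, T, V, N = 2, 8, 3, 4
X = np.zeros((C, T, V))
X_jtg = X.reshape(C, T // N, N * V)  # N consecutive frames form one graph
```

Here `V` is the number of joints per frame and `N` the number of frames per JTG group, as in the text.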
In many Transformer structures, an embedding map linearly transforms the joint features into a vector with learnable parameters. However, such vectors lack the position information of the joints and have trouble distinguishing the sequential order of the joints in the subsequent parallel computation (Qiu et al. 2022). In JTG, the trajectories of the joints involve specific temporal information. Following (Vaswani et al. 2017), we add a position encoding (PE) for each frame to express the sequential relationship correctly. PE uses sine and cosine functions with different frequencies to incorporate the inter- and intra-frame position information, as Eq. (2):

$$PE(p, 2i) = \sin(p/10000^{2i/C_{in}}), \quad PE(p, 2i+1) = \cos(p/10000^{2i/C_{in}}), \quad (2)$$

where p is the position of the joints in a JTG, i is the dimension of the position encoding vector, and $C_{in}$ is the feature dimension.

STDA Module. To better utilize the spatial information of a graph structure, Graphormer (Ying et al. 2021) adds spatial encoding to compute the self-attention of the nodes. Inspired by this, we propose the STDA mechanism, which adds the spatio-temporal topological information to the multi-head attention mechanism, increasing the weight of associations between neighboring nodes and thus biasing the nodes towards aggregating the features of their local neighbors. We compute the Dijkstra matrix $D \in \mathbb{Z}_{+}^{V \cdot N \times V \cdot N}$ of the JTG to describe its topology information, where $D_{ij} = D_{ji}$. Then, we compute the topological weights W via Eq. (3):

$$W = \exp(-D) + b, \quad (3)$$

where $-D$ stands for negating all the entries in the D matrix, $\exp(\cdot)$ computes the exponential values of all the entries in the matrix, and $b \in \mathbb{R}^{V \cdot N \times V \cdot N}$ is a learnable matrix for learning the adaptive inter-joint dynamic weights.
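As a concrete sketch of Eq. (3) (our own illustrative code, with assumed names), the shortest-path matrix D and the weights W could be computed as follows; Floyd–Warshall is used here for brevity, which on an unweighted graph yields the same D as running Dijkstra from every node:

```python
import numpy as np

def shortest_path_matrix(A_JT: np.ndarray) -> np.ndarray:
    """All-pairs shortest hop counts on the JTG (unit edge weights)."""
    n = A_JT.shape[0]
    D = np.where(A_JT > 0, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):  # Floyd-Warshall relaxation over intermediate node k
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    return D

def topological_weights(D: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Eq. (3): W = exp(-D) + b, so closer joints receive larger weights."""
    return np.exp(-D) + b
```

On a connected JTG all entries of D are finite, so W decays smoothly with graph distance while the learnable b can still re-weight any pair.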
W will be multiplied element-wise with the attention map $a_{map}$ obtained by the self-attention calculation, as:

$$a_{map} = \tanh(QK^{T} / \sqrt{d_K} \times \alpha), \quad a_{score} = a_{map} \cdot W, \quad (4)$$

where Q, K are the query and key vectors in the self-attention calculation, obtained by performing a 1×1 convolution operation on the input, $d_K$ represents the dimension of K, $\alpha$ is a learnable parameter that assigns adaptive weights to different heads, and $a_{score}$ is the final weighted attention score. In the forward propagation, the STDA output is obtained by an FFN structure and a residual structure via Eq. (5):

$$STDA(H^{l}) = \sigma(FFN(a_{score}H^{l})) + res(H^{l}), \quad (5)$$

where $H^{l}$ is the input feature of the l-th block, and σ is an activation function; we use the Leaky ReLU function here. The FFN structure contains a $1 \times k_s$ convolution operation and a Batch Normalization (BN) operation. The res term represents the residual operation and consists of a 1×1 convolution operation and a BN operation.

TCN Module. For convenience of understanding, we name the sequential aggregation module TCN, which aims to aggregate features of the joint trajectories and consists of a $k_t \times 1$ convolution operation and a BN operation. The input $H_{in} \in \mathbb{R}^{C_l \times T/N \times V \cdot N}$ is reshaped to the output $H_{out} \in \mathbb{R}^{C_l \times T \times V}$ during this process, where $C_l$ denotes the output channel number of the l-th block. The residual operation res is utilized in both the input and output stages, as shown in Fig. 2.

Koopman Operator

The Koopman operator is a linear operator that describes a nonlinear dynamical system by mapping it into an infinite-dimensional Hilbert space. This mapping allows the system's evolution to be depicted in a linear space, which can be easier to analyze than the original nonlinear space (Proctor, Brunton, and Kutz 2018). In deep learning, the Koopman operator can be utilized to extract evolution features of nonlinear dynamical systems for enhancing classification performance.
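Before turning to how the Koopman operator is fitted, the attention weighting of Eq. (4) above can be sketched for a single head as follows (our own illustrative code; the multi-head split and the FFN and residual branches of Eq. (5) are omitted, and `Wq`, `Wk` stand in for the 1×1 convolutions that produce queries and keys):

```python
import numpy as np

def stda_score(H, Wq, Wk, W, alpha=1.0):
    """Eq. (4): tanh-squashed attention map, re-weighted element-wise by
    the topological weights W (single head, illustrative names)."""
    Q, K = H @ Wq, H @ Wk
    d_k = K.shape[-1]
    a_map = np.tanh(Q @ K.T / np.sqrt(d_k) * alpha)
    return a_map * W  # a_score

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))                  # 6 nodes, 4 input channels
Wq, Wk = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
scores = stda_score(H, Wq, Wk, W=np.ones((6, 6)))
```

Because the tanh keeps the raw map in (-1, 1), the topological weights W directly scale how much each node attends to its spatio-temporal neighbors.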
In this study, we establish the temporal evolution function (for illustrative purposes, we take the temporal connection as an example) f(·) for the JT-GraphFormer's output feature H across distinct frames to relate the feature $h_t$ at the t-th frame to the feature $h_{t+1}$ at the next frame step, i.e., $h_{t+1} = f(h_t)$.

Figure 2: JT-GraphFormer architecture. The model is composed of an encoder and a classifier. The encoder with the STDA module captures context-dependent spatio-temporal joint topology to better represent action. The STDA and TCN modules form a JT-GraphFormer block for feature extraction.

We define the Koopman operator $K_{op}$ as an $N_{cls} \times C_{out} \times C_{out}$ linear operator, where $N_{cls}$ denotes the number of action categories, and $C_{out}$ denotes the number of output channels of the last JT-GraphFormer block. $K_{op}$ applies a linear approach to approximate the interrelations among various categories of action features in the temporal dimension, satisfying Eq. (6):

$$h_{t+1} \approx K_{op} h_t \quad (6)$$

Since we establish the linear correlations at different frame steps, it is possible to approximate the representation of any continuous frame segment feature $H_x^y$, which denotes the feature segment from the x-th frame to the y-th frame. Thus the features $H_1^{T-1}$ can be represented as:

$$H_1^{T-1} \approx [h_1, K_{op}h_1, K_{op}^2 h_1, \cdots, K_{op}^{T-2} h_1] \quad (7)$$

According to Eq. (6), it can be deduced that:

$$H_{t+1}^{T} \approx K_{op} H_t^{T-1} \quad (8)$$

We adopt the DMD algorithm (Kutz et al. 2015) and minimize the Frobenius norm $\|H_2^T - K_{op}H_1^{T-1}\|_F^2$ to update $K_{op}$.
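Dropping the per-class dimension for clarity, the DMD-style update amounts to a least-squares fit of a linear map between consecutive frame features; a minimal numpy sketch (illustrative names, not the paper's code):

```python
import numpy as np

def fit_koopman(H: np.ndarray) -> np.ndarray:
    """Least-squares fit of K_op from frame-wise features H of shape
    (T, C_out), minimizing ||H[1:] - H[:-1] K_op^T||_F as in DMD."""
    X, Y = H[:-1], H[1:]                      # (h_t, h_{t+1}) pairs
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B.T                                # so that h_{t+1} ≈ K_op @ h_t

def rollout(K_op: np.ndarray, h1: np.ndarray, T: int) -> np.ndarray:
    """Eq. (7): approximate H_1^{T-1} = [h_1, K_op h_1, ..., K_op^{T-2} h_1]."""
    out = [h1]
    for _ in range(T - 2):
        out.append(K_op @ out[-1])
    return np.stack(out)
```

In the paper one such operator is kept per action category ($N_{cls}$ of them), so the fit error of each operator against a test sequence can serve as a class score; that batching is omitted here.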
Since $K_{op}$ denotes the feature evolution of the various action categories, we can average $K_{op}$ in the temporal dimension to get the probability distribution over the categories and finally complete the classification, as shown in Fig. 2.

Four-stream Ensemble

Previous studies demonstrate that simultaneously using different streams can significantly enhance the performance of human action recognition (Shi et al. 2019b,a). Therefore, we evaluate the performance of the trained models utilizing streams for joint, bone, joint motion, and bone motion. The bone stream utilizes the bone modality as input data, proposed by (Shi et al. 2019b). The joint motion and bone motion streams align with the method presented in (Shi et al. 2019a). The ultimate result is calculated through a weighted average of the models' inference outputs.

Experiments

To demonstrate the advantages of the proposed JT-GraphFormer, we conducted comprehensive experiments on the NTU-60, NTU-120 and N-UCLA datasets. Furthermore, we performed a comparative analysis with current SOTA models and detailed ablation studies to explore the performance of the proposed modules under various conditions.

Datasets

NTU RGB+D (NTU-60) contains 56880 skeleton action sequences in 60 classes (Shahroudy et al. 2016). Each sample contains one action, and each action is performed by up to two subjects and captured by three cameras from different views. The dataset is divided into two test benchmarks based on different subjects and different views, i.e., cross-subject (X-Sub) and cross-view (X-View). NTU RGB+D 120 (NTU-120) extends NTU RGB+D with 114480 samples in 120 classes (Liu et al. 2019). The dataset was also captured by three cameras and contains 32 settings, each indicating a specific location and background. The dataset is divided into two benchmarks based on parity across subjects and sample IDs, i.e., cross-subject (X-Sub) and cross-setup (X-Set). Northwestern-UCLA (N-UCLA) contains 1494 video clips in 10 classes.
Each action is performed by 10 subjects and captured through three cameras with different camera views. The same evaluation protocol as in (Wang et al. 2014) is adopted.

Experimental Setting

All experiments are performed on 2 GTX 3090 GPUs. The skeleton sequences, processed as in (Chen et al. 2021a), are resized to 120, 120, and 56 frames for NTU-60, NTU-120, and N-UCLA, respectively. No other data processing or augmentation is applied for fair comparisons. Our model is trained utilizing a stochastic gradient descent (SGD) optimizer with a Nesterov momentum of 0.9, and the weight decay is set to 0.0005. Cross-entropy is taken as the loss function. The training epoch count is set to 110 for NTU-60 & 120, and to 30 for N-UCLA. The initial learning rate is 0.1, and a warm-up strategy is employed in the first 5 epochs for more stable learning (He et al. 2016). For the NTU-60, NTU-120, and N-UCLA datasets, the learning rate is decayed at epochs [60, 80, 100], [60, 80, 100], and [15], and the batch size is set to 64, 64, and 16, respectively. The JT-GraphFormer block count is set to 8, and the output channel counts are [64, 64, 64, 128, 128, 256, 256, 256]. The dimensions $d_q$, $d_k$, $d_v$ in each layer are set to $0.25 \times C_{out}$. Convolution kernels $k_t$, $k_s$ are set to [3, 5]. The shape of $K_{op}$ is $[N_{cls}, 256, 256]$, where $N_{cls}$ denotes the number of categories in the dataset.

Table 1: Top-1 accuracy using the joint input modality with different frame counts N on the NTU-60 dataset under the X-Sub setting. The JTG is configured to be dynamic and inter-layer weight-sharing without normalization.

frame count N       Top-1 Accuracy (%)
1 (baseline)        87.4
2                   89.4 (↑2.0)
4                   90.0 (↑2.6)
6                   90.4 (↑3.0)
8                   89.7 (↑2.3)
fusion (2,4,6,8)    91.7 (↑4.3)

Ablation Studies

To analyze the impact of the different components of the proposed JT-GraphFormer, we examine the performance of our model under different configurations and conditions.

Number of Frames in JTG.
Different numbers of frames in JTG mean different sequence and information densities in the temporal dimension, which can affect the effectiveness of the STDA and TCN modules. In this regard, we experimentally analyze the effect of different frame counts N on the accuracy. We also explore the performance of fusing models with multiple frame counts. The comparison results are shown in Table 1, with the best performance marked in bold. It can be observed that at frame count 6, the accuracy of the model on the NTU-60 dataset under the X-Sub setting increases by a margin of 3.0% compared to the baseline model, and 0.7% compared to frame count 8. This phenomenon is consistent with intuitive expectations. In scenarios with excessively simplistic spatio-temporal structures, the model struggles to capture intricate spatio-temporal features. Conversely, within overly complex spatio-temporal graph structures, the spatio-temporal correlation of actions tends to diminish, thereby affecting the extraction of effective features. The model fusing different frame counts can compensate, to some extent, for the different numbers of frames that different actions require, showing a performance improvement of 1.3% compared to frame count 6 and 4.3% compared to the baseline.

Table 2: Ablation study of Top-1 accuracy (%) using the JTG with different settings, where baseline represents the method that does not apply these settings.

method     dynamic   normalized   shared    Acc. (%)
baseline                                    89.8
M1         ✓                                89.7
M2                   ✓                      90.0
M3         ✓         ✓                      90.0
M4         ✓                      ✓         90.4
M5         ✓         ✓            ✓         89.8

Dynamic and Weight-sharing. We investigated the performance differences between the static and dynamic configurations of the JTG structure, as well as whether inter-layer weight-sharing is employed. Additionally, we explored whether the JTG should be normalized.
It is worth noting that the parameters of the static JTG are fixed; hence the issue of inter-layer weight-sharing does not require consideration. We conducted tests employing solely the joint modality on the NTU-60 dataset under the X-Sub setting, utilizing the JTG with 6 frames as input data, and the results are presented in Table 2. The experiments demonstrate that the dynamic configuration of the JTG has superior expressive power, particularly when a layer-sharing structure is employed without normalization. We offer the following analysis. Firstly, the spatial distances between nodes in a JTG frame are theoretically not uniformly distributed, nor are the distances along the temporal dimension. Thus, the dynamic structure enables the precise modeling of such distances, enhancing the model's expressive capabilities. Secondly, the improved performance of the layer-sharing dynamic configuration demonstrates the capability of the JTG in global spatio-temporal modeling, which reduces the number of parameters and training costs. Finally, normalization typically aligns the variation magnitudes of features and improves the optimization conditions. However, in JTG, the differences in the variation magnitudes of distances are not significant, so the advantage of normalization is quite slight. Furthermore, normalization narrows the variation range of distances, which leads to an averaging of weights and affects the performance negatively.

Koopman Operator. To capture more trajectory information and dynamic correlations, the Koopman operator is applied in this work. We explored the performance differences between methods using the Koopman operator and those using global average pooling with a fully connected (FC) layer. The results are shown in Table 3, where Temporal (or Spatial) Kop denotes global linearization in the temporal (or spatial) dimension of the features.
In this experiment, we utilize four different modalities of skeleton sequences, i.e., joint, bone, joint motion, and bone motion, as input data respectively. The frame count of the JTG is set to 6.

Table 3: Top-1 accuracy (%) using the Koopman operator and fully connected methods on the NTU-60 dataset under the X-Sub setting with various data modalities.

modality        FC      Temporal Kop    Spatial Kop
Joint           90.4    90.5            90.3
Bone            88.9    89.5            89.5
Joint motion    87.9    87.9            88.4
Bone motion     86.9    87.2            87.3

Table 4: Top-1 accuracy (%) of the methods using various data modalities on the NTU-60 and NTU-120 datasets.

                NTU-60              NTU-120
modality        X-Sub   X-View      X-Sub   X-Set
Joint           91.8    96.8        87.6    89.6
Bone            91.2    96.4        87.4    89.6
Joint motion    90.2    95.0        84.7    86.7
Bone motion     90.0    94.1        84.6    86.1
Joint+Bone      92.5    97.1        89.0    91.0
4 ensemble      93.4    97.5        89.9    91.7

The results indicate that employing the Koopman operator in JT-GraphFormer yields superior results compared to utilizing FC for all the input modalities. It is worth noting that the motion modalities encompass information describing variations in joint coordinates or bone vectors. The spatial linearization of this information involves using the variation trend of a joint to infer its spatial relationships, i.e., to deduce the changes of other joints, while its temporal linearization can be regarded as the evolution of the variation amount. The latter approach is more abstract, which may lose some important dynamic information and may require more complex feature processing methods or more training samples. Therefore, the Spatial Kop method is advantageous in capturing the associations of the trajectories and exhibits superior performance for the motion modality inputs.

Four-stream Ensemble. We utilize a four-stream ensemble method to represent the performance of the trained models, namely joint, bone, joint motion, and bone motion.
Each stream and the ensemble methods are tested on the NTU-60 and NTU-120 datasets via the proposed JT-GraphFormer. The results are shown in Table 4. We observe that the performance of the model gradually improves as the number of streams in the ensemble method increases. On the NTU-60 dataset under the X-Sub setting, the accuracy of the joint + bone and four-stream ensemble methods increases by margins of 0.7% and 1.6%, respectively, compared with the accuracy of using the joint-only modality. This fully demonstrates that the multi-modal representation increases the variety of input features, and improves the representational and generalization abilities of the model.

Comparison with the State-of-the-art Methods

We evaluate the proposed JT-GraphFormer on three popular benchmarks and compare its performance with recent prevailing approaches.

Table 5: Performance comparison between JT-GraphFormer and prevailing SOTA methods in skeleton-based human action recognition tasks on the NTU-60, NTU-120, and N-UCLA datasets. For clarity, we provide multi-stream descriptions for the models published between 2022 and 2023.

                                                            NTU-60          NTU-120
Method                                        Year          X-Sub  X-View   X-Sub  X-Set   N-UCLA
ST-GCN (Yan, Xiong, and Lin 2018)             AAAI 2018     81.5   88.3
2s-AGCN (Shi et al. 2019b)                    CVPR 2019     88.5   95.1     82.9   84.9
DGNN (Shi et al. 2019a)                       CVPR 2019     89.9   96.1
Dynamic-GCN (Ye et al. 2020)                  ACM MM 2020   91.5   96.0     87.3   88.6
SGN (Zhang et al. 2020)                       CVPR 2020     89.0   94.5     79.2   81.5    92.5
DDGCN (Korban and Li 2020)                    ECCV 2020     91.1   97.1
DC-GCN+ADG (Cheng et al. 2020)                ECCV 2020     90.8   96.6     86.5   88.1
MS-G3D (Liu et al. 2020)                      CVPR 2020     91.5   96.2     86.9   88.4
MST-GCN (Chen et al. 2021b)                   AAAI 2021     91.5   96.6     87.5   88.8
CTR-GCN (Chen et al. 2021a)                   ICCV 2021     92.4   96.8     88.9   90.6    96.5
InfoGCN (4s) (Chi et al. 2022)                CVPR 2022     92.7   96.9     89.4   90.7    96.6
InfoGCN (6s) (Chi et al. 2022)                CVPR 2022     93.0   97.1     89.8   91.2    97.0
STF (2s) (Ke, Peng, and Lyu 2022)             AAAI 2022     92.5   96.9     88.9   89.9
Ta-CNN (4s) (Xu et al. 2022)                  AAAI 2022     90.4   94.8     85.4   86.8    96.1
EfficientGCN (3s) (Song et al. 2022)          TPAMI 2022    91.7   95.7     88.3   89.1
CTR-GCN+FR (4s) (Zhou, Liu, and Wang 2023)    CVPR 2023     92.8   96.8     89.5   90.9    96.8
Ours (Joint only)                                           91.8   96.8     87.6   89.6    95.5
Ours (2s)                                                   92.5   97.1     89.0   91.0    96.6
Ours (4s)                                                   93.4   97.5     89.9   91.7    97.2

Table 5 shows that the JT-GraphFormer surpasses the listed methods under all settings with equivalent streams, even in several cases where fewer streams are applied. By utilizing the joint-only modality, the JT-GraphFormer already surpasses the majority of models on the NTU-60 dataset under the X-View setting. When excluding the joint motion and bone motion streams, the JT-GraphFormer outperforms STF (2s) (Ke, Peng, and Lyu 2022) by a margin of 0.1% on the NTU-120 dataset under both the X-Sub and X-Set settings. On N-UCLA, the accuracy of the JT-GraphFormer (4s) shows a margin of 0.4% improvement compared to CTR-GCN+FR (4s) (Chen et al. 2021a; Zhou, Liu, and Wang 2023). Furthermore, despite employing four streams, the JT-GraphFormer (4s) outperforms InfoGCN (6s) (Chi et al. 2022) by a margin of 0.4% on the NTU-60 dataset in both the X-Sub and X-View configurations. To the best of our knowledge, our model attains superior performance compared with the SOTA methods on the three listed datasets.

Limitations

Despite JT-GraphFormer's advanced performance on the NTU-60, NTU-120, and N-UCLA datasets, testing it on Kinetics-400 (Kay et al. 2017) will be challenging due to its large parameter count when using $K_{op}$ (14.54M parameters for NTU-120). Additionally, JT-GraphFormer only considers integrating four streams and lacks exploration of fusing more streams to fully exploit its potential.
Furthermore, JT-GraphFormer is limited to processing structured data sequences, such as the motion of regular objects. Finally, the use of unlabeled data in JT-GraphFormer is a promising area of future work; we intend to explore its methods and potential in an unsupervised setting.

Conclusions

In this work, we present the JT-GraphFormer model based on a joint trajectory topology structure. By constructing the JTG, our model effectively captures the semantic information of the input joint trajectory data, enhancing the Transformer's capability. The proposal and use of STDA, which incorporates intra-graph distances of joints within a JTG, empowers each node to allocate attention discerningly. Furthermore, our incorporation of the Koopman operator linearizes the extracted features in either the temporal or the spatial dimension, which effectively captures dynamic shifts inherent to distinct action categories. This culminates in augmented representational capacity and classification performance. Comparative analyses and quantitative evaluations across three distinct datasets demonstrate the outstanding performance of JT-GraphFormer in human action recognition tasks, establishing its superiority over prevailing SOTA methods.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 62376286).

References

Chen, Y.; Zhang, Z.; Yuan, C.; Li, B.; Deng, Y.; and Hu, W. 2021a. Channel-wise Topology Refinement Graph Convolution for Skeleton-Based Action Recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
Chen, Z.; Li, S.; Yang, B.; Li, Q.; and Liu, H. 2021b. Multi-scale spatial temporal graph convolutional network for skeleton-based action recognition. In AAAI.
Cheng, K.; Zhang, Y.; Cao, C.; Shi, L.; Cheng, J.; and Lu, H. 2020. Decoupling GCN with DropGraph module for skeleton-based action recognition. In Computer Vision–ECCV 2020: 16th European Conference.
Chi, H.-g.; Ha, M. H.; Chi, S.; Lee, S. W.; Huang, Q.; and Ramani, K. 2022. InfoGCN: Representation learning for human skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Kay, W.; Carreira, J.; Simonyan, K.; Zhang, B.; Hillier, C.; Vijayanarasimhan, S.; Viola, F.; Green, T.; Back, T.; Natsev, P.; et al. 2017. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950.
Ke, L.; Peng, K.-C.; and Lyu, S. 2022. Towards to-at spatio-temporal focus for skeleton-based action recognition. In AAAI.
Korban, M.; and Li, X. 2020. DDGCN: A dynamic directed graph convolutional network for action recognition. In Computer Vision–ECCV 2020: 16th European Conference.
Kutz, J. N.; Fu, X.; Brunton, S. L.; and Erichson, N. B. 2015. Multi-resolution dynamic mode decomposition for foreground/background separation and object tracking. In 2015 IEEE International Conference on Computer Vision Workshop (ICCVW).
Li, M.; Chen, S.; Chen, X.; Zhang, Y.; Wang, Y.; and Tian, Q. 2019. Actional-structural graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Liu, J.; Shahroudy, A.; Perez, M.; Wang, G.; Duan, L.-Y.; and Kot, A. C. 2019.
NTU RGB+D 120: A large-scale benchmark for 3D human activity understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2684–2701.
Liu, Z.; Zhang, H.; Chen, Z.; Wang, Z.; and Ouyang, W. 2020. Disentangling and unifying graph convolutions for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Plizzari, C.; Cannici, M.; and Matteucci, M. 2021. Spatial temporal transformer network for skeleton-based action recognition. In Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, 694–701.
Proctor, J. L.; Brunton, S. L.; and Kutz, J. N. 2018. Generalizing Koopman theory to allow for inputs and control. SIAM Journal on Applied Dynamical Systems, 17: 909–930.
Qiu, H.; Hou, B.; Ren, B.; and Zhang, X. 2022. Spatio-temporal tuples transformer for skeleton-based action recognition. arXiv preprint arXiv:2201.02849.
Shahroudy, A.; Liu, J.; Ng, T.-T.; and Wang, G. 2016. NTU RGB+D: A large scale dataset for 3D human activity analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Shi, F.; Lee, C.; Qiu, L.; Zhao, Y.; Shen, T.; Muralidhar, S.; Han, T.; Zhu, S.-C.; and Narayanan, V. 2021. STAR: Sparse transformer-based action recognition. arXiv preprint arXiv:2107.07089.
Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2019a. Skeleton-based action recognition with directed graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2019b. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Song, Y.-F.; Zhang, Z.; Shan, C.; and Wang, L. 2022. Constructing stronger and faster baselines for skeleton-based action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45: 1474–1488.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wang, J.; Nie, X.; Xia, Y.; Wu, Y.; and Zhu, S.-C. 2014. Cross-view action modeling, learning and recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Wang, Q.; Peng, J.; Shi, S.; Liu, T.; He, J.; and Weng, R. 2021. IIP-Transformer: Intra-inter-part transformer for skeleton-based action recognition. arXiv preprint arXiv:2110.13385.
Xu, K.; Ye, F.; Zhong, Q.; and Xie, D. 2022. Topology-aware convolutional neural network for efficient skeleton-based action recognition. In AAAI.
Yan, S.; Xiong, Y.; and Lin, D. 2018. Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI.
Ye, F.; Pu, S.; Zhong, Q.; Li, C.; Xie, D.; and Tang, H. 2020. Dynamic GCN: Context-enriched topology learning for skeleton-based action recognition. In Proceedings of the 28th ACM International Conference on Multimedia, 55–63.
Ying, C.; Cai, T.; Luo, S.; Zheng, S.; Ke, G.; He, D.; Shen, Y.; and Liu, T.-Y. 2021. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34: 28877–28888.
Zhang, P.; Lan, C.; Zeng, W.; Xing, J.; Xue, J.; and Zheng, N. 2020. Semantics-guided neural networks for efficient skeleton-based human action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Zhou, H.; Liu, Q.; and Wang, Y. 2023. Learning discriminative representations for skeleton based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
ODTrack: Online Dense Temporal Token Learning for Visual Tracking
Yaozong Zheng1,2, Bineng Zhong1,2*, Qihua Liang1,2, Zhiyi Mo3, Shengping Zhang4, Xianxian Li1,2
1Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University
2Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University
3Guangxi Key Laboratory of Machine Vision and Intelligent Control, Wuzhou University
4Harbin Institute of Technology
yaozongzheng@stu.gxnu.edu.cn, bnzhong@gxnu.edu.cn, qhliang@gxnu.edu.cn, zhiyim@gxuwz.edu.cn, s.zhang@hit.edu.cn, lixx@gxnu.edu.cn

Abstract
Online contextual reasoning and association across consecutive video frames are critical for perceiving instances in visual tracking. However, most current top-performing trackers persistently lean on sparse temporal relationships between reference and search frames via an offline mode. Consequently, they can only interact independently within each image pair and establish limited temporal correlations. To alleviate the above problem, we propose a simple, flexible, and effective video-level tracking pipeline, named ODTrack, which densely associates the contextual relationships of video frames in an online token propagation manner. ODTrack receives video frames of arbitrary length to capture the spatio-temporal trajectory relationships of an instance, and compresses the discriminative features (localization information) of a target into a token sequence to achieve frame-to-frame association. This new solution brings the following benefits: 1) the purified token sequences can serve as prompts for inference on the next video frame, whereby past information is leveraged to guide future inference; 2) complex online update strategies are effectively avoided by the iterative propagation of token sequences, and thus we achieve more efficient model representation and computation.
ODTrack achieves new SOTA performance on seven benchmarks while running at real-time speed. Code and models are available at https://github.com/GXNU-ZhongLab/ODTrack.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction
Visual tracking aims to uniquely identify and track an object within a video sequence using arbitrary target queries. In the visual world, objects rarely exist in isolation but rather within a larger, dynamic context. Visual perception is therefore a complex process that involves interpreting and understanding the surrounding environment of an object. In such a situation, equipping a model with the ability to perform online contextual reasoning and establish associations presents a challenge in the field of visual tracking. Despite this challenge, a significant number of current tracking methods overlook this problem and instead rely on offline image-pair matching to localize instances in the current frame.

Figure 1: Comparison of tracking methods. (a) The offline image-level tracking methods (Li et al. 2019; Chen et al. 2021) based on sparse sampling and image-pair matching. (b) Our online video-level tracking method based on video sequence sampling and temporal token propagation.

As shown in Fig.1(a), these offline methods (Bertinetto et al. 2016; Li et al. 2019; Chen et al. 2021; Yan et al. 2021a; Ye et al. 2022; Cui et al.
2022) typically follow a three-phase process: (i) extracting features by sampling two video frames (i.e., reference and search frames); (ii) propagating the initial target information from the reference to the search frame through a matching/fusion module; and (iii) utilizing a bounding box prediction head to output the localization results. Most trackers perform well under this paradigm, but it still exhibits the following drawbacks: (1) The sampled frames are sparse (i.e., only one reference frame and one search frame are used). Although visual tracking inherently contains rich temporal data, this simple sampling strategy falls short of accurately representing the motion state of an object, posing a significant challenge for trackers to comprehend dynamic video content. (2) The target information is matched offline and limited to the image-pair level, preventing the association of targets across video frames. Traditional feature matching/fusion methods (Chen et al. 2020; Zhang et al. 2020; Guo et al. 2021; Xie et al. 2022) focus on the appearance similarity of objects, without considering that tracking instances rely on continuous cross-frame associations. To incorporate temporal information into the model, some approaches design online updating techniques, such as updating templates (Yan et al. 2021a; Cui et al. 2022) and updating model parameters (Bhat et al. 2019). Despite being successful, these methods still rely on sparse sampling frames (i.e., reference, search, and update frames) and do not effectively explore how information is propagated online across search frames. This inspired us to ask: can a visual tracking algorithm densely associate and perceive an object in a video streaming context? The answer is affirmative.
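For reference, the conventional three-phase offline pipeline described above can be sketched as follows. Every function here is a deliberately toy stand-in of our own (not the components of any actual tracker): the "backbone" flattens a frame, the "fusion module" is an elementwise product, and the "head" picks the peak of the response map.

```python
def extract_features(frame):
    # Phase (i): stand-in backbone -- flatten a 2D frame into a feature list.
    return [v for row in frame for v in row]

def fuse(ref_feat, search_feat):
    # Phase (ii): stand-in matching/fusion module -- elementwise product as a
    # crude proxy for cross-correlating reference and search features.
    return [r * s for r, s in zip(ref_feat, search_feat)]

def predict_box(response, width):
    # Phase (iii): stand-in prediction head -- (row, col) of the peak response.
    idx = max(range(len(response)), key=response.__getitem__)
    return (idx // width, idx % width)

def track_pair(reference, search):
    # One offline image-pair step: no state is carried to the next frame.
    ref_feat = extract_features(reference)
    search_feat = extract_features(search)
    return predict_box(fuse(ref_feat, search_feat), width=len(search[0]))
```

Because each call to `track_pair` starts from scratch, no temporal context survives between frames, which is exactly the limitation the paper targets.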
Unlike conventional approaches that rely on offline image-pair matching with sparse sampling frames, this paper proposes ODTrack, a novel video-level framework for visual tracking that capitalizes on video stream modeling. Specifically, we reformulate object tracking as a token sequence propagation task that densely associates contextual relationships across video frames in an auto-regressive manner, as shown in Fig.1(b). To overcome the limitations of the traditional image-pair sampling strategy and explore rich temporal dependencies, we extend the model's input from an image pair to the level of a video stream. Under this new input paradigm, we design two simple yet effective temporal token propagation attention mechanisms that capture the spatio-temporal trajectory relationships of the target instance in an online token propagation manner, thus allowing the processing of video-level inputs of arbitrary length. Notably, we treat each video sequence as a continuous sentence, enabling us to employ language modeling for a comprehensive contextual understanding of the video content. This novel approach significantly distinguishes our tracker from traditional methods (Yan et al. 2021a; Ye et al. 2022; Cui et al. 2022) and greatly strengthens its ability to understand the spatio-temporal trajectory of the target instance. The main contributions of this work are as follows.
• We propose a novel video-level tracking pipeline, called ODTrack. In contrast to existing tracking approaches based on sparse temporal modeling, we employ a token sequence propagation paradigm to densely associate contextual relationships across video frames.
• We introduce two temporal token propagation attention mechanisms that compress the discriminative features of the target into a token sequence. This token sequence serves as a prompt to guide the inference of future frames, thus avoiding complex online update strategies.
• Our approach achieves new state-of-the-art tracking results on seven visual tracking benchmarks, including LaSOT, TrackingNet, GOT10K, LaSOText, VOT2020, TNL2K, and OTB100.

Related Work
Traditional Tracking Framework. The current popular trackers (Bertinetto et al. 2016; Li et al. 2019; Chen et al. 2021; Ye et al. 2022) are dominated by the Siamese tracking paradigm, which achieves tracking by image-pair matching. To improve the accuracy and robustness of trackers, several different approaches have been proposed, such as prediction head networks (Li et al. 2018; Chen et al. 2020; Zhang et al. 2020), cross-correlation modules (Han et al. 2021; Liao et al. 2020; Chen et al. 2021), powerful backbones (Chen et al. 2022; Cui et al. 2022), and attention mechanisms (Guo et al. 2021; Yu et al. 2020). In recent years, the introduction of the transformer (Vaswani et al. 2017) has enabled trackers (Yan et al. 2021a; Xie et al. 2022; Cui et al. 2022; Ye et al. 2022) to explore more powerful and deeper feature interactions, resulting in significant advances in tracking algorithm development. However, most of these methods are designed around an offline mode and a sparse image-pair strategy. With this design paradigm, the tracker struggles to accurately comprehend the object's motion state in the temporal dimension and can only resort to traditional Siamese similarity for appearance modeling. In contrast to these approaches, we reformulate object tracking as a token sequence propagation task and aim to extend the Siamese tracker to efficiently exploit target temporal information in an auto-regressive manner.

Temporal Modelling in Visual Tracking. Multi-object tracking algorithms (Meinhardt et al. 2022; Zeng et al. 2022) typically involve the recognition and association of individual objects in a video, making the study of trajectory information a common practice.
However, there is relatively little research exploring the utilization of spatio-temporal trajectory information in single-object tracking algorithms. To explore temporal cues within the Siamese framework, several online update methods have been carefully designed. UpdateNet (Zhang et al. 2019) introduces an adaptive updating strategy, which utilizes a custom network to fuse accumulated templates and generate a weighted updated template feature for visual tracking. DCF-based trackers (Danelljan et al. 2019; Bhat et al. 2019; Danelljan, Gool, and Timofte 2020) excel at updating model parameters online using sophisticated optimization techniques, thereby improving the robustness of the tracker. STMTrack (Fu et al. 2021) and TrDiMP (Wang et al. 2021a) employ attention mechanisms to effectively extract contextual information along the temporal dimension. STARK (Yan et al. 2021a) and Mixformer (Cui et al. 2022) specifically design a target quality branch for updating the template frame, which aids in improving the tracking results. Recently, there has been a gradual surge in research attention towards modeling temporal context from various perspectives. TCTrack (Cao et al. 2022) introduces an online temporal adaptive convolution and an adaptive temporal transformer that aggregate temporal contexts at two levels, namely feature extraction and similarity map refinement. VideoTrack (Xie et al. 2023) designs a new tracker based on a video transformer and uses a simple feedforward network to encode temporal dependencies. ARTrack (Xing et al. 2023) presents a new time-autoregressive tracker that estimates the coordinate sequence of an object progressively. Nevertheless, the above tracking algorithms still suffer from the following limitations: (1) The optimization process is complex, involving the design of specialized loss functions (Bhat et al.
2019), multi-stage training strategies (Yan et al. 2021a), and manual update rules (Yan et al. 2021a); and (2) although they explore temporal information to some extent, they fail to investigate how temporal cues propagate across search frames. In this work, we introduce a new dense context propagation mechanism from a token propagation perspective, which circumvents intricate optimization processes and training strategies. Further, we propose a new baseline approach, called ODTrack, focused on unlocking the potential of temporal modeling through the propagation of target motion/trajectory information.

Figure 2: ODTrack Framework Architecture. The ODTrack pipeline takes video clips of arbitrary length, consisting of reference and search frames, as input. The model then utilizes a temporal token propagation attention mechanism to generate a temporal token for each video frame. These temporal tokens are subsequently propagated to the following frames in an auto-regressive manner, enabling cross-frame propagation of target trajectory information.

Approach
We introduce ODTrack, a new video-level framework that employs token sequence propagation for visual tracking, as shown in Fig.2. This section first describes the concept of video-level visual object tracking, followed by the introduction of the temporal token propagation attention mechanisms and how they are trained in a new design paradigm.

Problem Formulation
To provide a comprehensive understanding of our ODTrack framework, it is pertinent to first review previously prominent image-pair matching tracking methodologies (Bertinetto et al. 2016; Chen et al. 2021; Ye et al. 2022).
Given a pair of video frames, i.e., a reference frame $R \in \mathbb{R}^{3 \times H_r \times W_r}$ and a search frame $S \in \mathbb{R}^{3 \times H_s \times W_s}$, mainstream visual trackers $\Psi$ are formulated as $B \leftarrow \Psi : \{R, S\}$, where $B$ denotes the predicted box coordinates in the current search frame. If $\Psi$ is a conventional convolutional Siamese tracker (Li et al. 2019; Chen et al. 2020, 2021), it undergoes three stages, namely feature extraction, feature fusion, and bounding box prediction. If $\Psi$ is instead a transformer tracker (Ye et al. 2022; Cui et al. 2022; Chen et al. 2022), it consists solely of a backbone and a prediction head network, where the backbone integrates feature extraction and fusion. Specifically, the transformer tracker receives a series of non-overlapping image patches (each of resolution $p \times p$) as input. This means that a 2D reference-search image pair passes through a patch embedding layer to generate 1D image token sequences $\{f_r \in \mathbb{R}^{D \times N_r}, f_s \in \mathbb{R}^{D \times N_s}\}$, where $D$ is the token dimension, $N_r = H_r W_r / p^2$, and $N_s = H_s W_s / p^2$. These 1D image tokens are then concatenated and fed into an $L$-layer transformer encoder for feature extraction and relationship modeling. Each transformer layer $\delta$ contains a multi-head attention and a multi-layer perceptron. We formulate the forward process of the $l$-th transformer layer as follows:
$$f_{rs}^{l} = \delta^{l}(f_{rs}^{l-1}), \quad l = 1, 2, \ldots, L \tag{1}$$
where $f_{rs}^{l-1}$ denotes the concatenated token sequence of the reference-search image pair generated by the $(l-1)$-th transformer layer, and $f_{rs}^{l}$ represents the token sequence generated by the $l$-th transformer layer. With the modeling approach above, we can construct a concise and elegant tracker that achieves per-frame tracking. Nevertheless, this approach has a clear drawback: the resulting tracker focuses solely on intra-frame target matching and lacks the ability to establish the inter-frame associations necessary for tracking objects across a video stream.
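Equation (1) is simply an $L$-fold application of a transformer layer to the concatenated reference-search token sequence. The toy sketch below uses a single attention head with identity q/k/v projections and omits the MLP and residual connections, so it illustrates only the recursion $f_{rs}^{l} = \delta^{l}(f_{rs}^{l-1})$, not the real layer:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(tokens):
    # Toy single-head self-attention with identity Q/K/V projections:
    # each output token is a similarity-weighted mixture of all tokens.
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q))
                  for k in tokens]
        w = softmax(scores)
        out.append([sum(wi * t[d] for wi, t in zip(w, tokens))
                    for d in range(len(q))])
    return out

def encoder(ref_tokens, search_tokens, num_layers=2):
    # Eq.(1): f^l_rs = delta^l(f^{l-1}_rs) over the concatenated sequence.
    f = ref_tokens + search_tokens
    for _ in range(num_layers):
        f = attention(f)  # a real layer adds an MLP and residuals; omitted here
    return f
```

Note how the reference and search tokens interact only inside one encoder call: nothing in `encoder` carries state from one image pair to the next, which is the intra-frame limitation discussed above.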
Consequently, this limitation hinders research on video-level tracking algorithms. In this work, we aim to alleviate this challenge and propose a new design paradigm for video-level tracking algorithms. First, we extend the inputs of the tracking framework from the image-pair level to the video level for temporal modeling. Then, we introduce a new temporal token/prompt $T$ designed to propagate information about the appearance, spatio-temporal location, and trajectory of the target instance in a video sequence. Formally, we formulate video-level tracking as follows:
$$B \leftarrow \Psi : \{R_1, R_2, \ldots, R_k, S_1, S_2, \ldots, S_n, T\} \tag{2}$$
where $\{R_1, R_2, \ldots, R_k\}$ denotes the $k$ reference frames and $\{S_1, S_2, \ldots, S_n\}$ the $n$ search frames. Our video-level tracking framework receives video clips of arbitrary length to model the spatio-temporal trajectory relationships of the target object. We describe the proposed core module in more detail in the next section.

Video-Level Tracking Pipeline
Fig.2 gives an overview of our ODTrack framework. In this section, our focus lies in constructing a video-level tracking pipeline. We model the entire video as a continuous sequence and decode the localization of the target frame by frame in an auto-regressive manner. First, we present a novel video sequence sampling strategy designed to meet the input requirements of the video-level model. Subsequently, to capture the spatio-temporal trajectory information of the target instance within the video sequences, we introduce two simple yet effective temporal token propagation attention mechanisms.

Video Sequence Sampling Strategy
Most existing trackers (Yan et al. 2021a; Cui et al. 2022; Ye et al. 2022) commonly sample image pairs within a short-term interval, such as 50, 100, or 200 frames.
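The interface change in Eq.(2), from $\Psi(\{R, S\})$ to $\Psi(\{R_1, \ldots, R_k, S_1, \ldots, S_n, T\})$, can be made concrete with a dummy $\Psi$ that threads a temporal token through the search frames. The peak-picking "network" below is a placeholder of our own, not ODTrack's; only the input/output contract mirrors Eq.(2):

```python
def psi_step(refs, search, token):
    # One application of Psi: a toy "response map" (the raw search frame) is
    # reduced to a peak location, and the token is refreshed with that location
    # so the next frame starts from the previous estimate.
    # (refs and the incoming token are unused by this toy stand-in.)
    width = len(search[0])
    flat = [v for row in search for v in row]
    idx = max(range(len(flat)), key=flat.__getitem__)
    box = (idx // width, idx % width)
    return box, [float(box[0]), float(box[1])]

def track_video(refs, searches, token):
    # Eq.(2): B <- Psi({R_1..R_k, S_1..S_n, T}) -- one box per search frame,
    # with the temporal token T threaded through the whole clip.
    boxes = []
    for s in searches:
        box, token = psi_step(refs, s, token)
        boxes.append(box)
    return boxes, token
```

The key structural point is that `token` survives the loop: the output of frame $t$ becomes part of the input of frame $t+1$, unlike the stateless image-pair formulation.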
However, this sampling approach poses a potential limitation, as these trackers fail to capture the long-term motion variations of the tracked object, thereby constraining the robustness of tracking algorithms in long-term scenarios. To obtain richer spatio-temporal trajectory information of the target instance from long-term video sequences, we deviate from the traditional short-term image-pair sampling method and propose a new video sequence sampling strategy. Specifically, we establish a larger sampling interval and randomly extract multiple video frames within this interval to form video clips $\{R_1, R_2, \ldots, R_k, S_1, S_2, \ldots, S_n\}$ of arbitrary length. Although this sampling approach may seem simplistic, it enables us to approximate the content of the entire video sequence, which is crucial for video-level modeling.

Temporal Token Propagation Attention Mechanism
Instead of employing a complex video transformer (Xie et al. 2023) as the foundational framework for encoding video content, we approach the design from a new perspective by utilizing a simple 2D transformer architecture, i.e., a 2D ViT (Dosovitskiy et al. 2021). To construct an elegant instance-level inter-frame correlation mechanism, it is imperative to extend the original 2D attention operations to extract and integrate video-level features. In our approach, we design two temporal token attention mechanisms based on the concept of compression-propagation, namely the concatenated token attention mechanism and the separated token attention mechanism, as shown in Fig.3(left). The core design involves injecting additional information into the attention operations, such as more video sequence content and temporal token vectors, enabling them to extract richer spatio-temporal trajectory information of the target instance.
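A minimal sketch of this sampling strategy follows. The default interval of 1200 echoes the sampling-range ablation later in the paper; the function and argument names are our own placeholders:

```python
import random

def sample_video_clip(num_frames, interval=1200, k=3, n=2, rng=None):
    # Choose a window of at most `interval` consecutive frames, then randomly
    # draw k reference and n search frame indices inside that window.
    rng = rng or random.Random()
    start = rng.randrange(max(1, num_frames - interval + 1))
    window = list(range(start, min(start + interval, num_frames)))
    refs = sorted(rng.sample(window, k))
    searches = sorted(rng.sample(window, n))
    return refs, searches
```

Compared with a fixed short-term offset (e.g. 50 or 100 frames between reference and search), the drawn indices can be far apart within the window, so a single training clip can span long-term motion variations.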
Figure 3: Left: the architecture of the temporal token propagation attention mechanism. Right: illustration of online token propagation. (a) The original reference-search attention mechanism; (b) and (c) different variants of the proposed temporal token propagation attention mechanisms. $R$ is a single reference frame, $R_{1...k}$ denotes the reference frames of length $k$, $S$ represents the current search frame, and $T$ is the temporal token sequence of the current video frame.

In Fig.3(a), the original attention operation commonly employs an image pair as input, and the process of modeling their relationship can be represented as $f = \mathrm{Attn}([R, S])$. In this paradigm, the tracker can only engage in independent interactions within each image pair, establishing limited temporal correlations. In Fig.3(b), the proposed concatenated token attention mechanism extends the input to the aforementioned video sequence, enabling dense modeling of spatio-temporal relationships across frames. Inspired by the contextual nature of language formed through concatenation, we apply the concatenation operation to establish context for video sequences as well. Its formula can be represented as:
$$f_t = \mathrm{Attn}([R_1, R_2, \ldots, R_k, S_t, T_t]) = \sum_{s''t''} v_{s''t''} \cdot \frac{\exp\langle q_{st}, k_{s''t''}\rangle}{\sum_{s't'} \exp\langle q_{st}, k_{s't'}\rangle} \tag{3}$$
where $T_t$ is the temporal token sequence of the $t$-th video frame, $[\cdots, \cdots]$ denotes concatenation among tokens, and $q_{st}$, $k_{st}$, and $v_{st}$ are spatio-temporal linear projections of the concatenated feature tokens. It is worth noting that we introduce a temporal token for each video frame, with the aim of storing the target trajectory information of the sampled video sequence.
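Equations (3) and (4) can be sketched together: a softmax attention over the concatenated sequence $[R_1, \ldots, R_k, S_t, T_t]$, with the resulting token handed to the next frame. For readability, each frame is shrunk to a single token and the learned q/k/v projections are identities; both are assumptions of this sketch, not of the paper:

```python
import math

def attend(tokens, query):
    # Eq.(3): softmax-weighted mixture of `tokens` for one query vector
    # (identity projections stand in for the learned q/k/v maps).
    scores = [sum(q * k for q, k in zip(query, t)) / math.sqrt(len(query))
              for t in tokens]
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    return [sum((wi / z) * t[d] for wi, t in zip(w, tokens))
            for d in range(len(query))]

def propagate(refs, searches, token):
    # Eq.(4): T_{t+1} = T_t + T_empty, then attention over [R_1..R_k, S_t, T_t];
    # the token is carried frame to frame auto-regressively.
    empty = [0.0] * len(token)
    history = []
    for s_t in searches:
        token = [a + b for a, b in zip(token, empty)]  # hand-off to next frame
        token = attend(refs + [s_t, token], token)     # token absorbs context
        history.append(token)
    return history
```

With a zero initial token, the first attention step degenerates to a uniform average of the concatenated tokens; from the second frame on, the query carries the previous frame's summary, which is the auto-regressive behavior the paper relies on.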
In other words, we compress the current spatio-temporal trajectory information of the target into a token vector, which is propagated to the subsequent video frames. Once the target information is extracted by the temporal token, we propagate the token vector from the $t$-th frame to the $(t+1)$-th frame in an auto-regressive manner, as shown in Fig.3(right). First, the $t$-th temporal token $T_t$ is added to the $(t+1)$-th empty token $T_{empty}$, yielding the updated content token $T_{t+1}$ for the $(t+1)$-th frame, which is then propagated as input to the subsequent frames. Formally, the propagation process is:
$$T_{t+1} = T_t + T_{empty}, \qquad f_{t+1} = \mathrm{Attn}([R_1, R_2, \ldots, R_k, S_{t+1}, T_{t+1}]) \tag{4}$$

| Method | Resolution | Params | FLOPs | Speed | Device |
|---|---|---|---|---|---|
| SeqTrack-B | 384 × 384 | 89M | 148G | 11fps | 2080Ti |
| ODTrack-B | 384 × 384 | 92M | 73G | 32fps | 2080Ti |

Table 1: Comparison of model parameters, FLOPs, and inference speed.

In this new design paradigm, we can employ temporal tokens as prompts for inferring the next frame, leveraging past information to guide future inference. Moreover, our model implicitly propagates appearance, localization, and trajectory information of the target instance through online token propagation. This significantly improves the tracking performance of the video-level framework. On the other hand, as illustrated in Fig.3(c), the proposed separated token attention mechanism decomposes the attention operation into three sub-processes: self-information aggregation among reference frames, cross-information aggregation between reference and search frames, and cross-information aggregation between the temporal token and the video sequence. This decomposition improves the computational efficiency of the model to a certain extent, while the token propagation aligns with the aforementioned procedure.

Discussions with Online Update.
Most previous tracking algorithms combine online updating methods to train a spatio-temporal tracking model, such as adding an extra score quality branch (Yan et al. 2021a) or an IoU prediction branch (Danelljan et al. 2019). These typically require complex optimization processes and update decision rules. In contrast, we avoid complex online update strategies through online iterative propagation of token sequences, enabling more efficient model representation and computation.

Prediction Head and Loss Function
For the prediction head network, we employ a conventional classification head and a bounding box regression head. The classification score map $\mathbb{R}^{1 \times \frac{H_s}{p} \times \frac{W_s}{p}}$, bounding box size $\mathbb{R}^{2 \times \frac{H_s}{p} \times \frac{W_s}{p}}$, and offset size $\mathbb{R}^{2 \times \frac{H_s}{p} \times \frac{W_s}{p}}$ are obtained through three sub-convolutional networks, respectively. We adopt the focal loss (Lin et al. 2017) as the classification loss $L_{cls}$, and the L1 loss and GIoU loss (Rezatofighi et al. 2019) as the regression loss. The total loss $L$ can be formulated as:
$$L = L_{cls} + \lambda_1 L_1 + \lambda_2 L_{GIoU} \tag{5}$$
where $\lambda_1 = 5$ and $\lambda_2 = 2$ are the regularization parameters. Since we use video segments for modeling, the task loss is computed independently for each video frame, and the final loss is averaged over the number of search frames.

Experiments
Implementation Details
Training. We use the ViT-Base (Dosovitskiy et al. 2021) model as the visual encoder, with parameters initialized from MAE (He et al. 2022) pre-training.
The training data includes LaSOT (Fan et al. 2019), GOT-10k (Huang, Zhao, and Huang 2021), TrackingNet (Müller et al. 2018), and COCO (Lin et al. 2014). As input, we take a video sequence including three reference frames of 192 × 192 pixels and two search frames of 384 × 384 pixels. We employ AdamW to optimize the network parameters, with an initial learning rate of $1 \times 10^{-5}$ for the backbone and $1 \times 10^{-4}$ for the rest, and set the weight decay to $10^{-4}$. We train for 300 epochs, with 60,000 image pairs randomly sampled in each epoch. The learning rate drops by a factor of 10 after 240 epochs. The model is trained on a server with two 80GB Tesla A100 GPUs with a batch size of 8.

Inference. To align with the training setting, we incorporate three reference frames at equal intervals into our tracker during the inference phase. Concurrently, search frames and temporal token vectors are input frame by frame. Further, we compare model parameters, FLOPs, and inference speed, as shown in Tab.1. The proposed ODTrack is tested on a 2080Ti, where it runs at 32 fps.

Figure 4: AUC scores of different attributes on LaSOT (comparing Stark-ST101, SeqTrack-B, OSTrack-384, Mixformer-22K, and ODTrack-B).

Comparison with the SOTA
GOT10K. GOT10K is a large-scale tracking dataset that contains more than 10,000 video sequences. The GOT10K benchmark defines a protocol in which trackers use only its training set for training. We follow this protocol to train our framework.
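The equal-interval reference selection used at inference can be sketched as below. This is a hedged helper of our own: the paper states only that three reference frames are taken at equal intervals, so the exact rounding rule here is an assumption:

```python
def reference_indices(current_frame, k=3):
    # Pick k frame indices spread at (roughly) equal intervals over the video
    # seen so far, i.e. over [0, current_frame].
    if current_frame <= 0 or k == 1:
        return [0] * k
    step = current_frame / (k - 1)
    return [round(i * step) for i in range(k)]
```

As tracking progresses, the selected references drift forward with the video, so the tracker's reference set always spans the full observed history rather than a fixed early snapshot.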
As shown in Tab.2, the proposed method outperforms previous trackers and exhibits very competitive performance (77.0% AO) compared to the previous best-performing tracker ARTrack (75.5% AO). These results demonstrate that one benefit of our ODTrack comes from the video-level sampling strategy, which is designed to release the potential of the video-level modeling framework.

LaSOT. LaSOT is a large-scale long-term tracking benchmark that includes 1120 sequences for training and 280 sequences for testing. As shown in Tab.2, compared to most other tracking algorithms, our ODTrack-B achieves a new state-of-the-art result. For example, compared with the latest ARTrack, our method achieves 0.5%, 1.8%, and 1.5% gains in AUC, PNorm, and P score, respectively. Furthermore, Fig.4 shows the results of attribute evaluation, demonstrating that our tracker outperforms other tracking methods on multiple challenge attributes. These results show that the token propagation mechanism helps the model learn trajectory information about the target instance and greatly improves target localization in long-term tracking scenarios.

| Method | GOT10K∗ AO | GOT10K∗ SR0.5 | GOT10K∗ SR0.75 | LaSOT AUC | LaSOT PNorm | LaSOT P | TrackingNet AUC | TrackingNet PNorm | TrackingNet P | LaSOText AUC | LaSOText PNorm | LaSOText P |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SiamFC (Bertinetto et al. 2016) | 34.8 | 35.3 | 9.8 | 33.6 | 42.0 | 33.9 | 57.1 | 66.3 | 53.3 | 23.0 | 31.1 | 26.9 |
| ATOM (Danelljan et al. 2019) | 55.6 | 63.4 | 40.2 | 51.5 | 57.6 | 50.5 | 70.3 | 77.1 | 64.8 | 37.6 | 45.9 | 43.0 |
| SiamRPN++ (Li et al. 2019) | 51.7 | 61.6 | 32.5 | 49.6 | 56.9 | 49.1 | 73.3 | 80.0 | 69.4 | 34.0 | 41.6 | 39.6 |
| DiMP (Bhat et al. 2019) | 61.1 | 71.7 | 49.2 | 56.9 | 65.0 | 56.7 | 74.0 | 80.1 | 68.7 | 39.2 | 47.6 | 45.1 |
| SiamRCNN (Voigtlaender et al. 2020) | 64.9 | 72.8 | 59.7 | 64.8 | 72.2 | – | 81.2 | 85.4 | 80.0 | – | – | – |
| Ocean (Zhang et al. 2020) | 61.1 | 72.1 | 47.3 | 56.0 | 65.1 | 56.6 | – | – | – | – | – | – |
| STMTrack (Fu et al. 2021) | 64.2 | 73.7 | 57.5 | 60.6 | 69.3 | 63.3 | 80.3 | 85.1 | 76.7 | – | – | – |
| TrDiMP (Wang et al. 2021a) | 67.1 | 77.7 | 58.3 | 63.9 | – | 61.4 | 78.4 | 83.3 | 73.1 | – | – | – |
| TransT (Chen et al. 2021) | 67.1 | 76.8 | 60.9 | 64.9 | 73.8 | 69.0 | 81.4 | 86.7 | 80.3 | – | – | – |
| Stark (Yan et al. 2021a) | 68.8 | 78.1 | 64.1 | 67.1 | 77.0 | – | 82.0 | 86.9 | – | – | – | – |
| SBT-B (Xie et al. 2022) | 69.9 | 80.4 | 63.6 | 65.9 | – | 70.0 | – | – | – | – | – | – |
| Mixformer (Cui et al. 2022) | 70.7 | 80.0 | 67.8 | 69.2 | 78.7 | 74.7 | 83.1 | 88.1 | 81.6 | – | – | – |
| TransInMo (Guo et al. 2022) | – | – | – | 65.7 | 76.0 | 70.7 | 81.7 | – | – | – | – | – |
| OSTrack (Ye et al. 2022) | 73.7 | 83.2 | 70.8 | 71.1 | 81.1 | 77.6 | 83.9 | 88.5 | 83.2 | 50.5 | 61.3 | 57.6 |
| AiATrack (Gao et al. 2022) | 69.6 | 80.0 | 63.2 | 69.0 | 79.4 | 73.8 | 82.7 | 87.8 | 80.4 | 47.7 | 55.6 | 55.4 |
| SeqTrack (Chen et al. 2023) | 74.5 | 84.3 | 71.4 | 71.5 | 81.1 | 77.8 | 83.9 | 88.8 | 83.6 | 50.5 | 61.6 | 57.5 |
| GRM (Gao, Zhou, and Zhang 2023) | 73.4 | 82.9 | 70.4 | 69.9 | 79.3 | 75.8 | 84.0 | 88.7 | 83.3 | – | – | – |
| VideoTrack (Xie et al. 2023) | 72.9 | 81.9 | 69.8 | 70.2 | – | 76.4 | 83.8 | 88.7 | 83.1 | – | – | – |
| ARTrack (Xing et al. 2023) | 75.5 | 84.3 | 74.3 | 72.6 | 81.7 | 79.1 | 85.1 | 89.1 | 84.8 | 51.9 | 62.0 | 58.5 |
| ODTrack-B | 77.0 | 87.9 | 75.1 | 73.2 | 83.2 | 80.6 | 85.1 | 90.1 | 84.9 | 52.4 | 63.9 | 60.1 |
| ODTrack-L | 78.2 | 87.2 | 77.3 | 74.0 | 84.2 | 82.3 | 86.1 | 91.0 | 86.7 | 53.9 | 65.4 | 61.7 |

Table 2: Comparison with state-of-the-arts on four popular benchmarks: GOT10K, LaSOT, TrackingNet, and LaSOText, where ∗ denotes trackers trained only on GOT10K. Best in bold, second best underlined.

TrackingNet. TrackingNet is a large-scale short-term dataset that provides a test set with 511 video sequences. As reported in Tab.2, compared with the high-performance tracker SeqTrack, our method achieves good tracking results, outperforming it by 1.2%, 1.3%, and 1.3% in success, normalized precision, and precision score, respectively. This demonstrates that our ODTrack exhibits strong generalization capabilities.

LaSOText. LaSOText is the extended version of LaSOT, which comprises 150 long-term video sequences. As reported in Tab.2, our method achieves good tracking results that outperform most compared trackers. For example, our tracker obtains an AUC of 52.4%, a PNorm score of 63.9%, and a P score of 60.1%, outperforming ARTrack by 0.5%, 1.9%, and 1.6%, respectively.
These results match our expectation that video-level modeling provides more stable object localization capabilities in complex scenarios.

VOT2020. VOT2020 (Kristan, Leonardis, et al. 2020) contains 60 challenging sequences and uses binary segmentation masks as the ground truth. We use Alpha-Refine (Yan et al. 2021b) as a post-processing network for ODTrack to predict segmentation masks. As shown in Tab.3, our ODTrack-B and -L achieve the best results, with EAO of 58.1% and 60.5% on mask evaluation, respectively.

| Method | EAO (↑) | Accuracy (↑) | Robustness (↑) |
|---|---|---|---|
| SiamMask | 0.321 | 0.624 | 0.648 |
| Ocean | 0.430 | 0.693 | 0.754 |
| D3S | 0.439 | 0.699 | 0.769 |
| SuperDiMP | 0.305 | 0.492 | 0.745 |
| AlphaRef | 0.482 | 0.754 | 0.777 |
| STARK | 0.505 | 0.759 | 0.819 |
| SBT | 0.515 | 0.752 | 0.825 |
| Mixformer | 0.535 | 0.761 | 0.854 |
| SeqTrack-B | 0.522 | – | – |
| ODTrack-B | 0.581 | 0.764 | 0.877 |
| ODTrack-L | 0.605 | 0.761 | 0.902 |

Table 3: State-of-the-art comparison on VOT2020.

TNL2K and OTB100. We evaluate our tracker on the TNL2K (Wang et al. 2021b) and OTB100 (Wu, Lim, and Yang 2015) benchmarks, which include 700 and 100 video sequences, respectively. The results in Tab.4 show that ODTrack-B and -L achieve the best performance on the TNL2K and OTB100 benchmarks, demonstrating the effectiveness of the temporal token propagation attention mechanism.

| Benchmark | ATOM | Ocean | DiMP | TransT | OSTrack | Mixformer | SeqTrack | ARTrack | ODTrack-B | ODTrack-L |
|---|---|---|---|---|---|---|---|---|---|---|
| TNL2K | 40.1 | 38.4 | 44.7 | 50.7 | 55.9 | – | 56.4 | 59.8 | 60.9 | 61.7 |
| OTB100 | 66.3 | 68.4 | 68.4 | 69.6 | – | 70.0 | – | – | 72.3 | 72.4 |

Table 4: Comparison with state-of-the-art methods on the TNL2K and OTB100 benchmarks in AUC score.

Ablation Study
Importance of token propagation. To investigate the effect of token propagation in Eq.(4), we perform experiments with and without temporal token propagation in Tab.5(a), where "w/o Token" denotes the experiment employing the video-level sampling strategy without token propagation. From the second and third columns, it can be observed that removing the token propagation mechanism decreases the AUC score by 1.2%. This result indicates that token propagation plays a crucial role in cross-frame target association.

Different token propagation methods. We conduct experiments to validate the effectiveness of the two proposed token propagation methods in the video-level tracking framework in Tab.5(a). We observe that both the separate and concatenation methods achieve significant performance improvements, with the concatenation method showing slightly better results. This demonstrates the effectiveness of both attention mechanisms.

| LaSOT | Baseline | w/o Token | Separate | Concatenation | 2 | 3 | 4 | 5 | 200 | 400 | 800 | 1200 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AUC | 70.1 | 71.0 | 72.2 | 72.8 | 72.8 | 73.1 | 72.5 | 72.0 | 72.8 | 73.1 | 73.0 | 73.0 |
| PNorm | 80.2 | 81.1 | 82.3 | 83.0 | 83.0 | 83.0 | 82.9 | 82.1 | 83.0 | 83.5 | 83.3 | 83.1 |
| P | 76.9 | 78.0 | 79.2 | 80.3 | 80.3 | 80.4 | 79.9 | 79.3 | 80.3 | 80.6 | 80.4 | 80.1 |

Table 5: Ablation studies of different token propagation designs on the LaSOT benchmark; the columns group (a) method, (b) search sequence length, and (c) sampling range.

Figure 5: Qualitative comparison results of our tracker with three other SOTA trackers on the LaSOT benchmark.

The length of the search video clip. As shown in Tab.5(b), we ablate the impact of search video sequence length on tracking performance. When the length of the video clip increases from 2 to 3, the AUC metric improves by 0.3%. However, further increasing the sequence length does not improve performance, indicating that overly long search video clips impose a learning burden on the model. Hence, an appropriate search video clip length should be chosen.

The sampling range. To validate the impact of the sampling range on algorithm performance, we conduct experiments on the sampling range of video frames in Tab.5(c).
When the sampling range is expanded from 200 to 1200, there is a noticeable improvement in performance on the AUC metric, indicating that the video-level framework can learn target trajectory information from a larger sampling range.
Visualization and Limitation
Visualization. To intuitively show the effectiveness of the proposed method, especially in complex scenarios including similar distractors, we visualize the tracking results of our ODTrack and three advanced trackers on the LaSOT dataset. As shown in Fig.5, due to its ability to densely propagate trajectory information of the target, our tracker far outperforms the latest tracker SeqTrack on these sequences.

Figure 6: The attention map of the temporal token attention operation.

Furthermore, we visualize the attention map of the temporal token attention operation, as shown in Fig.6. We can observe that the temporal token continuously propagates and attends to the motion trajectory information of the object, which aids our tracker in accurately localizing the target instance.
Limitation. This work models the entire video as a sequence and decodes the localization of the instance frame by frame in an auto-regressive manner. Despite achieving remarkable results, our video-level modeling method is a global approximation due to constraints in GPU resources, and we are still unable to construct the framework in a cost-effective manner. A promising solution would involve improving the computational efficiency and lightweight modeling of the transformer.
Conclusion
In this work, we present ODTrack, a new video-level framework for visual object tracking. We reformulate visual tracking as a token propagation task that densely associates the contextual relationships across video frames in an autoregressive manner.
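The token propagation idea recapped above can be sketched numerically: one temporal token attends over each frame's feature tokens in turn, and its updated state is carried to the next frame. This is our illustration with toy single-head attention, not the authors' implementation; all names and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    # Scaled dot-product attention for a single query vector.
    d = query.shape[-1]
    w = softmax(keys @ query / np.sqrt(d))
    return w @ values

def propagate_token(frames, init_token):
    """Carry one temporal token through a clip: at each frame the token
    attends over that frame's feature tokens, and the updated token is
    passed on to the next frame (a toy stand-in for the paper's scheme)."""
    token = init_token
    trajectory = []
    for tokens in frames:          # tokens: (N, C) features of one frame
        token = attend(token, tokens, tokens)
        trajectory.append(token)
    return trajectory

rng = np.random.default_rng(0)
frames = [rng.standard_normal((16, 32)) for _ in range(3)]
out = propagate_token(frames, rng.standard_normal(32))
```

Each element of `out` is the token state after one frame, i.e. the cross-frame association signal the paper propagates.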
Furthermore, we propose a video sequence sampling strategy and two temporal token propagation attention mechanisms, enabling the proposed framework to simplify video-level spatio-temporal modeling and avoid intricate online update strategies. Extensive experiments show that our ODTrack achieves promising results on seven tracking benchmarks. We hope that this work inspires further research in video-level tracking modeling.
Acknowledgements
This work is supported by the National Natural Science Foundation of China (No.U23A20383, 61972167 and U21A20474), the Project of Guangxi Science and Technology (No.2022GXNSFDA035079 and 2023GXNSFDA026003), the Guangxi "Bagui Scholar" Teams for Innovation and Research Project, the Guangxi Collaborative Innovation Center of Multi-source Information Integration and Intelligent Processing, the Guangxi Talent Highland Project of Big Data Intelligence and Application, and the Research Project of Guangxi Normal University (No.2022TD002).
References
Bertinetto, L.; Valmadre, J.; Henriques, J. F.; Vedaldi, A.; and Torr, P. H. S. 2016. Fully-Convolutional Siamese Networks for Object Tracking. In ECCV Workshops, 850–865. Bhat, G.; Danelljan, M.; Gool, L. V.; and Timofte, R. 2019. Learning Discriminative Model Prediction for Tracking. In ICCV, 6181–6190. Cao, Z.; Huang, Z.; Pan, L.; Zhang, S.; Liu, Z.; and Fu, C. 2022. TCTrack: Temporal Contexts for Aerial Tracking. In CVPR, 14778–14788. Chen, B.; Li, P.; Bai, L.; Qiao, L.; Shen, Q.; Li, B.; Gan, W.; Wu, W.; and Ouyang, W. 2022. Backbone is All Your Need: A Simplified Architecture for Visual Object Tracking. In ECCV (22), 375–392. Chen, X.; Peng, H.; Wang, D.; Lu, H.; and Hu, H. 2023. SeqTrack: Sequence to Sequence Learning for Visual Object Tracking. CVPR, abs/2304.14394. Chen, X.; Yan, B.; Zhu, J.; Wang, D.; Yang, X.; and Lu, H. 2021. Transformer Tracking. In CVPR, 8126–8135.
Chen, Z.; Zhong, B.; Li, G.; Zhang, S.; and Ji, R. 2020. Siamese Box Adaptive Network for Visual Tracking. In CVPR, 6667–6676. Cui, Y.; Jiang, C.; Wang, L.; and Wu, G. 2022. MixFormer: End-to-End Tracking with Iterative Mixed Attention. In CVPR, 13598–13608. Danelljan, M.; Bhat, G.; Khan, F. S.; and Felsberg, M. 2019. ATOM: Accurate Tracking by Overlap Maximization. In CVPR, 4660–4669. Danelljan, M.; Gool, L. V.; and Timofte, R. 2020. Probabilistic Regression for Visual Tracking. In CVPR, 7181–7190. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR. Fan, H.; Lin, L.; Yang, F.; Chu, P.; Deng, G.; Yu, S.; Bai, H.; Xu, Y.; Liao, C.; and Ling, H. 2019. LaSOT: A High-Quality Benchmark for Large-Scale Single Object Tracking. In CVPR, 5374–5383. Fu, Z.; Liu, Q.; Fu, Z.; and Wang, Y. 2021. STMTrack: Template-Free Visual Tracking With Space-Time Memory Networks. In CVPR, 13774–13783. Gao, S.; Zhou, C.; Ma, C.; Wang, X.; and Yuan, J. 2022. AiATrack: Attention in Attention for Transformer Visual Tracking. In ECCV (22), 146–164. Gao, S.; Zhou, C.; and Zhang, J. 2023. Generalized Relation Modeling for Transformer Tracking. CVPR, abs/2303.16580. Guo, D.; Shao, Y.; Cui, Y.; Wang, Z.; Zhang, L.; and Shen, C. 2021. Graph Attention Tracking. In CVPR, 9543–9552. Guo, M.; Zhang, Z.; Fan, H.; Jing, L.; Lyu, Y.; Li, B.; and Hu, W. 2022. Learning Target-aware Representation for Visual Tracking via Informative Interactions. In IJCAI, 927–934. Han, W.; Dong, X.; Khan, F. S.; Shao, L.; and Shen, J. 2021. Learning To Fuse Asymmetric Feature Maps in Siamese Trackers. In CVPR, 16570–16580. He, K.; Chen, X.; Xie, S.; Li, Y.; Dollár, P.; and Girshick, R. B. 2022. Masked Autoencoders Are Scalable Vision Learners. In CVPR, 15979–15988. Huang, L.; Zhao, X.; and Huang, K. 2021.
GOT-10k: A Large High-Diversity Benchmark for Generic Object Tracking in the Wild. IEEE Trans. Pattern Anal. Mach. Intell., 43(5): 1562–1577. Kristan, M.; Leonardis, A.; et al. 2020. The Eighth Visual Object Tracking VOT2020 Challenge Results. In ECCV Workshops (5), volume 12539 of Lecture Notes in Computer Science, 547–601. Springer. Li, B.; Wu, W.; Wang, Q.; Zhang, F.; Xing, J.; and Yan, J. 2019. SiamRPN++: Evolution of Siamese Visual Tracking With Very Deep Networks. In CVPR, 4282–4291. Li, B.; Yan, J.; Wu, W.; Zhu, Z.; and Hu, X. 2018. High Performance Visual Tracking With Siamese Region Proposal Network. In CVPR, 8971–8980. Liao, B.; Wang, C.; Wang, Y.; Wang, Y.; and Yin, J. 2020. PG-Net: Pixel to Global Matching Network for Visual Tracking. In ECCV, 429–444. Lin, T.; Goyal, P.; Girshick, R. B.; He, K.; and Dollár, P. 2017. Focal Loss for Dense Object Detection. In ICCV, 2999–3007. Lin, T.; Maire, M.; Belongie, S. J.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. In ECCV, 740–755. Meinhardt, T.; Kirillov, A.; Leal-Taixé, L.; and Feichtenhofer, C. 2022. TrackFormer: Multi-Object Tracking with Transformers. In CVPR, 8834–8844. Müller, M.; Bibi, A.; Giancola, S.; Al-Subaihi, S.; and Ghanem, B. 2018. TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild. In ECCV, 310–327. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I. D.; and Savarese, S. 2019. Generalized Intersection Over Union: A Metric and a Loss for Bounding Box Regression. In CVPR, 658–666. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In NIPS, 5998–6008. Voigtlaender, P.; Luiten, J.; Torr, P. H. S.; and Leibe, B. 2020. Siam R-CNN: Visual Tracking by Re-Detection. In CVPR, 6577–6587.
Wang, N.; Zhou, W.; Wang, J.; and Li, H. 2021a. Transformer meets tracker: Exploiting temporal context for robust visual tracking. In CVPR, 1571–1580. Wang, X.; Shu, X.; Zhang, Z.; Jiang, B.; Wang, Y.; Tian, Y.; and Wu, F. 2021b. Towards More Flexible and Accurate Object Tracking With Natural Language: Algorithms and Benchmark. In CVPR, 13763–13773. Wu, Y.; Lim, J.; and Yang, M. 2015. Object Tracking Benchmark. IEEE Trans. Pattern Anal. Mach. Intell., 37(9): 1834– 1848. Xie, F.; Chu, L.; Li, J.; Lu, Y.; and Ma, C. 2023. VideoTrack: Learning to Track Objects via Video Transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 22826–22835. Xie, F.; Wang, C.; Wang, G.; Cao, Y.; Yang, W.; and Zeng, W. 2022. Correlation-Aware Deep Tracking. In CVPR, 8741–8750. Xing, W.; Yifan, B.; Yongchao, Z.; Dahu, S.; and Yihong, G. 2023. Autoregressive Visual Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9697–9706. Yan, B.; Peng, H.; Fu, J.; Wang, D.; and Lu, H. 2021a. Learning Spatio-Temporal Transformer for Visual Tracking. In ICCV, 10428–10437. Yan, B.; Zhang, X.; Wang, D.; Lu, H.; and Yang, X. 2021b. Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box Estimation. In CVPR, 5289–5298. Computer Vision Foundation / IEEE. Ye, B.; Chang, H.; Ma, B.; Shan, S.; and Chen, X. 2022. Joint Feature Learning and Relation Modeling for Tracking: A One-Stream Framework. In ECCV (22), 341–357. Yu, Y.; Xiong, Y.; Huang, W.; and Scott, M. R. 2020. Deformable Siamese Attention Networks for Visual Object Tracking. In CVPR, 6727–6736. Zeng, F.; Dong, B.; Zhang, Y.; Wang, T.; Zhang, X.; and Wei, Y. 2022. MOTR: End-to-End Multiple-Object Tracking with Transformer. In ECCV (27), 659–675. Zhang, L.; Gonzalez-Garcia, A.; van de Weijer, J.; Danelljan, M.; and Khan, F. S. 2019. Learning the Model Update for Siamese Trackers. In ICCV, 4009–4018. 
Zhang, Z.; Peng, H.; Fu, J.; Li, B.; and Hu, W. 2020. Ocean: Object-Aware Anchor-Free Tracking. In ECCV, 771–787.
PVALane: Prior-Guided 3D Lane Detection with View-Agnostic Feature Alignment
Zewen Zheng1,2*, Xuemin Zhang1, Yongqiang Mou1†, Xiang Gao1,2, Chengxin Li1,3, Guoheng Huang2, Chi-Man Pun4, Xiaochen Yuan5
1X Lab, GAC R&D CENTER, Guangdong, China
2Guangdong University of Technology, Guangdong, China
3South China Normal University, Guangdong, China
4University of Macau, Macau, China
5Macao Polytechnic University, Macao, China
yongqiang.mou@gmail.com
Abstract
Monocular 3D lane detection is essential for a reliable autonomous driving system and has recently been rapidly developing. Existing popular methods mainly employ a predefined 3D anchor for lane detection based on front-viewed (FV) space, aiming to mitigate the effects of view transformations. However, the perspective geometric distortion between FV and 3D space in this FV-based approach introduces extremely dense anchor designs, which ultimately leads to confusing lane representations. In this paper, we introduce a novel prior-guided perspective on lane detection and propose an end-to-end framework named PVALane, which utilizes 2D prior knowledge to achieve precise and efficient 3D lane detection. Since 2D lane predictions can provide strong priors for lane existence, PVALane exploits FV features to generate sparse prior anchors with potential lanes in 2D space. These dynamic prior anchors help PVALane to achieve distinct lane representations and effectively improve the precision of PVALane due to the reduced lane search space. Additionally, by leveraging these prior anchors and representing lanes in both FV and bird-eye-viewed (BEV) spaces, we effectively align and merge semantic and geometric information from FV and BEV features. Extensive experiments conducted on the OpenLane and ONCE-3DLanes datasets demonstrate the superior performance of our method compared to existing state-of-the-art approaches and exhibit excellent robustness.
*Work done during the internship at GAC R&D CENTER. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Illustration of the difference among (a) the BEV-based methods, (b) the FV-based methods, and (c) our PVALane.

Introduction
As a fundamental module in autonomous driving systems, robust lane detection has received a lot of research attention and made unprecedented progress. However, front-viewed (FV) lane detection models that can only provide prediction results in the 2D image space are not directly applicable to complex real-world scenarios (Neven et al. 2018; Pan et al. 2018; Liu et al. 2021b). As a promising direction, 3D lane detection is proposed to tackle the above problem. It aims to construct a model that extracts lane features from a monocular 2D image and then detects the lanes in the ground coordinate system. Fueled by the success of 3D object detection (Liang et al. 2018; Li et al. 2022), current 3D lane detection models (Garnett et al. 2019; Guo et al. 2020; Chen et al. 2022) often detect 3D lanes by transforming FV features to bird-eye-viewed (BEV) space using inverse perspective mapping (IPM). Due to the similar appearance and geometry of different lanes on the top-view plane, the BEV-based method exhibits geometric translational invariance.
Although the representation in BEV space provides better geometric information, the dependence on IPM leads to several unexpected problems. As illustrated in Figure 1(a), the flat-ground assumption of IPM makes the BEV and 3D ground-truth (GT) spaces misaligned in uphill or downhill cases, and thus the method may not generalize to rough ground scenes with varying visual appearances. Secondly, IPM inevitably results in the loss of the original semantic and contextual information within the FV features.

Figure 2: Visualization of activation maps for two image samples under FV and BEV. We can observe that BEV tends to perceive geometric properties (e.g., parallel) and lanes on the far side, while FV focuses more on capturing near and contextual information (e.g., roadside) for lane detection. (Best viewed in color.)

To address these challenges, recent approaches (Yan et al. 2022; Huang et al. 2023) have shifted their focus towards 3D lane prediction using semantic information directly from FV. Specifically, this method acquires lane representations by projecting predefined 3D anchors onto corresponding locations in the FV image for sampling and then performs regression based on these anchors. While this approach eliminates the effects of view transformations, it introduces a perspective geometric distortion due to the ground-to-image projection. As shown in Figure 1(b), the anchor projections on the FV image are indistinguishable in the distance but significantly scattered in the nearby areas, thus necessitating a dense anchor design to mitigate these geometric variances.
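The uphill/downhill misalignment of flat-ground IPM described above can be reproduced with a toy pinhole camera (all numbers below are illustrative, not from the paper): back-projecting under the z = 0 assumption recovers flat-ground points exactly, but badly misplaces a point sitting on a 0.5 m rise.

```python
import numpy as np

# Minimal pinhole setup: camera 1.5 m above the road; flat-ground IPM
# assumes every pixel back-projects onto the ground plane.
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])

def project(p_cam):
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def ipm_backproject(uv, cam_height):
    # Invert the projection under the flat-ground assumption:
    # intersect the viewing ray with the plane y_cam = cam_height
    # (the camera's y axis points down toward the road here).
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    scale = cam_height / ray[1]
    return ray * scale

cam_h = 1.5
flat_pt = np.array([2.0, cam_h, 20.0])        # point on flat ground
hill_pt = np.array([2.0, cam_h - 0.5, 20.0])  # same spot, 0.5 m uphill

flat_rec = ipm_backproject(project(flat_pt), cam_h)
hill_rec = ipm_backproject(project(hill_pt), cam_h)
```

With this toy geometry the uphill point is recovered at roughly 30 m instead of 20 m, which is exactly the BEV/GT misalignment the text attributes to the flat-ground assumption.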
However, the introduction of these dense anchors might result in feature overlap and false detections in dynamic scenarios, which, in turn, limits its effectiveness for geometric-oriented tasks like 3D lane detection. In this paper, we introduce PVALane, a prior-guided 3D lane detection model that utilizes 2D prior knowledge to accurately estimate the 3D lane locations, as illustrated in Figure 1(c). Instead of utilizing empirical dense anchors for lane detection directly (Huang et al. 2023), we establish Prior Anchors, which are obtained almost cost-free from the 2D prediction and applied for downstream 3D lane detection. Specifically, on top of the backbone features, we construct a Prior Anchor Network (PAN) which projects predefined 3D anchors into 2D space to calculate their objectness probabilities, thereby filtering out anchors with no potential lanes and keeping the remainder as prior anchors. This prior anchor explicitly provides a strong prior indicating lane localization and ensures that only high-quality prior anchors are used for 3D lane detection. Therefore, it can effectively improve the precision of PVALane due to the reduced lane search space. Furthermore, the PAN requires only a few additional fully-connected layers and operates directly on the FV feature, so it can be integrated as an easy-to-deploy module and trained in an end-to-end manner. Based on the intuitive insight that different view features tend to utilize different view properties (e.g., contextual and geometric) and region information for lane detection (shown in Figure 2), we further propose a Prior-Guided View-agnostic Feature Alignment Module (PVFA). Specifically, PVFA projects the prior anchors into the FV and BEV spaces, acquires their corresponding features through sampling, and subsequently fuses them together. Since the prior anchors are defined in 3D space and are extremely sparse, they can effectively narrow the association between the FV and BEV sampling features.
In addition, this shared sampled feature space decouples the downstream lane detection from the view space, making PVALane inherently view-agnostic and extendable to multi-view/cross-view scenarios. The contributions of this work are summarized as follows:
• We introduce a prior perspective on lane detection and propose an end-to-end PVALane framework, which utilizes 2D prior knowledge to achieve precise and efficient 3D lane detection.
• We propose a novel prior anchor, which is obtained almost cost-free from accurate 2D predictions and explicitly provides a strong prior indicating lane localization.
• We develop a view-agnostic feature alignment method that leverages the prior anchor to effectively align and merge both geometric and semantic information across different views.
• Experiments show that PVALane achieves new state-of-the-art performance on two popular 3D lane detection benchmarks and exhibits excellent robustness.
Related Work
Monocular 3D Lane Detection
Monocular 3D lane detection (Garnett et al. 2019; Efrat et al. 2020; Chen et al. 2022; Yan et al. 2022; Luo et al. 2023) is a challenging task that has attracted the interest of the computer vision community in recent years. 3DLaneNet (Garnett et al. 2019) is the pioneering work in this domain, which transforms the front-viewed (FV) features to bird-eye-viewed (BEV) for lane detection. Persformer (Chen et al. 2022) proposes a spatial feature transformation based on deformable attention (Zhu et al. 2021) for robust BEV features. BEV-LaneDet (Wang et al. 2023) proposes a virtual camera that guarantees the consistency of spatial relations among cameras. While representing lanes in BEV space offers better geometric properties, the flat-ground assumption of IPM introduces several challenges. Anchor3DLane (Huang et al. 2023) directly detects 3D
However, this BEV-free approach requires a dense anchor design to mitigate the perspective geometric distortion, which leads to confusing lane representations. Therefore, our PVALane utilizes sparse prior anchors obtained from FV images to achieve distinct lane representations and thus eliminate the impact of redundant anchors. Furthermore, PVALane leverages both semantic and structural information from FV and BEV to achieve accurate lane detection.
Prior-guided Lane Detection
Prior knowledge is ubiquitous in lane detection, as most methods (Neven et al. 2018; Pan et al. 2018; Liu et al. 2021a) aim to enhance model performance by leveraging existing information. However, those prior-guided approaches often lead to an inevitable increase in model complexity due to the introduction of additional modules or supervised losses. CondLaneNet (Liu et al. 2021a) uses the pre-extracted instance origin to guide the underlying visual features to accurately describe the shape prediction of lane instances. Gen-LaneNet (Guo et al. 2020) used a segmentation subnetwork to generate lane segmentation from 2D images and to subsequently support the regression of 3D lanes. CLGo (Liu et al. 2022) leverages pre-estimated camera pitches and heights to transform raw images into BEV images, enabling precise 3D lane detection. Different from the above methods, our approach introduces a nearly cost-free prior anchor to reduce the lane search space, thereby significantly reducing the complexity of the downstream lane detection model.
Methodology
Lane Representation
Similar to (Garnett et al. 2019; Chen et al. 2022), we define 3D lanes as a series of 3D points with N_p pre-defined fixed y-coordinates. Specifically, given a 3D lane set L_{3D} = \{l_i\}_{i=1}^{N_l} containing N_l lanes, we formulate the i-th lane as:

l_i = \{(x_{(i,k)}, y_k, z_{(i,k)}, vis_{(i,k)})\}_{k=1}^{N_p},  (1)

where x_{(i,k)}, y_k, z_{(i,k)} and the binary vis_{(i,k)} denote the ground coordinates and visibility of the k-th point of the current lane.
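The lane representation of Eq. (1) can be mirrored in a small data structure: fixed y-steps shared by all lanes, with per-point x/z coordinates and a binary visibility flag. The container below is a hypothetical sketch (the class name and y-step values are ours, not the paper's).

```python
from dataclasses import dataclass
import numpy as np

# Fixed y-coordinates y_k shared by every lane (illustrative values).
Y_STEPS = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 40.0, 50.0, 60.0])

@dataclass
class Lane3D:
    x: np.ndarray    # (N_p,) lateral positions x_{(i,k)}
    z: np.ndarray    # (N_p,) heights z_{(i,k)}
    vis: np.ndarray  # (N_p,) binary visibility vis_{(i,k)}

    def points(self):
        # Materialize the visible (x, y, z) triples of the lane.
        mask = self.vis.astype(bool)
        return np.stack([self.x[mask], Y_STEPS[mask], self.z[mask]], axis=1)

lane = Lane3D(x=np.linspace(-1, 1, 8),
              z=np.zeros(8),
              vis=np.array([1, 1, 1, 1, 1, 1, 0, 0]))
pts = lane.points()
```

Because the y-steps are fixed, a lane is fully described by its x/z offsets and visibility, which is what the anchors and regression heads later operate on.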
Architecture
The overall architecture of our PVALane is illustrated in Figure 3. Given a 2D front-viewed (FV) image as input, our model first extracts FV features with a ResNet (He et al. 2016) backbone. These features are then passed through the prior anchor network, which generates lane probabilities indicating whether the corresponding 3D anchor contains a lane target. To filter redundant 3D anchors, we further select the anchors that score above a predefined threshold as the prior anchors. Subsequently, the FV and bird's-eye-view (BEV) features are encoded by two specially designed view encoders and aligned in a shared sampled feature space by projecting the prior anchors to the corresponding views. Finally, the aligned features are passed into the prediction head to obtain the 3D lane predictions.
Prior Anchor Network
Although 2D features are not directly suitable for geometric-oriented tasks due to perspective geometric distortion, we consider using them to enhance 3D lane detection from a prior-guided perspective. On the one hand, this initial FV feature contains rich semantic and contextual information. On the other hand, it does not require a view transformation, enabling the quick generation of prior information. As demonstrated in the majority of the two-stage object detection literature (Ren et al. 2015), a lightweight end-to-end prior network can quickly generate region priors, thereby significantly reducing the complexity of the object detection task. A visual explanation of the Prior Anchor Network is shown in Figure 4.
3D Lane Anchors. Inspired by (Huang et al. 2023), we define lane anchors as 3D anchors in the ground coordinate system to better adapt to 3D lane shapes. Specifically, given a set of fixed y-positions y = \{y_k\}_{k=1}^{N_p}, the j-th 3D anchor A_j = \{q_{(j,k)}\}_{k=1}^{N_p} defines a 3D lane line using two vectors (x_j, z_j), where x_j, z_j ∈ R^{N_p} are horizontal and vertical offsets relative to the positions of the N_p predefined points.
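To make the anchor definition above concrete, here is a toy construction of a small anchor set: each anchor is a straight ray in the ground plane, parameterized by a lateral start position and a yaw angle. This layout is purely illustrative; the actual anchor design follows Huang et al. 2023 and is much denser.

```python
import numpy as np

# Fixed y-positions y_k (illustrative).
Y_STEPS = np.linspace(5.0, 60.0, 8)

def make_anchors(starts, yaws_deg):
    """Build anchors A_j = (x_j, z_j): per-point horizontal offsets x_j
    along the fixed y-steps, with flat vertical offsets z_j = 0."""
    anchors = []
    for x0 in starts:
        for yaw in np.deg2rad(yaws_deg):
            x = x0 + np.tan(yaw) * Y_STEPS   # lateral offset per y_k
            z = np.zeros_like(Y_STEPS)       # flat anchors
            anchors.append((x, z))
    return anchors

anchors = make_anchors(starts=[-3.0, 0.0, 3.0], yaws_deg=[-10.0, 0.0, 10.0])
```

Each anchor is thus a candidate 3D lane line whose shape the network later refines via the offset regression heads.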
Anchor Projecting. To obtain the lane probability of the 3D anchors, we first project them onto the 2D plane of the FV feature F_{fv} ∈ R^{H_{fv} × W_{fv} × C} as 2D anchors to extract their corresponding features. Specifically, given the j-th 3D anchor A_j = (x_j, y, z_j) as an example, we define the projection as:

\tilde{z}_j [u_j, v_j, 1]^T = K · T_{g→c} [x_j, y, z_j, 1]^T,  (2)

where K ∈ R^{3×3} is the intrinsic matrix, T_{g→c} ∈ R^{3×4} denotes the transformation matrix from ground coordinates to camera coordinates, and \tilde{z}_j denotes the depth to the camera plane. Denoting the above projection as P_{g2fv}(·), we obtain the anchor feature F_a^j by sampling the FV feature F_{fv}:

F_a^j = F_{fv}(P_{g2fv}(A_j)) ∈ R^{N_p × C}.  (3)

Prior Anchor Generation. Since the anchor feature F_a^j contains the semantic feature of the corresponding 3D anchor, we further apply a primary classification head to F_a^j to obtain lane classification scores p_{pri}^j ∈ R^{1+N_c}, where N_c represents the number of lane categories. Then, \tilde{p}_{pri}^j is calculated from the classification score to supply the potential lane probability:

\tilde{p}_{pri}^j = 1 − 1(c_n = 0) S(p_{pri}^j) ∈ R^1,  (4)

where c_n ∈ {0, 1, ..., N_c}, 0 represents the non-lane category, 1(·) denotes an indicator function, and S(·) is the softmax function. Each value in \{\tilde{p}_{pri}^j\}_{j=1}^{N_a} indicates the probability that the corresponding anchor contains a lane.
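The projection of Eq. (2) can be sketched with toy camera matrices (K and T_g2c below are illustrative, not calibrated values). It also reproduces the perspective distortion discussed in the introduction: equally spaced anchor points crowd together as they approach the horizon.

```python
import numpy as np

K = np.array([[1000., 0., 640.],
              [0., 1000., 360.],
              [0., 0., 1.]])
T_g2c = np.array([[1., 0., 0., 0.],    # toy extrinsics: axes aligned,
                  [0., 0., -1., 1.5],  # camera 1.5 m above the ground,
                  [0., 1., 0., 0.]])   # ground y becomes camera depth

def project_anchor(x, y, z):
    """Project 3D anchor points (x_j, y, z_j) to FV pixels (u_j, v_j),
    following Eq. (2): z~ [u, v, 1]^T = K T_g2c [x, y, z, 1]^T."""
    pts_h = np.stack([x, y, z, np.ones_like(x)])   # (4, N_p)
    uvw = K @ (T_g2c @ pts_h)                      # (3, N_p)
    depth = uvw[2]
    return uvw[:2] / depth, depth

y = np.array([10.0, 20.0, 40.0])                   # increasingly distant
uv, depth = project_anchor(np.zeros(3), y, np.zeros(3))
```

The pixel gap between consecutive projected points shrinks as depth doubles, which is why a fixed pixel-level anchor density over-samples the far field and under-samples the near field.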
Therefore, to select high-quality 3D anchors as prior anchors, we simply filter out the low-probability anchors based on a threshold τ:

A_{pri} = \{A_k\}_{k ∈ \{Ψ_j(\tilde{p}_{pri}^j, τ)\}},  (5)

where Ψ_j(·) denotes an operator that returns the values of j satisfying \tilde{p}_{pri}^j > τ. By incorporating the prior provided by this efficient 2D prediction head, the complexity of 3D lane detection can be significantly reduced. This enables the model to prioritize the more challenging task of regression.

Figure 3: The overall architecture of PVALane. The prior anchor network narrows the lane search space by generating high-quality and sparse prior anchors. Afterward, a prior-guided view-agnostic feature alignment module is applied to align and merge geometric and semantic information from different view features.

Figure 4: Illustration of the Prior Anchor Network.

Loss Function. To eliminate the effect of the positive and negative sample imbalance caused by perspective geometric distortion, we adopt focal loss (Lin et al.
2017b) for training the classification:

L_{pri} = −Σ_{j=1}^{N_a} α (1 − p_t^j)^γ log p_t^j,  (6)

where p_t^j is the predicted probability of the current category, and α and γ are the hyperparameters of the focal loss, which are set to 0.25 and 2 in our experiments, respectively.
View-specific Feature Encoding
To leverage the semantic and geometric information present in FV and BEV, PVALane simultaneously learns features from both views. Given the distinct view representations of FV and BEV features, the model incorporates two specialized encoders to capture the specific information of each view independently.
FV Context-aware Encoder. For each FV feature F_{fv} ∈ R^{H_{fv} × W_{fv} × C}, we introduce a transformer encoder (Vaswani et al. 2017) with a projection layer to capture global semantic and contextual information:

\tilde{F}_{fv} = P(E(F_{fv})) ∈ R^{H_{fv} × W_{fv} × C},  (7)

where E(·) denotes a transformer encoder layer and P(·) is a linear projection. Such an encoder enables the model to incorporate a larger contextual field and improve its overall scene understanding by leveraging high-level semantic information.
BEV Geometric-aware Encoder. To fully leverage the geometric properties (i.e., translational invariance) of the top-view plane, we propose a geometry-aware BEV encoder in BEV space. Specifically, given a point p_{fv} with coordinates (u_f, v_f) in the multi-scale FV features \{F_{fv}^l\}_{l=1}^{N_f}, IPM maps the point p_{fv} to the corresponding point p_{bev} with coordinates (x_b, y_b) in BEV space:

[x_b, y_b, 0]^T = S_{f→b} · H_{c→g} · K^{−1} · [u_f, v_f, 1]^T,  (8)

where S_{f→b} is the scale matrix between the front view and BEV, and H_{c→g} ∈ R^{3×3} denotes the homography matrix from camera coordinates to ground coordinates.

Figure 5: Illustration of the 3D prior anchor in FV and BEV projections.

Similar to the FPN (Lin et al.
2017a) structure, two BEV features from adjacent pyramidal layers, F_{bev}^{l−1} and F_{bev}^l, are merged after applying a downsampling layer R_l to the spatial dimensions of the previous layer. A convolution block C_l(·) then processes this mixture to propagate geometric information to higher layers in a coarse-to-fine manner. The process is defined as:

\tilde{F}_{bev}^l = C_l(F_{bev}^l ⊕ R_l(\tilde{F}_{bev}^{l−1})) ∈ R^{H_{bev}^l × W_{bev}^l × C_l},  (9)

where ⊕ is the concatenation operation. In the top view, sharing knowledge between these scales has the potential to enhance the model's robustness in handling complex scenes.
Prior-Guided View-agnostic Feature Alignment
By leveraging the prior anchors provided by the PAN, we further introduce the Prior-Guided View-agnostic Feature Alignment Module (PVFA) to effectively align and merge the rich information from both views in a shared sampled space.
Prior-Guided Projection and Sampling. Taking the j-th prior anchor A_{pri}^j = \{q_{pri}^{(j,k)}\}_{k=1}^{N_p} as an example, we project it into the FV and BEV spaces separately, as shown in Figure 5. Similar to the PAN, we use P_{g2fv}(A_{pri}^j) to denote the projection of the prior anchor A_{pri}^j in the FV, which is described in detail in Eq. (2). Then, we employ bilinear interpolation to sample FV anchor features from the output feature \tilde{F}_{fv} of the FV encoder at the projection points P_{g2fv}(A_{pri}^j):

\hat{F}_{fv}^j = \tilde{F}_{fv}(P_{g2fv}(A_{pri}^j)) ∈ R^{N_p × C_{pri}}.  (10)

By defining the projection into the BEV as P_{g2bev}(·) = P_{fv2bev}(P_{g2fv}(·)), we further define the sampling process for BEV-space features as:

\hat{F}_{bev}^j = \tilde{F}_{bev}^{N_f}(P_{g2bev}(A_{pri}^j)) ∈ R^{N_p × C_{pri}}.  (11)

View-agnostic Feature Alignment. Since \hat{F}_{fv}^j and \hat{F}_{bev}^j are sampled using a uniform prior anchor projection, they can be regarded as view-agnostic and are thus easily aligned with the guidance of the prior anchor.
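The sampling in Eqs. (10)-(11) reduces, at its core, to bilinear interpolation of a feature map at continuous projected anchor locations. A minimal sketch (the function name and shapes are our assumptions):

```python
import numpy as np

def bilinear_sample(feat, uv):
    """Sample a (H, W, C) feature map at continuous (u, v) locations
    with bilinear interpolation, clamping at the borders."""
    u, v = uv[:, 0], uv[:, 1]
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = u0 + 1, v0 + 1
    wu, wv = u - u0, v - v0
    H, W = feat.shape[:2]
    u0, u1 = np.clip(u0, 0, W - 1), np.clip(u1, 0, W - 1)
    v0, v1 = np.clip(v0, 0, H - 1), np.clip(v1, 0, H - 1)
    top = (1 - wu)[:, None] * feat[v0, u0] + wu[:, None] * feat[v0, u1]
    bot = (1 - wu)[:, None] * feat[v1, u0] + wu[:, None] * feat[v1, u1]
    return (1 - wv)[:, None] * top + wv[:, None] * bot

# On a feature map that is linear in u, interpolation is exact.
H, W, C = 8, 8, 1
feat = np.tile(np.arange(W, dtype=float)[None, :, None], (H, 1, C))
out = bilinear_sample(feat, np.array([[2.5, 3.0], [0.25, 6.0]]))
```

Running the same routine on the FV and BEV encoder outputs at the same projected anchor points is what puts both views into the shared (N_p, C) sampled space.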
Specifically, we first transform the N_p points in \hat{F}_{fv}^j and \hat{F}_{bev}^j into the channel dimension and merge the FV and BEV anchor features using a fusion module Φ_{fus}(·):

F_{fus}^j = Φ_{fus}(F(\hat{F}_{fv}^j), F(\hat{F}_{bev}^j)) ∈ R^{N_p C_{pri}},  (12)

where F(·) is the flatten operation. This enhanced feature contains information from different views, which enables the network to infer the 3D structure of a road scene. Furthermore, the alignment of FV and BEV features using a sparse prior anchor significantly reduces the association space between them, thus improving the model efficiency.
Prediction Loss. Given the fused features corresponding to the j-th prior anchor, we utilize a classification head and a regression head to predict its lane probability p^j ∈ R^{1+N_c}, x-axis and z-axis offsets Δx^j, Δz^j ∈ R^{N_p}, and the visibility of each point vis^j ∈ R^{N_p}, respectively. Consequently, we define our 3D lane proposal based on the prior anchor A_{pri}^j = (x^j, y, z^j) as P^j = (p^j, x^j + Δx^j, y, z^j + Δz^j, vis^j). Given N_{pos} pairs of positive proposals \{P^i\}_{i=1}^{N_{pos}} and corresponding ground truths \{G^i\}_{i=1}^{N_{pos}} with G^i = (\tilde{x}^i, \tilde{z}^i, \tilde{vis}^i), the loss function can be written as:

L_{3D} = −Σ_{j=1}^{N_{pri}} α (1 − p_t^j)^γ log p_t^j + Σ_{i=1}^{N_{pos}} Σ_{k=1}^{N_p} \tilde{vis}^{(i,k)} · ‖(x^{(i,k)} + Δx^{(i,k)}) − \tilde{x}^{(i,k)}‖_1 + Σ_{i=1}^{N_{pos}} Σ_{k=1}^{N_p} \tilde{vis}^{(i,k)} · ‖(z^{(i,k)} + Δz^{(i,k)}) − \tilde{z}^{(i,k)}‖_1 + Σ_{i=1}^{N_{pos}} Σ_{k=1}^{N_p} ‖\tilde{vis}^{(i,k)} − vis^{(i,k)}‖_1.  (13)

To further enhance the representations of the FV and BEV, we introduce a segmentation loss L_{seg}, inspired by LaneNet (Neven et al. 2018). The total loss function is defined as:

L_{total} = L_{3D} + λ_{pri} L_{pri} + λ_{seg} L_{seg},  (14)

where λ_{pri} and λ_{seg} are set to 1.0 and 0.1 in our experiments, respectively.
Experiment
Datasets and Evaluation Metrics
The experiments are conducted on two popular benchmark datasets for 3D lane detection: OpenLane (Chen et al. 2022) and ONCE-3DLanes (Yan et al. 2022). In our experiments, we apply the maximum F1 score and the close (0-40m) and far (40-100m) X/Z errors to evaluate the performance of the model.
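The two ingredients of the training objective in Eqs. (13)-(14) above, the focal classification term and the visibility-masked L1 regression term, can be sketched as follows. This is a simplified reconstruction (single positive proposal, no segmentation term), not the authors' code.

```python
import numpy as np

def focal_term(p_t, alpha=0.25, gamma=2.0):
    # Classification part of Eq. (13): focal loss on the probability of
    # the correct class; easy examples (p_t near 1) are down-weighted.
    p_t = np.asarray(p_t)
    return float(-(alpha * (1.0 - p_t) ** gamma * np.log(p_t)).sum())

def regression_term(x_a, dx, z_a, dz, vis_pred, x_gt, z_gt, vis_gt):
    # Visibility-masked L1 terms of Eq. (13) for one positive proposal:
    # invisible ground-truth points contribute no coordinate error, and
    # the predicted visibility itself receives an L1 penalty.
    l_x = np.abs((x_a + dx) - x_gt)
    l_z = np.abs((z_a + dz) - z_gt)
    l_vis = np.abs(vis_gt - vis_pred)
    return float((vis_gt * (l_x + l_z)).sum() + l_vis.sum())

# Two anchor points; the second one is invisible in the ground truth,
# so its (large) coordinate error is masked out entirely.
reg = regression_term(x_a=np.array([1.0, 2.0]), dx=np.array([0.5, 9.0]),
                      z_a=np.zeros(2), dz=np.zeros(2),
                      vis_pred=np.array([1.0, 0.0]),
                      x_gt=np.array([1.5, 0.0]), z_gt=np.zeros(2),
                      vis_gt=np.array([1.0, 0.0]))
cls_easy = focal_term([0.95, 0.99])
cls_hard = focal_term([0.30, 0.40])
```

The masking keeps occluded or out-of-range lane points from dominating the regression, while the focal term keeps the many filtered-out negatives from dominating the classification.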
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7601

Table 1: Comparison with state-of-the-art methods on the OpenLane validation set. "Space Trans" denotes space transform. "Cate Acc" means category accuracy. Our PVALane achieves state-of-the-art performance on F1 score and category accuracy.

Methods | Backbone | Space Trans | F1(%) | Cate Acc(%) | X err(m) | Z err(m) | FPS
3D-LaneNet (Garnett et al. 2019) | VGG-16 | IPM | 44.1 | - | 0.479/0.572 | 0.367/0.443 | -
GenLaneNet (Guo et al. 2020) | ERFNet | IPM | 32.3 | - | 0.591/0.684 | 0.411/0.521 | -
PersFormer (Chen et al. 2022) | EfficientNet | Transformer | 50.5 | 92.3 | 0.485/0.553 | 0.364/0.431 | -
Anchor3DLane (Huang et al. 2023) | ResNet-18 | - | 53.7 | 90.9 | 0.276/0.311 | 0.107/0.138 | -
BEV-LaneDet (Wang et al. 2023) | ResNet-34 | MLP | 58.4 | - | 0.309/0.659 | 0.244/0.631 | 102
LATR (Luo et al. 2023) | ResNet-50 | Transformer | 61.9 | 92.0 | 0.219/0.259 | 0.075/0.104 | 11
PVALane (Ours) | ResNet-18 | IPM | 61.2 | 93.0 | 0.249/0.263 | 0.094/0.122 | 108
PVALane (Ours) | ResNet-50 | IPM | 62.7 | 93.4 | 0.232/0.259 | 0.092/0.118 | 53
PVALane (Ours) | Swin-B | IPM | 63.4 | 93.5 | 0.226/0.257 | 0.093/0.119 | 31

Table 2: Comparison with state-of-the-art methods under different scenarios. "Mean" denotes the average F1 score of all scenarios.

Methods | Backbone | Mean | Up&Down | Curve | Extreme Weather | Night | Intersection | Merge&Split
3D-LaneNet (Garnett et al. 2019) | VGG-16 | 41.7 | 40.8 | 46.5 | 47.5 | 41.5 | 32.1 | 41.7
GenLaneNet (Guo et al. 2020) | ERFNet | 26.4 | 25.4 | 33.5 | 28.1 | 18.7 | 21.4 | 31.0
PersFormer (Chen et al. 2022) | EfficientNet | 47.3 | 42.4 | 55.6 | 48.6 | 46.6 | 40.0 | 50.7
Anchor3DLane (Huang et al. 2023) | ResNet-18 | 50.1 | 46.7 | 57.2 | 52.5 | 47.8 | 45.4 | 51.2
BEV-LaneDet (Wang et al. 2023) | ResNet-34 | 53.8 | 48.7 | 63.1 | 53.4 | 53.4 | 50.3 | 53.7
LATR (Luo et al. 2023) | ResNet-50 | 58.3 | 55.2 | 68.2 | 57.1 | 55.4 | 52.3 | 61.5
PVALane (Ours) | ResNet-18 | 57.5 | 52.6 | 65.7 | 59.5 | 56.5 | 52.2 | 58.7
PVALane (Ours) | ResNet-50 | 59.0 | 54.1 | 67.3 | 62.0 | 57.2 | 53.4 | 60.0
PVALane (Ours) | Swin-B | 60.1 | 56.1 | 67.7 | 64.0 | 58.6 | 53.6 | 60.8
Our PVALane achieves a significant improvement in extremely challenging scenarios (e.g., Extreme Weather and Night).

Table 3: Comparison with state-of-the-art methods on the ONCE-3DLanes validation set. "P", "R", and "CD" denote precision, recall, and CD error, respectively.

Method | F1(%) | R(%) | P(%) | CD(m)
3D-LaneNet | 44.73 | 35.16 | 61.46 | 0.127
PersFormer | 74.33 | 69.18 | 80.30 | 0.074
Anchor3DLane | 74.87 | 69.71 | 80.85 | 0.064
PVALane (Ours) | 76.35 | 70.83 | 82.81 | 0.059

Implementation Details
We adopt ResNet-50 (He et al. 2016) with ImageNet (Deng et al. 2009) pre-trained weights as the CNN backbone. The anchor filtering threshold τ in Eq. (5) is set to 0.2, and the maximum number of prior anchors is set to 1000. For ResNet, the features after block1 are extracted to construct the 4 pyramidal layers of the BEV encoder, and the block4 feature is passed into the FV encoder to obtain the FV features. Four A100 GPUs are used to train the model, and the batch size is set to 32. In addition, PVALane is trained in an end-to-end manner using the Adam optimizer (Kingma and Ba 2017) with a learning rate of 2e-4. During training, λpri and λseg in Eq. (14) are set to 1.0 and 0.1, respectively.
Main Results
We compare our approach with state-of-the-art methods: 3D-LaneNet (Garnett et al. 2019), GenLaneNet (Guo et al. 2020), PersFormer (Chen et al. 2022), Anchor3DLane (Huang et al. 2023), BEV-LaneDet (Wang et al. 2023), and LATR (Luo et al. 2023).
OpenLane We present results on the OpenLane validation set in Table 1, from which it can be seen that PVALane achieves state-of-the-art results on F1 score and category accuracy. Using ResNet-50 as the backbone, we outperform BEV-LaneDet and LATR by 4.3% and 0.8% in F1 score, respectively. Furthermore, by utilizing the Swin Transformer as the backbone, PVALane achieves a further boost in performance. As shown in Table 2, taking the average F1 score of all scenarios as the metric, we outperform BEV-LaneDet and LATR by 5.2% and 0.7%, respectively.
In addition, our method achieves a significant improvement in extremely challenging scenarios (e.g., Extreme Weather and Night), demonstrating its robustness. To demonstrate the efficiency of PVALane, we conduct experiments on the inference speed of PVALane with various backbones, as shown in Table 1. Using ResNet-18 as the backbone, PVALane achieves a high speed of 108 FPS, meeting the real-time requirements of autonomous driving.
ONCE-3DLanes In Table 3, we present results on the ONCE-3DLanes dataset. Specifically, PVALane outperforms state-of-the-art methods by 1.48% in F1 score and achieves a significant improvement in precision. This indicates that PVALane is capable of filtering redundant anchors in a prior-guided manner, thereby reducing false detections compared to dense anchor-based approaches.
Qualitative Results To better demonstrate our method, we visualize the detection results during the testing phase, as shown in Figure 6. It can be found that PVALane significantly reduces false-positive lanes and shows more precise detection than Anchor3DLane. In addition, we further visualize the generation process of the prior anchor in Figure 7. Based on the prior knowledge provided by the FV features, PVALane significantly reduces the number of anchors used for downstream lane detection.

Figure 6: Qualitative results of the proposed PVALane and the baseline on the OpenLane dataset. (a) FV image; (b) baseline; (c) ours (3D). The red and purple lanes indicate ground truth and prediction, respectively.

Table 4: Ablation study on the OpenLane validation set. "PAN" denotes the Prior Anchor Network. "PVFA" denotes Prior-Guided View-agnostic Feature Alignment. "PVFA†" denotes incorporating View-specific Feature Encoding.

Model | F1(%) | Cate Acc(%) | X err(m)
(I) Baseline | 55.3 | 88.5 | 0.317/0.338
(II) + PAN | 59.3 | 92.3 | 0.264/0.290
(III) + PVFA | 60.1 | 92.2 | 0.266/0.301
(IV) + PVFA† | 60.5 | 92.5 | 0.258/0.296
Ablation Study
In this section, we present an ablation analysis to validate the effectiveness of the proposed modules and justify our parameter choices. All experiments were conducted using the ResNet-18 backbone with a batch size of 8.
Different components As shown in Table 4, without PAN and PVFA, our baseline yields an F1 score of 55.3%. By simply introducing PAN to the baseline method, we achieve a significant boost in performance of 4.0%, improving the F1 score to 59.3%. Moreover, the addition of PVFA further improves the performance of our model to 60.1%, and enhancing the different views with view-specific (e.g., semantic or structural) information makes it better still: the view-specific encoding of FV and BEV features increases the F1 score to 60.5%.
Score threshold of the prior anchor To examine the influence of different numbers of prior anchors, we conducted a series of experiments with different score thresholds for prior anchor generation. As shown in Table 5, thresholds above 0.2 fail to fit the lanes since fewer than 50 anchors are selected as prior anchors, which leads to a lower F1 score. When the threshold is lower than 0.2, some extra interference may be introduced into the prior anchors, and therefore our model's performance is slightly hurt.

Figure 7: Visualization of the prior anchor generation. (a) FV image; (b) initial anchors (4431 anchors); (c) prior anchors (36-37 anchors).

Table 5: Ablation study on the threshold τ of prior anchor generation.

τ | Anchors | F1(%) | Cate Acc(%) | X err(m)
- | 1000 | 60.0 | 91.6 | 0.274/0.314
0.2 | 108 | 60.5 | 92.5 | 0.258/0.296
0.4 | 49 | 60.2 | 92.1 | 0.265/0.300
0.6 | 25 | 58.5 | 91.1 | 0.284/0.314

Table 6: Ablation study on the Prior-Guided View-agnostic Feature Alignment Module (PVFA).

Methods | F1(%) | Cate Acc(%) | X err(m)
PVFA w/o FV | 59.8 | 91.4 | 0.278/0.308
PVFA w/o BEV | 59.1 | 92.1 | 0.272/0.300
PVFA | 60.5 | 92.5 | 0.258/0.296
Therefore, PVALane equipped with about 100 prior anchors is desirable for guiding downstream 3D lane detection. As such, we set τ = 0.2 in all of our experiments.
Different view features Compared with simply using information from a single view (i.e., FV or BEV), the proposed PVFA achieves gains of 0.7% and 1.4% in F1 score (see Table 6). This demonstrates that FV and BEV often contain different information (i.e., semantic and geometric) in the feature space. By guiding the merging process through prior anchors, the PVFA can effectively utilize the information from both views to enhance the lane representation.
Conclusions
In this work, we propose PVALane, a simple yet accurate prior-guided framework tailored for 3D lane detection. By utilizing the strong prior provided by 2D predictions, a nearly cost-free prior anchor is generated to reduce the lane search space and thus achieve efficient 3D lane detection. Additionally, we further represent the lanes in different view spaces and align the semantic and geometric information from FV and BEV features under the guidance of the prior anchor. Extensive experiments demonstrate the superior performance of our method compared to existing state-of-the-art approaches.
Acknowledgments
This work was supported in part by the Key Areas Research and Development Program of Guangzhou under Grant 2023B01J0029, science and technology research in key areas in Foshan under Grant 2020001006832, the science and technology projects of Guangzhou under Grant 202007040006, the Guangdong Provincial Key Laboratory of Cyber-Physical System under Grant 2020B1212060069, the Guangdong Basic and Applied Basic Research Foundation under Grant 2023A1515012534, the National Statistical Science Research Project of China (No. 2022LY096), and the Science and Technology Development Fund, Macau SAR, under Grant 0087/2020/A2 and Grant 0141/2023/RIA2.
References
Chen, L.; Sima, C.; Li, Y.; Zheng, Z.; Xu, J.; Geng, X.; Li, H.; He, C.; Shi, J.; Qiao, Y.; et al. 2022. Persformer: 3d lane detection via perspective transformer and the openlane benchmark. In Proceedings of the European Conference on Computer Vision, 550–567. Springer.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 248–255.
Efrat, N.; Bluvstein, M.; Oron, S.; Levi, D.; Garnett, N.; and Shlomo, B. E. 2020. 3D-LaneNet+: Anchor Free Lane Detection using a Semi-Local Representation. arXiv:2011.01535.
Garnett, N.; Cohen, R.; Pe'er, T.; Lahav, R.; and Levi, D. 2019. 3d-lanenet: end-to-end 3d multiple lane detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2921–2930.
Guo, Y.; Chen, G.; Zhao, P.; Zhang, W.; Miao, J.; Wang, J.; and Choe, T. E. 2020. Gen-lanenet: A generalized and scalable approach for 3d lane detection. In Proceedings of the European Conference on Computer Vision, 666–681. Springer.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 770–778.
Huang, S.; Shen, Z.; Huang, Z.; Ding, Z.-h.; Dai, J.; Han, J.; Wang, N.; and Liu, S. 2023. Anchor3dlane: Learning to regress 3d anchors for monocular 3d lane detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17451–17460.
Kingma, D. P.; and Ba, J. 2017. Adam: A Method for Stochastic Optimization. arXiv:1412.6980.
Li, Z.; Wang, W.; Li, H.; Xie, E.; Sima, C.; Lu, T.; Qiao, Y.; and Dai, J. 2022. Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. In Proceedings of the European Conference on Computer Vision, 1–18. Springer.
Liang, M.; Yang, B.; Wang, S.; and Urtasun, R. 2018.
Deep continuous fusion for multi-sensor 3d object detection. In Proceedings of the European Conference on Computer Vision, 641–656.
Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; and Belongie, S. 2017a. Feature pyramid networks for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2117–2125.
Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017b. Focal loss for dense object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2980–2988.
Liu, L.; Chen, X.; Zhu, S.; and Tan, P. 2021a. Condlanenet: a top-to-down lane detection framework based on conditional convolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3773–3782.
Liu, R.; Chen, D.; Liu, T.; Xiong, Z.; and Yuan, Z. 2022. Learning to predict 3d lane shape and camera pose from a single image via geometry constraints. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 1765–1772.
Liu, R.; Yuan, Z.; Liu, T.; and Xiong, Z. 2021b. End-to-end lane shape prediction with transformers. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 3694–3702.
Luo, Y.; Zheng, C.; Yan, X.; Kun, T.; Zheng, C.; Cui, S.; and Li, Z. 2023. LATR: 3D Lane Detection from Monocular Images with Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7941–7952.
Neven, D.; De Brabandere, B.; Georgoulis, S.; Proesmans, M.; and Van Gool, L. 2018. Towards end-to-end lane detection: an instance segmentation approach. In 2018 IEEE Intelligent Vehicles Symposium (IV), 286–291. IEEE.
Pan, X.; Shi, J.; Luo, P.; Wang, X.; and Tang, X. 2018. Spatial as deep: Spatial cnn for traffic scene understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks.
Advances in neural information processing systems, 28.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Wang, R.; Qin, J.; Li, K.; Li, Y.; Cao, D.; and Xu, J. 2023. BEV-LaneDet: An Efficient 3D Lane Detection Based on Virtual Camera via Key-Points. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1002–1011.
Yan, F.; Nie, M.; Cai, X.; Han, J.; Xu, H.; Yang, Z.; Ye, C.; Fu, Y.; Mi, M. B.; and Zhang, L. 2022. Once-3dlanes: Building monocular 3d lane detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17143–17152.
Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2021. Deformable DETR: Deformable Transformers for End-to-End Object Detection. arXiv:2010.04159.
SpFormer: Spatio-Temporal Modeling for Scanpaths with Transformer
Wenqi Zhong*, Linzhi Yu*, Chen Xia†, Junwei Han, Dingwen Zhang†
School of Automation, Northwestern Polytechnical University, China
wenqizhong@mail.nwpu.edu.cn, 15160557827@mail.nwpu.edu.cn, cxia@nwpu.edu.cn, junweihan2010@gmail.com, zhangdingwen2006yyy@gmail.com
Abstract
Saccadic scanpath, a data representation of human visual behavior, has received broad interest in multiple domains. Scanpath is a complex eye-tracking data modality that includes the sequences of fixation positions and fixation duration, coupled with image information. However, previous methods usually face the spatial misalignment problem of fixation features and loss of critical temporal data (including temporal correlation and fixation duration). In this study, we propose a Transformer-based scanpath model, SpFormer, to alleviate these problems. First, we propose a fixation-centric paradigm to extract the aligned spatial fixation features and tokenize the scanpaths. Then, according to the visual working memory mechanism, we design a local meta attention to reduce the semantic redundancy of fixations and guide the model to focus on the meta scanpath. Finally, we progressively integrate the duration information and fuse it with the fixation features to solve the problem of ambiguous temporal location as the number of Transformer blocks increases. We conduct extensive experiments on four databases under three tasks. The SpFormer establishes new state-of-the-art results in distinct settings, verifying its flexibility and versatility in practical applications. The code can be obtained from https://github.com/wenqizhong/SpFormer.
Introduction
The human visual system (HVS) plays an essential role in human perception, receiving and processing the majority of information perceived by humans. Human visual behaviors provide valuable insights into the underlying mechanisms of the HVS.
*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Illustration of the unique and intricate data modality for scanpath and meta scanpath. The scanpath contains a sequence of fixation positions and fixation duration, coupled with image information. The meta scanpath stores only a few local fixations (approximately 3-4 visual items).

A comprehensive understanding of human vision can greatly benefit various downstream tasks, e.g., saliency prediction (Liu et al. 2015; Huang et al. 2015; Wang et al. 2019), salient object detection (Han et al. 2018; Fan et al. 2021), scanpath prediction (Xia et al. 2019), segmentation (Lang et al. 2022), onfocus detection (Zhang et al. 2022), and auxiliary diagnosis (Liu, Li, and Yi 2016; Xia et al. 2022). Two primary data types used to represent human visual behaviors are saliency maps and saccadic scanpaths (abbreviated as scanpaths). Saliency maps usually represent the static spatial probability distribution of attention for a group, while scanpaths typically depict the spatial-temporal attention distribution for an individual. Therefore, scanpaths are well-suited for individual-level analysis and prediction in various domains (Xia et al. 2022; Dalrymple et al. 2019; Mohammadhasani et al. 2020). Scanpath is a unique and intricate data modality, but existing methods for scanpaths often neglect this complexity. The scanpath is a multivariate time series composed of the fixation positions and the corresponding durations (see Fig. 1). Moreover, the scanpath is intimately related to the image stimuli that excite the attention behavior. Overall, the intricate properties of the scanpath can be summarized in three key aspects: 1) The scanpath represents a multivariate time series.
2) Each time step of the scanpath comprises a pair of fixation position and fixation duration. 3) The scanpath exhibits a coupling with the corresponding image stimuli. However, previous scanpath-based models have not comprehensively considered the above-mentioned properties. Generally speaking, existing methods in the medical and psychology fields typically conduct statistical analysis of scanpath representations based on hand-crafted features, e.g., the fixation ratio of different regions (Jones and Klin 2013). In the computer vision community, learning-based models for scanpaths have gradually emerged in recent years (Jiang and Zhao 2017; Dalrymple et al. 2019; Rahman et al. 2021). However, these methods are inadequate and underpowered to completely model the above-mentioned properties of scanpaths. As a result, these models face issues such as the spatial misalignment of fixation features and the neglect of the temporal correlation and duration of fixations, which prevent them from providing effective scanpath representations for downstream tasks. For more powerful representation, the model should incorporate specific inductive biases to effectively capture the spatio-temporal structure of scanpaths. To this end, we propose a novel model, SpFormer, which follows a new regime: extracting spatial fixation features, modeling the temporal correlation of fixations, and integrating fixation duration. Specifically, we first introduce a fixation-centric paradigm that crops the image region around each fixation to tokenize the scanpaths and extract spatially aligned fixation features without semantic deviation. Then, for fixation-to-fixation temporal correlation modeling, we introduce global temporal correlation with a temporal mask to reconstruct the causality of fixations and eliminate pseudo correlation. More importantly, we construct a local meta attention to reduce the semantic redundancy of fixations.
The generation of scanpaths is controlled by the visual working memory (VWM) mechanism (Epelboim and Suppes 2001), indicating that only a few local fixations can be stored at a time, which is what we call a meta scanpath (see Fig. 1). The VWM mechanism reduces semantic redundancy within the stored fixations due to limited memory capacity. However, typical global self-attention may ignore this local characteristic, leading to a slow training process and potential degradation of performance. Therefore, we develop a local meta attention that captures the correlation among stored fixations at each time step. Inspired by the VWM mechanism, the local meta attention filters redundant fixations and visual noise, enabling the model to concentrate more effectively on the meta scanpath. We also introduce a consistency loss to ensure that the meta attention of different fixations remains consistent with the meta scanpath. On the other hand, we integrate the cues of fixation duration into the model for a comprehensive scanpath representation. We observe that the fixation duration is often ignored in previous methods, resulting in incomplete information fusion (Liu, Li, and Yi 2016; Jiang and Zhao 2017; Xia et al. 2022). The fixation duration tends to provide additional cues for the visual allocation of fixations and helps filter the background noise. Based on this observation, we further leverage the fixation duration to adjust the weight of the fixation features. Unfortunately, the temporal location becomes ambiguous as the number of Transformer blocks increases. To address this, we propose a progressive decay mechanism that transitions the weights from distinct to ambiguous, adapting to the progressively ambiguous location. We conduct comprehensive experiments to evaluate the performance of SpFormer.
We also explore the feasibility and generalization of the proposed model on four databases from three tasks: recognition of autism spectrum disorder (ASD), toddler age prediction, and visual perceptual task prediction. Our primary contributions can be summarized as follows:
• We conclude the intricate properties of the scanpath modality and propose a novel scanpath-aware Transformer for capturing the spatio-temporal properties of scanpaths.
• We propose a local meta attention to guide the model to focus on local fixations and reduce semantic redundancy according to the VWM mechanism. Moreover, we progressively aggregate the fixation duration into the fixation features.
• We design a fixation-centric paradigm to tokenize the scanpaths and address the spatial misalignment problem between fixations and extracted fixation features.
• We present comprehensive experiments across three domains using four datasets. The results highlight that SpFormer achieves new state-of-the-art performance on four real-world scanpath-based tasks.
Related Work
Scanpath-based Application
Scanpath is a type of data representation that offers insights into human visual behavior, recording the eye movements captured by an eye tracker. Scanpaths have wide applications across various domains, including healthcare (Xia et al. 2022; Marsh and Williams 2006; Mohammadhasani et al. 2020), medical education (Kok and Jarodzka 2017), human-computer interaction (Piumsomboon et al. 2017), education, assisted driving, choice modeling, consumer psychology, and marketing (Klaib et al. 2021). The application paradigm of scanpaths can be broadly categorized into two aspects. Firstly, many factors like age, gender, neurodevelopment, and visual tasks have been examined to comprehend inter-group variances (Xia et al. 2022; Mastergeorge, Kahathuduwa, and Blume 2021).
Therefore, many studies have concentrated on classifying distinct groups, such as individuals with ASD and typically developing subjects. Secondly, scanpath analysis has been applied to study the visual behavior of individuals within a group for downstream applications. For instance, scanpaths were utilized to analyze the visual expertise of medical professionals and develop intelligent diagnostic systems in medical imaging (Brunyé et al. 2019).
Transformer
The Transformer (Vaswani et al. 2017) was proposed to use self-attention to capture long-distance dependencies. It has quickly achieved state-of-the-art performance in almost all natural language processing (NLP) tasks (Devlin et al. 2018; Clark et al. 2020). For example, the Transformer has been successfully employed in the GPT series of models (Radford et al. 2018), such as ChatGPT. More recently, the Transformer architecture has been further extended to the image and video domains and has displayed advanced performance in various tasks, including image recognition (Dosovitskiy et al. 2020), object detection (Carion et al. 2020), semantic segmentation (Strudel et al. 2021; Zheng et al. 2021), video recognition (Bertasius, Wang, and Torresani 2021; Arnab et al. 2021), and super-resolution (Yang et al. 2020). For example, Dosovitskiy et al. introduced the Vision Transformer (ViT), which adopted a convolution-free architecture to replace the traditional CNN with self-attention mechanisms (Dosovitskiy et al. 2020).
Their model can capture global features and relationships for image classification, which achieved strong performance on several benchmark datasets. The Transformer is naturally well-suited for modeling temporal data (Kim et al. 2022; Wang et al. 2022). In this work, we use the Transformer architecture to capture the spatio-temporal correlation of this intricate data modality.

Figure 2: Overall architecture of the proposed SpFormer. First, the SpFormer tokenizes the scanpaths by directly cropping the central patches of fixations to acquire aligned spatial fixation features. Then, we utilize masked global attention and local meta attention to explore fixation temporal correlation. We also introduce a consistency loss to guide the meta attention of different fixations to be consistent with the meta scanpath. Finally, we progressively integrate the fixation duration into the SpFormer.

Methodology
The objective of scanpath representation is to comprehensively capture the inherent characteristics of scanpaths and generate discriminative features for subsequent tasks.
In particular, a given scanpath s = \{f, d\} \in S contains a sequence of fixation positions f = \{(x_1, y_1), (x_2, y_2), \cdots, (x_T, y_T)\} \in \mathbb{R}^{2 \times T} and a sequence of fixation durations d = \{d_1, d_2, \cdots, d_T\} \in \mathbb{R}^{T}, where (x_t, y_t) and d_t denote the fixation coordinate and fixation duration of the t-th fixation, respectively. In addition, the scanpath also depends on the viewed image I \in \mathcal{I} of size H \times W \times 3, which influences the intrinsic cues of the scanpath. Therefore, the input sample can be represented by a coupled variable x = (s, I) \in X, where X = S \times \mathcal{I} is the Cartesian product of the scanpath set S and the image set \mathcal{I}.
To provide complete spatio-temporal modeling for scanpaths, we build a novel model, SpFormer, to cover this special modality. The proposed SpFormer consists of three major components (see Fig. 2): aligned spatial fixation feature acquisition, fixation temporal correlation modeling, and progressive duration aggregation (PDA). We elaborate on these modules in this section.
Aligned Spatial Fixation Feature Acquisition
We first tokenize the scanpath using the spatial fixation features. However, previous methods typically extract the entire image features and select the fixation features on the downsampled features (Jiang and Zhao 2017). More specifically, given an image I, a CNN backbone network produces the image feature map F_I as follows:

F_I = \mathcal{F}_{backbone}(I) \in \mathbb{R}^{C \times \frac{W}{\psi} \times \frac{H}{\psi}}, (1)

where C, W, H, and \psi are the channel, width, height, and downsampling ratio, respectively. The fixation feature sequence s_f according to the corresponding fixation position (x_t, y_t) in feature map F_I can be derived as:

s_f^t = F_I\big[:, \lfloor \tfrac{x_t}{\psi} \rfloor, \lfloor \tfrac{y_t}{\psi} \rfloor\big] \in \mathbb{R}^{C}, (2)

s_f = \{s_f^1, s_f^2, \cdots, s_f^T\} \in \mathbb{R}^{T \times C}, (3)

where t \in \{1, 2, \cdots, T\} indexes the time step of the scanpath, and \lfloor \cdot \rfloor denotes rounding toward negative infinity.
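The feature selection of Eqs. (2)-(3) can be sketched in a few lines, and doing so makes the misalignment concrete: any two fixations inside the same ψ × ψ cell receive identical features. This is a toy illustration with invented names, not the paper's code:

```python
import numpy as np

def select_fixation_features(F_I, fixations, psi):
    """Index a downsampled feature map F_I (C, W/psi, H/psi) at
    floor(fixation / psi), per Eqs. (2)-(3). Names are illustrative."""
    idx = (np.asarray(fixations) // psi).astype(int)  # (T, 2) cell indices
    return np.stack([F_I[:, i, j] for i, j in idx])   # (T, C) features
```

With ψ = 8, fixations such as (3, 5) and (7, 2) collapse onto the same cell and therefore the same feature vector, which is exactly the spatial misalignment that motivates the fixation-centric cropping.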
However, the above paradigm unavoidably introduces a spatial misalignment problem between the fixations f and the extracted scanpath features s_f. Specifically, each spatial position in feature map F_I corresponds to a \psi \times \psi region of the original image I. In other words, any fixation falling within that \psi \times \psi region will select the same fixation feature, which represents the region center rather than the fixation itself, thereby leading to the issue of spatial misalignment.
How do we extract aligned spatial fixation features? A typical way to extract aligned fixation features relies on feature-aware interpolation (He et al. 2017). However, the fixation represents the center of a local gaze region, as the fovea covers a local region according to the human visual mechanism. Therefore, we propose to clip fixation-centric regions of the original image, which do not have many overlapping areas, to tokenize the spatial cues of fixations, as shown in Fig. 2 (a). Specifically, we first crop the image I based on the fixations f to obtain the fixation-centric image patch sequence p = \{p^1, p^2, \cdots, p^T\}, which can be formulated as:

p^t \triangleq I[f^t + o, :] \in \mathbb{R}^{(2s+1) \times (2s+1) \times 3}, (4)

where o \in [-s, s] \times [-s, s] is a 2D integer offset within the patch window of size (2s+1) \times (2s+1), so 2s+1 is the width and height of the image patch p^t. Note that zero padding is applied when the original spatial size H \times W is exceeded. Then, we utilize the encoder network E and a convolutional block to generate the fixation tokens F_a as follows:

F_a = \mathcal{F}_{conv}(E(p)) \in \mathbb{R}^{T \times C_1}, (5)

where \mathcal{F}_{conv} denotes sequential convolution operations that produce aligned features of size C_1. Note that the image patch p^t can accommodate varying viewing distances by adjusting s to a predefined size.
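The cropping of Eq. (4) with zero padding can be sketched as follows; this is a minimal illustration, and the name `crop_fixation_patch` and the (row, col) indexing convention are ours:

```python
import numpy as np

def crop_fixation_patch(image, fy, fx, s):
    """Crop the (2s+1) x (2s+1) patch centred on fixation (fy, fx)
    from image (H, W, C), zero-padding wherever the window leaves
    the image, as in Eq. (4). Names are illustrative, not the paper's."""
    H, W, C = image.shape
    patch = np.zeros((2 * s + 1, 2 * s + 1, C), dtype=image.dtype)
    y0, y1 = max(fy - s, 0), min(fy + s + 1, H)
    x0, x1 = max(fx - s, 0), min(fx + s + 1, W)
    # shift the valid image slice into the patch's coordinate frame
    patch[y0 - (fy - s):y1 - (fy - s), x0 - (fx - s):x1 - (fx - s)] = image[y0:y1, x0:x1]
    return patch
```

Encoding the T patches with a shared encoder E and a convolution block then yields the T x C_1 fixation tokens of Eq. (5).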
Fixation Temporal Correlation
Research has shown the significant role of temporal cues in scanpaths for modeling visual attention and subsequent tasks (Sun, Chen, and Wu 2019). However, previous studies have usually ignored this temporal correlation, concentrating solely on fixation position cues. Therefore, integrating the temporal correlation of scanpaths into the model is an important topic that has not yet been thoroughly discussed.
Global Temporal Attention
Traditional Transformers employ a self-attention mechanism that calculates all pairwise token correlations to capture global relationships among the current features. However, the current fixation token is only influenced by the preceding fixation tokens and is not affected by the subsequent ones. This temporal causality is distinct from typical tasks such as image classification and detection, where tokens lack temporal causality and correlations can be modeled between arbitrary tokens. Therefore, we add a simple temporal mask M to model the temporal causality relationships, which can be formulated as:

A_{global} = \frac{(Q_{global} K_{global}^{T}) \odot M}{\sqrt{d}}, (6)

where Q_{global} \in \mathbb{R}^{T \times \frac{C}{h}} and K_{global} \in \mathbb{R}^{T \times \frac{C}{h}} denote the global query and global key, respectively, d = \frac{C}{h} is the size of the embedding feature, used as a scaling factor, and h is the number of heads. The temporal mask M is a lower-triangular matrix whose lower-triangular elements are 1 and whose remaining elements are 0.
Local Meta Attention
The generation of scanpaths is controlled by the VWM mechanism, which is responsible for temporarily storing and manipulating visual information in the cognitive system (Ungerleider, Courtney, and Haxby 1998). However, the capacity of VWM is limited, and it can typically store only a small number of local fixations (approximately 3-4 visual items) at each time (Luck and Vogel 1997), which we call a meta scanpath.
Moreover, scanpaths are often long sequences that may be influenced by the randomness of visual behavior, producing noisy fixations. Therefore, we propose to find the meta scanpath consisting of discriminative fixations, and thereby achieve a compressed representation. Specifically, we first embed the query and key vectors to obtain a learnable correlation matrix as follows:

C = (QW_Q)(KW_K)^T ∈ R^{T×T}, (7)

where QW_Q and KW_K denote the two matrices generated with learnable parameters W_Q and W_K. Then, we calculate the local meta attention based on the matrix C to distill the meta scanpath. For the i-th fixation, we first calculate the index matrix of the maximum value along the second axis of C as:

D_max = [argmax_j(C_ij)] ⊗ 1^T ∈ R^{T×T}, (8)

where D_max denotes an index matrix in which each element represents the index of the maximum value along the second axis of C, 1 denotes the vector of ones, the superscript T denotes the transpose operation, and ⊗ denotes the Kronecker product. Then, we calculate the mean value along the second axis of C as:

m = [1/T Σ_{j=1}^{T} C_ij] ∈ R^T, (9)

where m is the mean value vector. We argue that small values in the i-th row of C correspond to noise, which does not need to be stored. In addition, to reduce semantic redundancy, we also consider the distance to the maximum value, since fixations close to each other are semantically redundant, which can be summarized as:

C_wig = C ⊙ |D_max − D| ∈ R^{T×T}, (10)

where C_wig represents the weighted matrix and D is the index matrix in which each element of a row is the index along the second axis. Then, we calculate the meta mask M_meta as:

M_meta = 1(C_wig > m ⊗ 1^T) ∈ R^{T×T}, (11)
G_meta = C ⊙ M_meta ∈ R^{T×T}, (12)

where G_meta and M_meta represent the weighted meta mask and the meta mask, respectively. 1(·) denotes the indicator function, whose output is 1 when the input is true and 0 when the input is false.
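Eqs. (8)–(12) can be combined into one small routine. This is a hedged sketch of our reading of the paper's notation (in particular, we interpret the threshold in Eq. (11) as the per-row mean broadcast across columns); the function name is ours:

```python
import numpy as np

def meta_mask(C):
    """Build the weighted meta mask G_meta of Eqs. (8)-(12) from the
    learnable correlation matrix C (T x T)."""
    T = C.shape[0]
    # Eq. (8): index of the per-row maximum, broadcast to a T x T matrix.
    D_max = np.argmax(C, axis=1)[:, None] * np.ones((1, T))
    # Column-index matrix D: D[i, j] = j.
    D = np.tile(np.arange(T), (T, 1))
    # Eq. (9): per-row mean, used as a noise threshold.
    m = C.mean(axis=1, keepdims=True)
    # Eq. (10): weight entries by their distance to the per-row maximum
    # (entries at the maximum itself get weight 0, reducing redundancy).
    C_wig = C * np.abs(D_max - D)
    # Eqs. (11)-(12): keep only entries above the row-mean threshold.
    M_meta = (C_wig > m).astype(C.dtype)
    return C * M_meta
```

The resulting G_meta sparsifies C so that only discriminative, non-redundant fixation correlations survive.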
Finally, we calculate the local meta attention as:

A_local = (Q_local K_local^T) ⊙ G_meta ⊙ M / √d, (13)

where A_local is the local meta attention. After that, we fuse the global temporal attention and the local meta attention as:

A_f = F_softmax(A_global + A_local) V, (14)

where A_f denotes the fused attention, V denotes the value, and F_softmax denotes the softmax function. Then, we calculate the feature F_{m;l} through the transformer block F_block with the fused attention A_f, which can be summarized as:

F_{m;l} = F_block(F_{m;l−1}; A_{f;l}) ∈ R^{T×C}, (15)

where l ∈ {1, 2, . . . , L} is the index of the transformer block. Moreover, inspired by (Lin et al. 2022), we propose a loss to guide the local meta attention A_local of each fixation to be consistent with the meta scanpath s_m, which is the mean along the first axis of M_meta. First, we calculate the attention-weighted features e_t using only the local meta attention, which can be summarized as:

e_t = [F_m]_{([1],2)} ⋆ [[C ⊙ M_meta ⊙ M]_t]_{([1])} ∈ R^T, (16)

where the subscript t of [C ⊙ M_meta ⊙ M]_t denotes the row index, and [·] in the subscript of [F_m]_{([1],2)} selects the dimensions of the tensor contraction operator ⋆ (Comon 2014). To maintain consistency, we adopt a distance metric to regularize the local meta attention as:

L_meta = 1/T Σ_{t=1}^{T} −D(e_t, 1/T Σ_{t=1}^{T} e_t), (17)

where D denotes a distance metric; we adopt cosine similarity for the specific implementation.

Progressive Duration Aggregation
Previous scanpath-based methods usually discard the fixation duration and only consider the fixation position (Jiang and Zhao 2017; Arru, Mazumdar, and Battisti 2019; Wu et al. 2019; Tao and Shyu 2019). However, the fixation duration is an important attribute that reflects perceived visual behavior and attention distribution and is commonly used to analyze visual behavior (Jones and Klin 2013). Therefore, we propose to progressively integrate the duration information into the fixation features F_{m;l}.
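The attention fusion of Eq. (14) above amounts to adding the two score matrices before a row-wise softmax and aggregating the values. A minimal sketch (function names are ours, not the paper's):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fused_attention(A_global, A_local, V):
    """Eq. (14): fuse global temporal and local meta attention scores,
    normalise row-wise, and aggregate the value tokens V (T x C)."""
    A = softmax(A_global + A_local, axis=-1)
    return A @ V                # T x C fused fixation features
```

The fused output is then passed through the transformer block of Eq. (15).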
We first leverage the fixation durations d to obtain the initial weight w_{d;0} as follows:

ω_j = e^{d_j} / Σ_{j=1}^{T} e^{d_j}, d_j ∈ d, (18)
w_{d;0} = {ω_1, ω_2, · · · , ω_T} ∈ R^T, (19)

where w_{d;0} denotes the initial aggregation weight. However, the temporal location of fixations can become ambiguous as the number of blocks increases, similar to CNNs, where shallow layers emphasize temporal location while high-level features contain more semantic information with ambiguous temporal location. This ambiguity makes it challenging to relate fixation durations to individual fixations. Therefore, we propose a progressive decay mechanism to more reasonably combine the duration into the fixation features, which can be summarized as:

w_{d;l} = α1 + β w_{d;l−1}, (20)

where 1 is the vector of ones and α = σ and β = 1 − σ are progressive decay coefficients. After that, the decayed duration weights are integrated into the extracted features F_{m;l} to generate the fused features F_{d;l} of the l-th layer:

W_{d;l} = F_repeat(w_{d;l}) ∈ R^{T×C}, (21)
F_{d;l} = W_{d;l} ⊙ F_{m;l} ∈ R^{T×C}, (22)

where F_repeat represents the repeat operation that repeats w_{d;l} along the channel dimension to the size R^{T×C}.

Training and Inference
Following ViT (Dosovitskiy et al. 2020), we adopt an extra token to aggregate the information of all fixations and feed it to an MLP to generate the prediction ŷ. The cross-entropy (CE) loss is leveraged to evaluate the difference between the predicted results ŷ_{i,j} and the ground truth y_i as:

L_scan = 1/|U_train × I| Σ_{i=1}^{|U_train|} Σ_{j=1}^{|I|} CE(ŷ_{i,j}, y_i), (23)

where U_train denotes the subject set used for training. Finally, we combine the CE loss and the consistency loss with the coefficient λ as:

L = L_scan + λ L_meta. (24)

Experiment
In this section, we conduct experiments on three different tasks, namely ASD recognition, toddler age prediction, and visual perceptual task prediction, to verify the generalization and effectiveness of the SpFormer.
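The progressive duration aggregation of Eqs. (18)–(22) can be sketched as follows. This is our illustrative reading (with the duration softmax taken over the T fixations of one scanpath); function names are ours:

```python
import numpy as np

def duration_weights(d, L, sigma=0.5):
    """Eqs. (18)-(20): softmax weights over fixation durations d (length T),
    then L steps of progressive decay towards the constant term sigma * 1."""
    e = np.exp(d - d.max())            # stable softmax, Eqs. (18)-(19)
    w = e / e.sum()                    # initial weight w_{d;0}
    weights = []
    for _ in range(L):
        w = sigma * np.ones_like(w) + (1.0 - sigma) * w   # Eq. (20)
        weights.append(w.copy())
    return weights                     # per-layer weights w_{d;1..L}

def aggregate(F, w):
    """Eqs. (21)-(22): repeat the weight over channels and modulate
    the fixation features F (T x C) element-wise."""
    return F * w[:, None]
```

With each layer, the spread of the weights shrinks by a factor of (1 − σ), so deeper, more semantic layers are modulated less strongly by raw durations.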
Autism Spectrum Disorder Recognition
Datasets ASD recognition is a critical application of eye-tracking, as it enables early detection in infants and offers an objective and efficient assessment. We apply two datasets, the Saliency4ASD (Duan et al. 2019) dataset and our collected dataset, to evaluate the ASD recognition performance of the SpFormer. The Saliency4ASD dataset was collected from 14 children with ASD and 14 typically developing (TD) children. All subjects viewed 300 images, and each image was shown for 3 seconds. The 300 images were selected from the MIT1003 dataset (Judd et al. 2009). For our dataset, we recruited 58 subjects between 2 and 8 years of age from the hospital to collect eye-tracking data. The participants included 30 children with ASD and 28 TD children.

Baselines For a comprehensive comparison, we adopt the saliency-based models as baselines. We also follow (Rahman et al. 2021) to report the performance of HoG, Gist, and VGG16, respectively.

Evaluation metrics Following the previous work (Chen and Zhao 2019), we report scanpath-wise results that evaluate the classification performance based on a single scanpath. We also provide subject-wise results, since the ultimate objective of ASD recognition is a subject-specific evaluation. Consistent with prior work (Chen and Zhao 2019), we compute the subject-wise probability p(c) by equally averaging the scanpath-wise results across all images as p(c) = 1/|I| Σ_{j=1}^{|I|} p_j(c), where p_j(c) denotes the scanpath-wise probability of class c on image j.

Main Results Tables 1 and 2 show the experimental results of the different approaches under the evaluation metrics. Part of the results follow (Wei et al. 2021) and (Rahman et al. 2021). It can be found that our SpFormer outperforms the advanced models by a considerable margin and sets a new state-of-the-art. For the subject-wise results, the SpFormer achieves 100% performance for AUC, sensitivity, specificity, BA, and accuracy at a threshold of 0.5.
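The subject-wise aggregation p(c) described above is a simple equal-weight average over per-image scanpath predictions; a minimal sketch (function names and the 0.5 decision threshold follow the paper's evaluation protocol, the code itself is ours):

```python
def subject_probability(scanpath_probs):
    """Subject-wise probability p(c): the equal-weight average of the
    scanpath-wise probabilities p_j(c) over the |I| images a subject viewed."""
    return sum(scanpath_probs) / len(scanpath_probs)

def classify_subject(scanpath_probs, threshold=0.5):
    """Label a subject positive (e.g., ASD) when p(c) exceeds the threshold."""
    return subject_probability(scanpath_probs) > threshold
```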
We achieve a 0.0714 AUC and a 10.67% accuracy (0.5 threshold) improvement over the previous best results on Saliency4ASD. In addition, we observe that APM and CETS, which capture temporal cues, have a noticeable performance advantage over the models without temporal modeling. Our method also yields more balanced results between sensitivity and specificity than the other methods. As for our dataset, the SpFormer achieves the best performance under most metrics for the scanpath-wise results and significantly surpasses

Result Method AUC ↑ Sen. ↑ Spe. ↑ BA ↑ Accuracy ↑ (0.4 / 0.5 / 0.6 / avg.)
Scanpath-Wise:
DoF (Jiang and Zhao 2017) 0.6070 0.1707 0.9199 0.5453 0.5728 0.5492 0.5136 0.5452
APM (Chen and Zhao 2019) 0.6099 0.5568 0.6113 0.5841 0.5788 0.5849 0.5778 0.5805
RM3ASD (Arru et al. 2019) 0.5930 0.6843 0.5056 0.5950 0.5950
SSM (Startsev and Dorr 2019) 0.5984 0.7171 0.4843 0.6007 0.6439
IBM (Wu et al. 2019) 0.5513 0.6350 0.4711 0.5531 0.6130
SySM* (Wu et al. 2019) 0.5415 0.7407 0.3506 0.5457 0.5746
SySM† (Wu et al. 2019) 0.5388 0.8071 0.2824 0.5448 0.5440
SP-ASDNET* (Tao and Shyu 2019) 0.5566 0.8771 0.2507 0.5639 0.5639
SP-ASDNET† (Tao and Shyu 2019) 0.5790 0.5921 0.5664 0.5793 0.5793
SP-ASDNET‡ (Tao and Shyu 2019) 0.5739 0.5936 0.5558 0.5747 0.5747
CETS* (Wei et al. 2021) 0.6148 0.6857 0.5465 0.6161 0.6433
CETS† (Wei et al. 2021) 0.6065 0.6964 0.5205 0.6085 0.6434
SpFormer 0.6577 0.6046 0.6424 0.6235 0.6266 0.6240 0.6225 0.6447
Subject-Wise:
HoG (Dalal and Triggs 2005) 0.5700
Gist (Li and Itti 2009) 0.6800
VGG16 (Simonyan et al.
2014) 0.6370
DoF (Jiang and Zhao 2017) 0.9011 0.0000 1.0000 0.5000 0.8500 0.4800 0.4800 0.6033
APM* (Chen and Zhao 2019) 0.9200 0.8600 0.9300 0.8950 0.8900
APM† (Chen and Zhao 2019) 0.9286 0.8571 0.9231 0.8901 0.6267 0.8933 0.4800 0.6667
SpFormer 1.0000 1.0000 1.0000 1.0000 0.8267 1.0000 0.7433 0.8567

Table 1: Performance comparison on the Saliency4ASD dataset, measured by AUC, sensitivity (Sen., classification threshold 0.5), specificity (Spe., classification threshold 0.5), balanced accuracy (BA), and accuracy under three classification thresholds, i.e., 0.4, 0.5, and 0.6, respectively. "avg." denotes the average accuracy over the three thresholds. Results in bold denote the best performance, while underlined ones indicate the second best. The arrow represents the direction of better performance for each metric. "*", "†", and "‡" denote different implementations.

Result Method AUC ↑ Sen. ↑ Spe. ↑ BA ↑ Accuracy ↑ (0.4 / 0.5 / 0.6 / avg.)
Scanpath-Wise:
DoF (Jiang and Zhao 2017) 0.6412 0.5323 0.6568 0.5946 0.5128 0.5984 0.5974 0.5695
APM (Chen and Zhao 2019) 0.7086 0.5868 0.7257 0.6563 0.6465 0.6614 0.6634 0.6571
SpFormer 0.7063 0.6356 0.6961 0.6659 0.6571 0.6675 0.6669 0.6638
Subject-Wise:
DoF (Jiang and Zhao 2017) 0.8600 0.7650 0.8140 0.7895 0.5060 0.7150 0.5630 0.5647
APM (Chen and Zhao 2019) 0.9700 0.7140 0.9667 0.8404 0.8110 0.8690 0.6810 0.7870
SpFormer 0.9800 0.8929 0.9667 0.9298 0.8426 0.9314 0.7944 0.8561

Table 2: Comparison results on our collected dataset for the ASD recognition task.

the best competitors under all metrics for the subject-wise results. This indicates that the SpFormer produces more consistent results across images. Compared to the previous best results, the SpFormer improves sensitivity from 76.50% to 89.29% and accuracy (0.5 threshold) from 86.90% to 93.14%.

Ablation Study
We carry out a series of ablation studies on Saliency4ASD and report the subject-wise results.

Effectiveness of ASF.
We first consider the aligned fixation feature. We replace it with the previous typical misaligned feature (Chen and Zhao 2019). As seen in the results of (a) and (b) in Table 3, our method exhibits a sizeable improvement when using the aligned spatial fixation feature, particularly a 15.67% enhancement in accuracy at the 0.5 threshold.

Effectiveness of temporal correlation. Then, we replace the proposed temporal modeling with the vanilla Transformer. Comparing (b) and (c) in Table 3, we can conclude that the proposed temporal correlation achieves a significant performance gain, improving AUC from 0.8901 to 0.9835, which further demonstrates the significance of the temporal mask and meta scanpath in causal modeling, as well as in reducing semantic redundancy and visual noise.

Effectiveness of PDA. Furthermore, we plug the PDA in after each Transformer block to progressively fuse the duration and enhance the scanpath features, achieving a 3.33% accuracy improvement over the model without duration information, based on the comparison between (c) and (d) in Table 3.

Effectiveness of consistency loss Lmeta. We investigate the effect of the consistency loss Lmeta. A further performance gain is shown in the results of (d) and (e) in Table 3, which indicates that the consistency loss Lmeta plays an essential role in learning the local meta attention.

Effect of Hyper-parameters. We set α = σ = 0.5, β = 1 − σ = 0.5 in Eq. (20), and λ = 0.1 in Eq. (24). Finally, we conduct experiments to explore the impact of the hyper-parameters on the results (see Tab. 4). For instance, a high value of σ leads to a rapid decay of the duration weight, which means that duration information cannot be effectively integrated into the model, resulting in performance degradation.
id ASF FTC PDA Lmeta AUC Accuracy@0.5
(a) ✗ ✗ ✗ ✗ 0.8681 0.7433
(b) ✓ ✗ ✗ ✗ 0.8901 0.9000
(c) ✓ ✓ ✗ ✗ 0.9835 0.9333
(d) ✓ ✓ ✓ ✗ 0.9835 0.9667
(e) ✓ ✓ ✓ ✓ 1.0000 1.0000

Table 3: Ablation studies on the main modules of different design choices. "ASF" denotes the aligned spatial fixation feature. "FTC" is the fixation temporal correlation. "PDA" represents the progressive duration aggregation.

σ 0.1 0.3 0.5 0.7 0.9
AUC 0.9615 0.9890 1.0000 0.9945 0.9835
Accuracy 0.9333 0.9333 1.0000 0.9667 0.9267
λ 0.01 0.05 0.1 0.2 0.5
AUC 0.9945 0.9890 1.0000 0.9560 0.9505
Accuracy 0.9333 0.9667 1.0000 0.9333 0.8933

Table 4: Influence of hyper-parameters on Saliency4ASD.

Method AUC Accuracy@0.5
HoG (2005) 0.4700
Gist (2009) 0.5700
VGG16 (2014) 0.8290
DoF (2017) 0.6692 0.5500
APM (2019) 0.7525 0.7000
CAET (2019) 0.8400 0.8300
CETSMap* (2021) 0.7580 0.7560
CETSMap† (2021) 0.8300 0.8300
SpFormer 0.8586 0.8750

Table 5: Comparison results on the TAP benchmark.

Toddler Age Prediction
Datasets Identifying different age groups is another application of scanpaths, because eye movement patterns change with age (Munoz et al. 1998; Davidson et al. 2006; Dalrymple et al. 2019). To evaluate the model performance in age prediction, we utilize a toddler age prediction (TAP) dataset¹ obtained from (Dalrymple et al. 2019), which consists of thirty-seven 18-month-old toddlers and thirty-six 30-month-old toddlers. The stimuli comprise one hundred images from the Object and Semantic Images and Eye-tracking (OSIE) database (Xu et al. 2014), which contains 700 image stimuli with abundant attributes.

Experimental Settings We also comply with previous experimental protocols (Rahman et al. 2021). The experimental settings and training details are the same as those in ASD recognition unless specified otherwise.

Main Results Table 5 presents the comparison results on the TAP benchmark.
It can be found that our SpFormer achieves the best results under all metrics and outperforms the previous methods by a considerable margin. The SpFormer achieves an average accuracy improvement of 11.67%, a 4.5% accuracy improvement at the 0.5 threshold, and a 1.586% improvement in AUC compared with the previous best results.

¹https://osf.io/ugvj4

Method AUC Accuracy@0.5
HoG (2005) 0.7000
Gist (2009) 0.8100
VGG16 (2014) 0.7490
DoF (2017) 0.6640 0.4639
APM (2019) 0.9894 0.8214
PTEM (2016) 0.8438
CETSMap* (2021) 0.8635
CETSMap† (2021) 0.8420
SpFormer 0.9974 0.9750

Table 6: Comparison results on the VPT dataset.

Visual Perceptual Task Prediction
Datasets Different visual tasks can elicit varying visual behaviors, even when presented with the same visual scene. Therefore, scanpaths can also be applied to identify the visual tasks of the subjects. Previous methods mainly focus on visual behavior without any specific guidance, known as free-viewing. Koehler et al. (2014) proposed a visual perceptual task (VPT) dataset², which contains 800 natural images and four visual tasks: free-viewing, explicit perceptual judgments, saliency search, and cued object search.

Experimental Settings Following the experimental protocols of (Rahman et al. 2021) and (Boisvert and Bruce 2016), we divide the dataset into a series of binary classifications, each distinguishing between two visual tasks. Without loss of generality, we select the free-viewing and cued object search tasks to report results. The experimental settings and training details are the same as those in ASD recognition.

Main Results Table 6 presents the results on the VPT dataset. It can be observed that the SpFormer outperforms the other advanced methods. Specifically, our model achieves an AUC of 0.9974 and a remarkable improvement of 11.15% in accuracy under the 0.5 threshold. These findings demonstrate the superiority of our proposed SpFormer.
Conclusion
This paper proposes a new model, SpFormer, to model the spatio-temporal characteristics of scanpaths. For the modeling of spatial information, we extract spatially aligned fixation features to represent scanpaths. For temporal cues, we introduce local meta attention to model the VWM mechanism, and we progressively aggregate the fixation duration to enhance the fixation features. Experimental results show that the SpFormer is effective and achieves state-of-the-art performance on multiple scanpath-based tasks.

Acknowledgements
The authors gratefully acknowledge the support of the National Natural Science Foundation of China under Grants 62172334, 62027813, 62036005, 62293543, 62202015, and 62322605.

²https://data.mendeley.com/datasets/8rj98pp6km/1

References
Arnab, A.; Dehghani, M.; Heigold, G.; Sun, C.; Lučić, M.; and Schmid, C. 2021. ViViT: A video vision transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6836–6846.
Arru, G.; Mazumdar, P.; and Battisti, F. 2019. Exploiting visual behaviour for autism spectrum disorder identification. In IEEE International Conference on Multimedia & Expo Workshops, 637–640. IEEE.
Bertasius, G.; Wang, H.; and Torresani, L. 2021. Is space-time attention all you need for video understanding? In International Conference on Machine Learning, 4.
Boisvert, J. F.; and Bruce, N. D. 2016. Predicting task from eye movements: On the importance of spatial distribution, dynamics, and image features. Neurocomputing, 207: 653–668.
Brunyé, T. T.; Drew, T.; Weaver, D. L.; and Elmore, J. G. 2019. A review of eye tracking for understanding and improving diagnostic interpretation. Cognitive Research: Principles and Implications, 4: 1–16.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers.
In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, 213–229. Springer.
Chen, S.; and Zhao, Q. 2019. Attention-based autism spectrum disorder screening with privileged modality. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1181–1190.
Clark, K.; Luong, M.-T.; Le, Q. V.; and Manning, C. D. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.
Comon, P. 2014. Tensors: a brief introduction. IEEE Signal Processing Magazine, 31(3): 44–53.
Dalal, N.; and Triggs, B. 2005. Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, 886–893. IEEE.
Dalrymple, K. A.; Jiang, M.; Zhao, Q.; and Elison, J. T. 2019. Machine learning accurately classifies age of toddlers based on eye tracking. Scientific Reports, 9(1): 1–10.
Davidson, M. C.; Amso, D.; Anderson, L. C.; and Diamond, A. 2006. Development of cognitive control and executive functions from 4 to 13 years: Evidence from manipulations of memory, inhibition, and task switching. Neuropsychologia, 44(11): 2037–2078.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
Duan, H.; Zhai, G.; Min, X.; Che, Z.; Fang, Y.; Yang, X.; Gutiérrez, J.; and Callet, P. L. 2019. A dataset of eye movements for the children with autism spectrum disorder. In Proceedings of the 10th ACM Multimedia Systems Conference, 255–260.
Epelboim, J.; and Suppes, P. 2001.
A model of eye movements and visual working memory during problem solving in geometry. Vision Research, 41(12): 1561–1574.
Fan, D.-P.; Li, T.; Lin, Z.; Ji, G.-P.; Zhang, D.; Cheng, M.-M.; Fu, H.; and Shen, J. 2021. Re-thinking co-salient object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(8): 4339–4354.
Han, J.; Zhang, D.; Cheng, G.; Liu, N.; and Xu, D. 2018. Advanced deep-learning techniques for salient and category-specific object detection: a survey. IEEE Signal Processing Magazine, 35(1): 84–100.
He, K.; Gkioxari, G.; Dollár, P.; and Girshick, R. 2017. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, 2961–2969.
Huang, X.; Shen, C.; Boix, X.; and Zhao, Q. 2015. Salicon: Reducing the semantic gap in saliency prediction by adapting deep neural networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 262–270.
Jiang, M.; and Zhao, Q. 2017. Learning Visual Attention to Identify People With Autism Spectrum Disorder. In Proc. IEEE Int. Conf. Comput. Vis., 3267–3276.
Jones, W.; and Klin, A. 2013. Attention to eyes is present but in decline in 2–6-month-old infants later diagnosed with autism. Nature, 504(7480): 427–431.
Judd, T.; Ehinger, K.; Durand, F.; and Torralba, A. 2009. Learning to predict where humans look. In 2009 IEEE 12th International Conference on Computer Vision, 2106–2113.
Kim, B.; Chang, H. J.; Kim, J.; and Choi, J. Y. 2022. Global-local motion transformer for unsupervised skeleton-based action learning. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IV, 209–225. Springer.
Klaib, A. F.; Alsrehin, N. O.; Melhem, W. Y.; Bashtawi, H. O.; and Magableh, A. A. 2021. Eye tracking algorithms, techniques, tools, and applications with an emphasis on machine learning and Internet of Things technologies. Expert Systems with Applications, 166: 114037.
Koehler, K.; Guo, F.; Zhang, S.; and Eckstein, M. P. 2014.
What do saliency models predict? Journal of Vision, 14(3): 14–14.
Kok, E. M.; and Jarodzka, H. 2017. Before your very eyes: The value and limitations of eye tracking in medical education. Medical Education, 51(1): 114–122.
Lang, C.; Cheng, G.; Tu, B.; and Han, J. 2022. Learning what not to segment: A new perspective on few-shot segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8057–8067.
Li, Z.; and Itti, L. 2009. Gist based top-down templates for gaze prediction. Journal of Vision, 9(8): 202–202.
Lin, H.; Ma, Z.; Ji, R.; Wang, Y.; and Hong, X. 2022. Boosting crowd counting via multifaceted attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19628–19637.
Liu, N.; Han, J.; Zhang, D.; Wen, S.; and Liu, T. 2015. Predicting eye fixations using convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 362–370.
Liu, W.; Li, M.; and Yi, L. 2016. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework. Autism Research, 9(8): 888–898.
Luck, S. J.; and Vogel, E. K. 1997. The capacity of visual working memory for features and conjunctions. Nature, 390(6657): 279–281.
Marsh, P. J.; and Williams, L. M. 2006. ADHD and schizophrenia phenomenology: visual scanpaths to emotional faces as a potential psychophysiological marker? Neuroscience & Biobehavioral Reviews, 30(5): 651–665.
Mastergeorge, A. M.; Kahathuduwa, C.; and Blume, J. 2021. Eye-tracking in infants and young children at risk for autism spectrum disorder: A systematic review of visual stimuli in experimental paradigms. Journal of Autism and Developmental Disorders, 51: 2578–2599.
Mohammadhasani, N.; Caprì, T.; Nucita, A.; Iannizzotto, G.; and Fabio, R. A. 2020. Atypical visual scan path affects remembering in ADHD.
Journal of the International Neuropsychological Society, 26(6): 557–566.
Munoz, D.; Broughton, J.; Goldring, J.; and Armstrong, I. 1998. Age-related performance of human subjects on saccadic eye movement tasks. Experimental Brain Research, 121: 391–400.
Piumsomboon, T.; Lee, G.; Lindeman, R. W.; and Billinghurst, M. 2017. Exploring natural eye-gaze-based interaction for immersive virtual reality. In IEEE Symposium on 3D User Interfaces, 36–39. IEEE.
Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.; et al. 2018. Improving language understanding by generative pre-training.
Rahman, S.; Rahman, S.; Shahid, O.; Abdullah, M. T.; and Sourov, J. A. 2021. Classifying eye-tracking data using saliency maps. In 2020 25th International Conference on Pattern Recognition (ICPR), 9288–9295. IEEE.
Simonyan, K.; and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Startsev, M.; and Dorr, M. 2019. Classifying autism spectrum disorder based on scanpaths and saliency. In IEEE International Conference on Multimedia & Expo Workshops, 633–636. IEEE.
Strudel, R.; Garcia, R.; Laptev, I.; and Schmid, C. 2021. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7262–7272.
Sun, W.; Chen, Z.; and Wu, F. 2019. Visual scanpath prediction using IOR-ROI recurrent mixture density network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(6): 2101–2118.
Tao, Y.; and Shyu, M.-L. 2019. SP-ASDNet: CNN-LSTM based ASD classification model using observer scanpaths. In IEEE International Conference on Multimedia & Expo Workshops, 641–646. IEEE.
Ungerleider, L. G.; Courtney, S. M.; and Haxby, J. V. 1998. A neural system for human visual working memory. Proceedings of the National Academy of Sciences, 95(3): 883–890.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017.
Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wang, J.; Bertasius, G.; Tran, D.; and Torresani, L. 2022. Long-short temporal contrastive learning of video transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14010–14020.
Wang, W.; Shen, J.; Xie, J.; Cheng, M.-M.; Ling, H.; and Borji, A. 2019. Revisiting video saliency prediction in the deep learning era. IEEE Trans. Pattern Anal. Mach. Intell., 43(1): 220–237.
Wei, W.; Liu, Z.; Huang, L.; Wang, Z.; Chen, W.; Zhang, T.; Wang, J.; and Xu, L. 2021. Identify autism spectrum disorder via dynamic filter and deep spatiotemporal feature extraction. Signal Processing: Image Communication, 94: 116195.
Wu, C.; Liaqat, S.; Cheung, S.-c.; Chuah, C.-N.; and Ozonoff, S. 2019. Predicting autism diagnosis using image with fixations and synthetic saccade patterns. In IEEE International Conference on Multimedia & Expo Workshops, 647–650. IEEE.
Xia, C.; Han, J.; Qi, F.; and Shi, G. 2019. Predicting human saccadic scanpaths based on iterative representation learning. IEEE Trans. Image Process., 28(7): 3502–3515.
Xia, C.; Zhang, D.; Li, K.; Li, H.; Chen, J.; Min, W.; and Han, J. 2022. Dynamic Viewing Pattern Analysis: Towards Large-Scale Screening of Children With ASD in Remote Areas. IEEE Transactions on Biomedical Engineering.
Xu, J.; Jiang, M.; Wang, S.; Kankanhalli, M. S.; and Zhao, Q. 2014. Predicting human gaze beyond pixels. Journal of Vision, 14(1): 1–20.
Yang, F.; Yang, H.; Fu, J.; Lu, H.; and Guo, B. 2020. Learning texture transformer network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5791–5800.
Zhang, D.; Wang, B.; Wang, G.; Zhang, Q.; Zhang, J.; Han, J.; and You, Z. 2022. Onfocus detection: Identifying individual-camera eye contact from unconstrained images. Science China Information Sciences, 65(6): 160101.
Zheng, S.; Lu, J.; Zhao, H.; Zhu, X.; Luo, Z.; Wang, Y.; Fu, Y.; Feng, J.; Xiang, T.; Torr, P. H.; et al. 2021. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6881–6890.
ExpCLIP: Bridging Text and Facial Expressions via Semantic Alignment Yicheng Zhong*, Huawei Wei*, Peiji Yang*, Zhisheng Wang 1 Tencent Technology (Shenzhen) Co.Ltd {ajaxzhong, huaweiwei, peijiyang, plorywang}@tencent.com Abstract The objective of stylized speech-driven facial animation is to create animations that encapsulate specific emotional expressions. Existing methods often depend on pre-established emotional labels or facial expression templates, which may limit the necessary flexibility for accurately conveying user intent. In this research, we introduce a technique that enables the control of arbitrary styles by leveraging natural language as emotion prompts. This technique presents benefits in terms of both flexibility and user-friendliness. To realize this objective, we initially construct a Text-Expression Alignment Dataset (TEAD), wherein each facial expression is paired with several prompt-like descriptions. We propose an innovative automatic annotation method, supported by ChatGPT, to expedite the dataset construction, thereby eliminating the substantial expense of manual annotation. Following this, we utilize TEAD to train a CLIP-based model, termed ExpCLIP, which encodes text and facial expressions into semantically aligned style embeddings. The embeddings are subsequently integrated into the facial animation generator to yield expressive and controllable facial animations. Given the limited diversity of facial emotions in existing speech-driven facial animation training data, we further introduce an effective Expression Prompt Augmentation (EPA) mechanism to enable the animation generator to support unprecedented richness in style control. Comprehensive experiments illustrate that our method accomplishes expressive facial animation generation and offers enhanced flexibility in effectively conveying the desired style. 
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction
In recent years, speech-driven facial animation has gained importance due to its widespread applications in diverse fields such as gaming, virtual reality, and film production (Zhen et al. 2023). Currently, most research focuses on improving the synchronization between lip movements and speech (Cudeiro et al. 2019; Fan et al. 2021; Chen et al. 2022; Xing et al. 2023). This emphasis only allows for conveying speech content, not style, resulting in a lack of emotional expressions in generated facial animations. A few works (Karras et al. 2017; Daněček et al. 2023) have attempted to integrate emotions into facial animation by providing the model with specific emotional labels or using reference facial expressions as style guidance. However, they either have limited flexibility in expressing diverse emotions or necessitate searching for a reference image or video, which may be impractical for users. In this study, we propose to adopt natural language as the style prompt for emotional facial animation generation, which offers both flexibility and user-friendliness. A straightforward approach is to collect animation data with paired text prompts and train a text-guided animation generator. However, the scarcity of such data and the high cost of annotation make this approach infeasible. To address this challenge, we propose a novel CLIP-based model called ExpCLIP in this paper. ExpCLIP is designed to learn an embedding space where the representations of text and facial expressions are semantically aligned. By leveraging the capabilities of ExpCLIP, we can train the animation generator by specifying representative facial expressions as prompts, which can be easily extracted from animations, and utilize text prompts for inference purposes. To train ExpCLIP, a large-scale text-expression dataset is required.
However, currently available datasets (Wang et al. 2020; Kollias 2022) only have limited tag-level emotion labels. To address this issue, we propose a novel automated annotation method to construct a Text-Expression Aligned Dataset (TEAD). Specifically, we leverage the visual understanding capability of ChatGPT (OpenAI 2023) to accomplish the annotation task. We find that LLMs are capable of describing the facial expressions corresponding to an emotional text. By harnessing the power of ChatGPT, we collect a rich emotional corpus and use meticulously engineered prompts to ask ChatGPT to output the corresponding description of facial expressions. Here, we use activated facial Action Units (AUs) to describe facial expressions.

Building upon ExpCLIP, which is trained on TEAD, we propose an emotion-controllable facial animation generator. During the training phase of the generator, we employ a self-attention module to extract the expression prompt from an animation clip. Subsequently, the expression prompt is fed into ExpCLIP to obtain the emotion embedding, which is then fused with the input speech to generate the target facial animation. During the inference stage, we can use text prompts to achieve the desired style control, which is attributed to the alignment of facial expressions and text embeddings achieved by ExpCLIP. We show several examples in Figure 1. Notably, our framework can be readily extended to support facial images as input prompts.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7614

Figure 1: Illustration of our text-guided emotional speech-driven facial animation generation. Our approach accommodates diverse textual inputs for style control, encompassing AUs, emotion tags, and other forms of natural language.
This can be achieved by augmenting ExpCLIP with an image encoder and training it on paired facial images and facial expressions. The paired data can be easily obtained with monocular 3D face reconstruction methods (Guo et al. 2022; Lei et al. 2023; Chai et al. 2023). Moreover, we propose an effective Expression Prompt Augmentation (EPA) mechanism to enable the animation generator to handle unseen emotions. This mechanism involves incorporating random perturbations to the expression prompt and devising a lip motion constraint to ensure the generated lip motions remain consistent with the original motions. The underlying assumption is that when individuals articulate the same sentence with different emotions, their lip motions tend to be consistent.

In summary, the main contributions of our research are:
• We leverage the visual understanding capability of ChatGPT to propose an automatic annotation method for inferring facial expressions from emotional text. This enables us to construct a large-scale text-expression aligned dataset.
• We propose ExpCLIP, which is capable of aligning the semantic representations of text and facial expressions, empowering the inference of emotional styles from natural language descriptions.
• We present the first attempt to use natural language text as prompts to achieve flexible and controllable emotional speech-driven facial animation generation.

Related Work

Speech-Driven Face Animation
This field has witnessed a surge of substantial efforts in recent years. VOCA (Cudeiro et al. 2019) interprets the mapping from speech to animation as a regression problem. FaceFormer (Fan et al. 2021) leverages transformers to capture the long-term dependencies inherent in speech. MeshTalk (Richard et al. 2021) prioritizes addressing the model's scalability and realism by segregating audio-correlated and audio-uncorrelated information.
However, all these methods primarily focus on enhancing lip synchronization, falling short in conveying emotional expressions.

Emotion Guided Generation
The incorporation of emotional expressions has been considered in recent works. (Karras et al. 2017) uses a trainable variable to represent the emotion state, and (Daněček et al. 2023) uses predefined emotion labels to guide the animation generation. Similar approaches have also been observed in several works (Sadoughi and Busso 2019; Ji et al. 2021; Wu et al. 2021; Liang et al. 2022; Sinha et al. 2022) related to talking face generation. (Ji et al. 2022) and (Ma et al. 2023) propose to extract emotional information from reference videos. (Wang et al. 2023) utilizes a static facial image as an emotional condition for styled talking face generation. These methods either have limited flexibility in expressing diverse emotions, or necessitate searching for a reference image or video, which may be impractical for users.

CLIP-Based Content Synthesis
CLIP (Radford et al. 2021) has demonstrated its efficacy in text-guided image editing (Rombach et al. 2022; Ramesh et al. 2023; Schaldenbrand, Liu, and Oh 2021). It can also be extended to the integration of text with other modalities, such as employing CLIP for 3D motion generation (Tevet et al. 2022). However, to the best of our knowledge, no prior work has utilized CLIP for 3D facial animation generation. In this paper, we propose, for the first time, the use of CLIP for text-guided speech-driven facial animation generation.

Method

In this section, we first present the construction of TEAD, followed by an overview of the training process for ExpCLIP. Finally, we introduce the proposed text-guided speech-driven facial animation method.
Figure 2: Overview of our framework. We first train ExpCLIP to establish semantic alignment between text, facial expressions, and facial images. Then, we employ it for the generation of emotional speech-driven facial animation. The animation generator supports both images and natural language prompts for emotion control.

Text-Expression Aligned Dataset

In order to train a model capable of aligning the semantic representations of natural language and facial expressions, a text-expression dataset is required. However, currently available datasets only provide limited emotion labels (Cao et al. 2014; Wang et al. 2020), which are insufficient for achieving the desired alignment at a fine-grained level. To address this limitation, we propose a Text-Expression Aligned Dataset (TEAD), which is automatically constructed with the assistance of ChatGPT (OpenAI 2023).

Motivation
To generate paired emotional text and corresponding facial expression data using ChatGPT, it is imperative to represent facial expressions as textual descriptions.
Fortunately, the Facial Action Coding System (FACS) (Ekman and Friesen 1978) offers a systematic approach to describe human facial movements by decomposing facial expressions into independent AUs, with each AU possessing an exhaustive textual description. Therefore, our objective is to let ChatGPT generate corresponding activated AUs from the emotional text. This necessitates that ChatGPT demonstrate cross-modal understanding capability, that is, the ability to "imagine" corresponding facial expressions that depict the emotion in the text, and subsequently translate the expressions into activated AUs. Through our testing, we find that ChatGPT can successfully accomplish this task with a carefully designed prompt through in-context learning.

Automatic Data Generation
We utilize the abundant corpus from text emotion classification tasks (Mohammad and Bravo-Marquez 2017), which encompasses rich real-world human emotions. The emotional transcripts are fed into ChatGPT for text emotion classification and detection of activated AUs. Specifically, given an emotional transcript t, ChatGPT predicts its emotional tags e, where e contains several emotional labels, usually 3 to 5, and the activated AUs as a one-hot vector u = {u_i, i = 1, ..., N_u}, where N_u is 36 in our work. Note the combination of these AUs enables a wider range of emotions compared to existing datasets that have only a few emotion labels. We further prompt ChatGPT to describe situations that may evoke the inferred emotions, generating sentence-level labels s. An instance is shown in Figure 3.

Figure 3: An example of data generation of TEAD.
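The per-sample annotation output described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the AU name list is a hypothetical excerpt (the paper's full 36-AU inventory and its exact ChatGPT prompt are not published here), and `encode_aus` is an assumed helper.

```python
# Hypothetical excerpt of an AU inventory; the paper uses N_u = 36 AUs.
AU_NAMES = [
    "inner brow raiser", "outer brow raiser", "brow lowerer",
    "upper lid raiser", "cheek raiser", "nose wrinkler",
    "lip corner puller", "lip corner depressor", "lip stretcher",
]

def encode_aus(active_aus, au_names=AU_NAMES):
    """Turn ChatGPT's list of activated AU names into a 0/1 activation vector u."""
    names = [a.strip().lower() for a in active_aus]
    return [1 if name in names else 0 for name in au_names]

# One record in the spirit of Figure 3: transcript t, emotion tags e,
# AU activation vector u, and situation sentences s.
sample = {
    "transcript": "I'm so angry that I feel like I have to go through life ...",
    "emotions": ["angry", "irritated"],
    "u": encode_aus(["Brow lowerer", "Upper lid raiser"]),
    "situations": ["Stuck in traffic."],
}
```

In the full pipeline, `u` would be 36-dimensional and a situation list of this form would accompany every transcript.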
In addition, we enlist a professional facial animator to devise a mapping rule to convert AUs into blendshape weights. Then, we transform the AU vector u into a set of blendshape weights b ∈ R^52, enabling us to leverage publicly available datasets that represent facial expressions as blendshape weights. Consequently, TEAD can be represented as a set of quadruples: T = {(t, e, b, s)_i | i = 1, ..., N_T}. About 50,000 quadruples are included.

ExpCLIP

We utilize TEAD to train a CLIP-based model, named ExpCLIP. As illustrated in Figure 2, ExpCLIP is an auto-encoder framework designed to align multimodal signals, encompassing diverse human facial expressions, textual descriptions of emotions, and realistic facial images.

Text-to-Expression
We propose a blendshape encoder E that maps a given set of blendshape weights b into an embedding. Subsequently, a decoder D reconstructs the blendshape weights from the embedding. In parallel, we utilize a CLIP text encoder E_text based on the work of (Radford et al. 2021), along with a text projector P_text to map the emotion text, sampled from {t, e, s}, into the joint embedding space. Due to the strong generalization capability of pre-trained CLIP, we keep E_text frozen during training to leverage its acquired extensive knowledge.

Image-to-Expression
We propose an extension to incorporate facial images as emotion prompts by integrating an image encoder into ExpCLIP and aligning the embeddings of images and expressions. A SOTA 3D face reconstruction method (Guo et al. 2022) is used to create paired blendshape weights for facial images. We leverage the pre-trained image encoder E_img from CLIP and employ an image projector P_img to map facial images into the joint embedding space. During this step, we only finetune P_img, while keeping the weights of E_img, E, and D fixed.
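The AU-to-blendshape conversion described above can be sketched as a linear lookup. Note this is only a toy stand-in: the paper's mapping rule was hand-authored by an animator and is not published, so the mapping matrix `M` below is an invented placeholder.

```python
import numpy as np

# Hypothetical AU -> blendshape mapping. Column j of M lists the blendshape
# weights evoked by AU j; in the real rig this table is hand-authored.
N_U, N_BS = 36, 52
rng = np.random.default_rng(0)
M = rng.uniform(0.0, 1.0, size=(N_BS, N_U)) * (rng.random((N_BS, N_U)) < 0.1)

def aus_to_blendshapes(u, mapping=M):
    """Map a 0/1 AU activation vector u (36,) to blendshape weights b (52,)."""
    b = mapping @ np.asarray(u, dtype=float)
    return np.clip(b, 0.0, 1.0)  # blendshape weights live in [0, 1]

u = np.zeros(N_U)
u[[2, 3]] = 1.0  # e.g. brow lowerer + upper lid raiser activated
b = aus_to_blendshapes(u)
```

Any rule of this shape produces the b ∈ R^52 used in the TEAD quadruples (t, e, b, s).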
Objective Function
ExpCLIP is trained via three types of losses: an auto-encoder reconstruction loss, an embedding alignment loss, and a cross-modal reconstruction loss:

L = λ1 L_ae + λ2 L_emb^d + λ3 L_cross^d,   (1)

where d ∈ {text, image}. Note that d = text indicates the text-to-expression training, and d = image refers to the image-to-expression finetuning. The auto-encoder reconstruction loss L_ae measures the L2 distance between the input blendshape weights b and the predicted ones:

L_ae = E_{b∼T} ‖D(E(b)) − b‖.   (2)

We align the embeddings of text (or images) and expressions using a cosine embedding loss:

L_emb^d = 1 − cos(P_d(E_d(S)), E(b)),   (3)

where S represents text in (t, e, s) when d = text, and facial images when d = image. To better improve the multi-modal alignment, we propose a cross-modal reconstruction loss, which enforces the blendshape weights reconstructed from text (or image) embeddings to be close to their paired ones, namely:

L_cross^d = E_{b∼T} ‖D(P_d(E_d(S))) − b‖.   (4)

We propose several strategies to enhance the model's generality. They include fully exploiting the TEAD dataset by randomly extracting samples from t, e, and s as text inputs for training. Additionally, we borrow text augmentation techniques from the natural language processing literature (Wei and Zou 2019), such as stop-word removal, synonym replacement, and sentence shuffling. Furthermore, we augment the blendshape weights by applying minor random perturbations. Experiments show all these strategies improve the model's robustness.

Text-Guided Speech-Driven Facial Animation

Our aim is to automatically generate emotionally expressive facial animation, where the content is determined by speech and the emotional style is controlled by a text prompt. The overview of our method is shown in Figure 2.

Training Workflow
Leveraging the semantic alignment between text and facial expressions achieved by ExpCLIP, we can employ expression prompts for training and text prompts for inference.
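Returning briefly to the ExpCLIP objective, the three losses of Eqs. (1)-(4) can be sketched for a single sample. This is a plain NumPy sketch with the encoders and decoder stubbed out as ordinary functions; in the paper they are trainable networks optimized end to end.

```python
import numpy as np

def l2(x, y):
    return float(np.linalg.norm(x - y))

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-8))

def expclip_loss(b, S_emb, E, D, P, lambdas=(1.0, 10.0, 10.0)):
    """b: blendshape weights; S_emb: frozen CLIP embedding of the text/image.
    E: expression encoder, D: decoder, P: text/image projector (stubs here)."""
    z_b = E(b)                      # expression embedding
    z_s = P(S_emb)                  # projected text/image embedding
    L_ae = l2(D(z_b), b)            # Eq. (2): auto-encoder reconstruction
    L_emb = 1.0 - cosine(z_s, z_b)  # Eq. (3): embedding alignment
    L_cross = l2(D(z_s), b)         # Eq. (4): cross-modal reconstruction
    lam1, lam2, lam3 = lambdas      # paper: lambda1 = 1, lambda2 = lambda3 = 10
    return lam1 * L_ae + lam2 * L_emb + lam3 * L_cross  # Eq. (1)

# Toy check with identity stubs: a perfectly aligned, perfectly
# reconstructed sample drives the total loss to (numerically) zero.
E = D = P = lambda x: x
b = np.array([0.2, 0.0, 0.7])
loss = expclip_loss(b, b.copy(), E, D, P)
```

The λ values match those reported in the implementation details.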
Yet, annotating expression prompts for each animation clip is costly. To address this, we propose a self-attention mechanism to automatically extract representative expressions from the animation clips. Consequently, the training workflow of our model is as follows. Let A_{1:T} = (a_1, ..., a_T) denote a sequence of speech snippets. B_{1:T} = (b_1, ..., b_T) is the synchronized facial animation, where each frame is represented by a set of blendshape weights b_t. We propose a transformer-based self-attention pooling module, denoted as E_sa, to derive the attention weights for individual frames within the animation clip. Subsequently, we aggregate all frames of the clip according to the attention weights, resulting in the generation of the expression prompt b̄. Next, we feed the expression prompt b̄ into the expression encoder E of ExpCLIP to obtain the style embedding. During the training process, the parameters of E are kept frozen. Simultaneously, we utilize a pre-trained wav2vec2.0 model (Baevski et al. 2020) to convert the raw waveform input into contextualized speech features. We employ a transformer decoder to predict B′_{1:T} = (b′_1, ..., b′_T) from the speech features, with the style embedding incorporated into the decoding process via cross-attention. A simple L1 loss is utilized for reconstruction:

L_rec = (1/T) Σ_{t=1}^{T} |b′_t − b_t|.   (5)

Expression Prompt Augmentation
The currently publicly accessible speech-driven facial animation datasets exhibit insufficiency in terms of emotional richness. They are typically constrained to a limited number of coarse-grained emotion labels, thereby impeding the model's ability to handle unseen fine-grained emotions. To address this issue, we propose an Expression Prompt Augmentation (EPA) mechanism. Specifically, we add perturbations to the expression prompts. In order to make the perturbed prompts show certain emotions instead of random weird expressions, the perturbations are obtained by randomly sampling a facial expression b_aug from TEAD.
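The prompt-extraction step of the training workflow above (attention-weighted pooling followed by the L1 reconstruction of Eq. 5) can be sketched as follows. This is a NumPy sketch only: the paper's E_sa is a transformer, so the linear frame scorer here is an assumed stand-in.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pool_expression_prompt(B, w_score):
    """B: (T, 52) blendshape frames -> expression prompt b_bar (52,).
    A per-frame score is softmaxed into attention weights, and the frames
    are averaged under those weights (a convex combination of frames)."""
    attn = softmax(B @ w_score)
    return attn @ B

def l1_reconstruction(B_pred, B_gt):
    """Eq. (5): mean absolute error between predicted and ground-truth frames."""
    return float(np.mean(np.abs(B_pred - B_gt)))

T, n_bs = 64, 52                       # 64-frame clips, 52 blendshapes
rng = np.random.default_rng(2)
B = rng.random((T, n_bs))              # stand-in animation clip
b_bar = pool_expression_prompt(B, rng.normal(size=n_bs))
```

Because pooling is a convex combination, b_bar stays inside the range of valid blendshape weights, so it can be fed directly to the frozen expression encoder E.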
Subsequently, we blend b_aug into the original expression prompt using a random weight λ:

b̄_aug = (1 − λ) b̄ + λ b_aug,  λ ∈ [0, 1].   (6)

Due to the lack of corresponding animations for the perturbed expression prompts, we devise two loss functions to ensure that the generated animations exhibit accurate lip movements and emotional styles. We posit that when a person utters the same sentence with different emotions, their lip movements remain fundamentally consistent, meaning that the displacement between adjacent frames remains consistent. Leveraging this assumption, we formulate a lip motion loss to capture and enforce this consistency in the generated animations. We represent the animation for the perturbed prompt as B′_aug,1:T = (b′_aug,1, ..., b′_aug,T), and the lip motion loss is defined as:

L_lm = (1/(T−1)) Σ_{t=1}^{T−1} |(b′_aug,t+1 − b′_aug,t) − (b_t+1 − b_t)|.   (7)

L_lm only ensures the correctness of lip motion in animations generated based on perturbed prompts, but it does not guarantee the desired style. To address this limitation, we propose a style loss:

L_style = ‖E(E_sa(B′_aug)) − E(b̄_aug)‖.   (8)

L_style ensures the consistency between the emotion of the generated animation B′_aug and the augmented expression prompt b̄_aug.

Figure 4: Multi-modal alignment. Rows 1-2: text-to-expression. Rows 3-4: image-to-expression. Note the last two columns are out-of-domain samples.

Inference Phase
Since the embeddings of text and facial expressions are semantically aligned by ExpCLIP, we can employ text descriptions as prompts to control the emotions of the generated animations. There is no strict requirement for the text to adhere to a specific format. You have the freedom to express your text prompts in any manner you prefer.
For instance, you may provide a precise description of the desired facial expression or simply describe the mood you want to express. Furthermore, in cases where verbal text may not effectively convey the intended emotions, the option of employing a reference facial image as a prompt is also available. This capability is facilitated by ExpCLIP's alignment of facial images and facial expressions.

Experiments

In this section, we first introduce the used datasets and implementation details. Subsequently, we present the capabilities of ExpCLIP in multimodal alignment and discuss several key factors for training ExpCLIP. Finally, we show the promising results of style-controllable speech-driven facial animation, accompanied by detailed comparative experiments and ablation studies.

Datasets

TEAD
We train ExpCLIP using the proposed TEAD, which consists of 50,000 quadruples. Each quadruple includes text, a set of emotion tags, AUs, blendshape weights, and situation sentences. We use 90% of the data for training and the remaining 10% for testing the text-expression alignment of ExpCLIP.

Figure 5: Text-to-image retrieval on MEAD-3D.

Figure 6: (a) t-SNE of expression embeddings in TEAD. (b) Smooth interpolations between two distant expressions.

MEAD-3D
To support the image-expression alignment of ExpCLIP, we generate image-expression paired data based on MEAD. MEAD is a talking-face video corpus featuring 60 actors talking with 8 different emotions at 3 different intensity levels. We sample 150,000 images from MEAD and use a SOTA 3D monocular face reconstruction method (Guo et al. 2022) to obtain the 3D facial meshes. Then we compute the blendshape weights corresponding to each mesh using the SLSQP solver from SciPy (Virtanen et al. 2020).
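The SLSQP blendshape fit just mentioned amounts to a small bound-constrained least-squares problem. A toy sketch (the neutral face, blendshape basis, and dimensions below are invented stand-ins, not the paper's actual rig) might look like:

```python
import numpy as np
from scipy.optimize import minimize

# Toy blendshape solve: find weights w in [0, 1] such that the rigged mesh
# B0 + D @ w best matches a reconstructed target mesh. The real rig has
# 52 blendshapes and thousands of vertices.
rng = np.random.default_rng(1)
n_verts, n_bs = 30, 4
B0 = rng.normal(size=n_verts)            # neutral face (flattened vertices)
D = rng.normal(size=(n_verts, n_bs))     # per-blendshape displacement basis
w_true = np.array([0.3, 0.0, 0.8, 0.1])
mesh = B0 + D @ w_true                   # "reconstructed" target mesh

def residual(w):
    r = B0 + D @ w - mesh
    return float(r @ r)                  # squared L2 fitting error

res = minimize(residual, x0=np.zeros(n_bs), method="SLSQP",
               bounds=[(0.0, 1.0)] * n_bs)
w_fit = res.x
```

SLSQP is a natural fit here because blendshape weights must stay in [0, 1], which is expressed directly through the `bounds` argument.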
We use 90% of the data for training and the remaining 10% for testing the image-expression alignment of ExpCLIP.

BEAT
We use BEAT (Liu et al. 2022) to train the speech-driven facial animation generator. BEAT comprises 76 hours of speech data, paired with 52D facial blendshape weights. The dataset is collected from 30 speakers, who perform in 8 distinct emotional styles and across 4 different languages. For our experiments, we exclusively employ speech data from English speakers, which totals approximately 35 hours.

Implementation Details

Our framework is implemented in PyTorch (Paszke et al. 2019). For ExpCLIP, we train a transformer auto-encoder (Vaswani et al. 2017) with 8 layers for both the encoder E and the decoder D. The text encoder and image encoder from CLIP-ViT-B/32 are utilized. We set the values of λ1 = 1, λ2 = λ3 = 10. ExpCLIP is trained with a learning rate of 1e-5 and a batch size of 256. For the animation generator, we employ a 4-layer transformer encoder as the self-attention pooling module. An 8-layer transformer decoder is used to modulate the speech features and style embeddings. The animation decoder consists of 2 fully connected layers. Each training sample has a duration of 64 frames at 15 FPS. The entire framework is trained using the Adam optimizer (Kingma and Ba 2014) on a single A100 GPU.

Table 1: MSE of blendshape weight reconstruction on the test sets of TEAD and BEAT.

Datasets      TEAD     BEAT
w/o bs aug    0.051    0.069
w/ bs aug     0.012    0.021

Figure 7: Qualitative results of the ablation study for text augmentation. Red words indicate out-of-domain texts.

ExpCLIP Multi-modal Alignment

ExpCLIP aligns expressions, text, and images into a joint embedding space.
We examine this capability through three types of tasks: text-to-expression, text-to-image, and image-to-expression. Text-to-expression transforms text to blendshape weights, which are rendered to meshes for better visualization. In-domain text from the test set and out-of-domain text are collected for evaluation. Image-to-expression can be treated as a 3D face reconstruction task, which transforms images to blendshape weights. Figure 4 shows that ExpCLIP exhibits the ability to generate subtle and nuanced expressions, accommodating various types of textual input such as emotion tags, AU descriptions, and sentences. Furthermore, ExpCLIP also excels in the task of recovering intricate facial expressions from in-the-wild images. As for text-to-image, we employ a text-based image retrieval task to evaluate its performance. We extract image embeddings of the test set of MEAD-3D and employ cosine similarity to retrieve the images that are semantically closest to the embedding of the given text. Figure 5 indicates that ExpCLIP achieves a remarkable alignment between textual and visual semantics.

Expression Manifold Smoothness
We demonstrate the smoothness of the learned expression manifold in Figure 6. We obtain embeddings of facial expressions in the test set of TEAD. These embeddings are then projected onto a 2D space using t-SNE (Van der Maaten and Hinton 2008). Subsequently, we sample pairs of distant points and interpolate between them. The interpolated values are passed through the decoder of ExpCLIP to reconstruct the corresponding blendshape weights. As observed, ExpCLIP achieves smooth semantic transitions between distinct facial expressions.

Ablation Study
We conduct an ablation study to validate the effectiveness of blendshape augmentation. The mean squared error (MSE) of blendshape weight reconstruction on the test sets of TEAD and BEAT is presented in Table 1.
The results demonstrate that blendshape augmentation significantly contributes to reducing the reconstruction error of ExpCLIP. To evaluate the influence of text augmentation, we train a model without text augmentation on TEAD. The qualitative outcomes are depicted in Figure 7. It is evident that text augmentation enhances the consistency between the generated expressions and out-of-domain text prompts.

Emotional Speech-Driven Facial Animation

Baselines
We compare our method with SOTA emotion-controllable talking face generation methods including EAMM (Ji et al. 2022), StyleTalk (Ma et al. 2023), and PD-FGC (Wang et al. 2023). We employ the 3D face reconstruction method (Guo et al. 2022) to reconstruct facial meshes from their generated talking face videos for comparative analysis. All these methods employ facial images or video templates as emotional prompts. For comparison purposes, we manually annotate textual descriptions for each template. Notably, as both PD-FGC and our method accept image-based emotional prompts, we can utilize this setting for comparison. Due to the lack of appropriate quantitative metrics to characterize the precision of emotion control in animation generation, we only conduct qualitative evaluations and user studies to evaluate the above methods.

Qualitative Results
The qualitative results are illustrated in Figure 8. The image and video templates are extracted from the test set of MEAD. Both BEAT and MEAD provide speech samples for evaluation. As can be seen, our method ensures a high level of consistency between the emotional expression in the generated animation and the provided prompt, while also achieving accurate lip synchronization. Even in the more challenging setting of using text as an emotional prompt, our approach achieves high accuracy in emotion control. In comparison, EAMM exhibits inferior lip synchronization and demonstrates significant inconsistency between the emotion of the generated animation and the reference video.
Although StyleTalk and PD-FGC achieve satisfactory lip synchronization, their accuracy in emotion control falls short compared to our approach. Additionally, their control methods are inconvenient for users as they require searching for desired emotional reference templates. In contrast, our text-based control approach is flexible and easy to manipulate.

Figure 8: Qualitative comparisons with SOTA stylized talking face generation methods. Note that the speaking style of our method can be guided by a text description or an emotional image.

Table 2: Results of the user study.

Methods    EAMM   StyleTalk   PD-FGC   Ours-image   Ours-text
Lip sync   2.01   3.55        3.49     3.59         3.63
ECv        2.11   3.63        3.55     3.72         3.65
ECt        1.99   3.48        3.37     3.75         3.93

User Study
We further conduct user studies to evaluate the performance of the comparative methods. We create 10 emotional animations for each method, accompanied by corresponding reference videos or images, as well as text prompts. We invite 20 volunteers to rate these methods from 1 to 5, with higher scores indicating better performance. The volunteers are asked to rate the methods based on the following three aspects: 1. lip synchronization, 2. emotion consistency with the reference video/image, and 3. emotion consistency with the text prompt.

As shown in Table 2, our performance in lip synchronization exceeds that of StyleTalk and PD-FGC, and markedly surpasses EAMM. Regarding emotion consistency with video/image (ECv), our text-based control method approximates the level of StyleTalk and PD-FGC, while our image-based control method exhibits superior performance compared to other methods.
As for emotion consistency with text (ECt), our text-based control method outperforms all other methods, and our image-based control method also surpasses other video- or image-based approaches. The above results demonstrate that our method not only achieves precise lip synchronization but also enables accurate emotion control.

Ablation Study
We conduct ablation studies to validate the effect of EPA and the proposed style loss L_style; the qualitative results are shown in Figure 9. Notably, to better visualize the differences between the comparisons, we utilize blendshape weights as the prompt, rather than text. As illustrated, without EPA, if an unseen emotion is input, the resulting animation exhibits minimal expression. This is due to the limited generalization ability of the model toward unfamiliar emotions. However, the integration of EPA markedly enhances the emotional expressiveness of the animation. Despite this improvement, the consistency with the expression prompt remains somewhat less than ideal. The incorporation of the style loss L_style further intensifies the emotional impact.

Figure 9: Ablation studies of EPA and the style loss.

Conclusion

This paper introduces, for the first time, text-guided emotional speech-driven facial animation. To achieve this, a large-scale text-expression dataset, TEAD, is proposed, and ExpCLIP is trained on this dataset to align features of text and expressions. Experimental results demonstrate that the proposed framework achieves high accuracy and flexibility in emotion-controlled animation generation.

References

Baevski, A.; Zhou, Y.; Mohamed, A.; and Auli, M. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33: 12449–12460.
Cao, H.; Cooper, D. G.; Keutmann, M. K.; Gur, R. C.; Nenkova, A.; and Verma, R.
2014. CREMA-D: Crowd-Sourced Emotional Multimodal Actors Dataset. IEEE Transactions on Affective Computing, 5(4): 377–390.
Chai, Z.; Zhang, T.; He, T.; Tan, X.; Baltrusaitis, T.; Wu, H.; Li, R.; Zhao, S.; Yuan, C.; and Bian, J. 2023. HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details. arXiv preprint arXiv:2303.11225.
Chen, L.; Wu, Z.; Ling, J.; Li, R.; Tan, X.; and Zhao, S. 2022. Transformer-S2A: Robust and efficient speech-to-animation. In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 7247–7251. IEEE.
Cudeiro, D.; Bolkart, T.; Laidlaw, C.; Ranjan, A.; and Black, M. J. 2019. Capture, Learning, and Synthesis of 3D Speaking Styles. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10093–10103.
Daněček, R.; Chhatre, K.; Tripathi, S.; Wen, Y.; Black, M. J.; and Bolkart, T. 2023. Emotional Speech-Driven Animation with Content-Emotion Disentanglement. arXiv preprint arXiv:2306.08990.
Ekman, P.; and Friesen, W. V. 1978. Facial Action Coding System. Environmental Psychology & Nonverbal Behavior.
Fan, Y.; Lin, Z.; Saito, J.; Wang, W.; and Komura, T. 2021. FaceFormer: Speech-Driven 3D Facial Animation with Transformers. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 18749–18758.
Guo, J.; Yu, J.; Lattas, A.; and Deng, J. 2022. Perspective reconstruction of human faces by joint mesh and landmark regression. In European Conference on Computer Vision, 350–365. Springer.
Ji, X.; Zhou, H.; Wang, K.; Wu, Q.; Wu, W.; Xu, F.; and Cao, X. 2022. EAMM: One-shot emotional talking face via audio-based emotion-aware motion model. In ACM SIGGRAPH 2022 Conference Proceedings, 1–10.
Ji, X.; Zhou, H.; Wang, K.; Wu, W.; Loy, C. C.; Cao, X.; and Xu, F. 2021. Audio-driven emotional video portraits. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14080–14089.
Karras, T.; Aila, T.; Laine, S.; Herva, A.; and Lehtinen, J. 2017. Audio-driven facial animation by joint end-to-end learning of pose and emotion. ACM Transactions on Graphics (TOG), 36(4): 1–12.
Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Kollias, D. 2022. ABAW: Valence-arousal estimation, expression recognition, action unit detection & multi-task learning challenges. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2328–2336.
Lei, B.; Ren, J.; Feng, M.; Cui, M.; and Xie, X. 2023. A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 394–403.
Liang, B.; Pan, Y.; Guo, Z.; Zhou, H.; Hong, Z.; Han, X.; Han, J.; Liu, J.; Ding, E.; and Wang, J. 2022. Expressive talking head generation with granular audio-visual control. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3387–3396.
Liu, H.; Zhu, Z.; Iwamoto, N.; Peng, Y.; Li, Z.; Zhou, Y.; Bozkurt, E.; and Zheng, B. 2022. BEAT: A large-scale semantic and emotional multi-modal dataset for conversational gestures synthesis. In European Conference on Computer Vision, 612–630. Springer.
Ma, Y.; Wang, S.; Hu, Z.; Fan, C.; Lv, T.; Ding, Y.; Deng, Z.; and Yu, X. 2023. StyleTalk: One-shot talking head generation with controllable speaking styles. arXiv preprint arXiv:2301.01081.
Mohammad, S. M.; and Bravo-Marquez, F. 2017. Emotion Intensities in Tweets. In Proceedings of the Sixth Joint Conference on Lexical and Computational Semantics (*SEM). Vancouver, Canada.
OpenAI. 2023. ChatGPT. https://chat.openai.com/chat. Accessed: 2023-05-14.
Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR.
Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2023. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125.
Richard, A.; Zollhoefer, M.; Wen, Y.; la Torre, F. D.; and Sheikh, Y. 2021. MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 1153–1162.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684–10695.
Sadoughi, N.; and Busso, C. 2019. Speech-driven expressive talking lips with conditional sequential generative adversarial networks. IEEE Transactions on Affective Computing, 12(4): 1031–1044.
Schaldenbrand, P.; Liu, Z.; and Oh, J. 2021. StyleCLIPDraw: Coupling content and style in text-to-drawing synthesis. arXiv preprint arXiv:2111.03133.
Sinha, S.; Biswas, S.; Yadav, R.; and Bhowmick, B. 2022. Emotion-controllable generalized talking face generation. arXiv preprint arXiv:2205.01155.
Tevet, G.; Gordon, B.; Hertz, A.; Bermano, A. H.; and Cohen-Or, D. 2022. MotionCLIP: Exposing human motion generation to CLIP space. In European Conference on Computer Vision, 358–374. Springer.
Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Virtanen, P.; Gommers, R.; Oliphant, T. E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. 2020. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods, 17(3): 261–272.
Wang, D.; Deng, Y.; Yin, Z.; Shum, H.-Y.; and Wang, B. 2023. Progressive Disentangled Representation Learning for Fine-Grained Controllable Talking Head Synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17979–17989.
Wang, K.; Wu, Q.; Song, L.; Yang, Z.; Wu, W.; Qian, C.; He, R.; Qiao, Y.; and Loy, C. C. 2020. MEAD: A large-scale audio-visual dataset for emotional talking-face generation. In European Conference on Computer Vision, 700–717. Springer.
Wei, J.; and Zou, K. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196.
Wu, H.; Jia, J.; Wang, H.; Dou, Y.; Duan, C.; and Deng, Q. 2021. Imitating arbitrary talking style for realistic audio-driven talking face synthesis. In Proceedings of the 29th ACM International Conference on Multimedia, 1478–1486.
Xing, J.; Xia, M.; Zhang, Y.; Cun, X.; Wang, J.; and Wong, T.-T. 2023. CodeTalker: Speech-driven 3D facial animation with discrete motion prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12780–12790.
Zhen, R.; Song, W.; He, Q.; Cao, J.; Shi, L.; and Luo, J. 2023. Human-computer interaction system: A survey of talking-head generation. Electronics, 12(1): 218.
Learning Image Demoiréing from Unpaired Real Data

Yunshan Zhong1,2, Yuyao Zhou2,3, Yuxin Zhang2,3, Fei Chao2,3, Rongrong Ji1,2,3,4*
1Institute of Artificial Intelligence, Xiamen University. 2Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University. 3Department of Artificial Intelligence, School of Informatics, Xiamen University. 4Peng Cheng Laboratory.
{zhongyunshan,yuyaozhou,yuxinzhang}@stu.xmu.edu.cn {fchao, rrji}@xmu.edu.cn

Abstract

This paper addresses the issue of image demoiréing. Unlike the large volume of existing studies that rely on learning from paired real data, we attempt to learn a demoiréing model from unpaired real data, i.e., moiré images associated with irrelevant clean images. The proposed method, referred to as Unpaired Demoiréing (UnDeM), synthesizes pseudo moiré images from unpaired datasets, generating pairs with clean images for training demoiréing models. To achieve this, we divide real moiré images into patches and group them in compliance with their moiré complexity. We introduce a novel moiré generation framework to synthesize moiré images with diverse moiré features, resembling real moiré patches, and details akin to real moiré-free images. Additionally, we introduce an adaptive denoise method to eliminate the low-quality pseudo moiré images that adversely impact the learning of demoiréing models. We conduct extensive experiments on the commonly-used FHDMi and UHDM datasets. Results show that our UnDeM outperforms existing methods when used with existing demoiréing models such as MBCNN and ESDNet-L. Code: https://github.com/zysxmu/UnDeM.

Introduction

Contemporary society is awash with electronic screens for presenting images, text, video, etc. With the widespread availability of portable camera devices such as smartphones, people have grown accustomed to using them for quick information recording.
Unfortunately, a common issue arises from the intrinsic interference between the camera's color filter array (CFA) and the LCD subpixel layout of the screen (Yu et al. 2022), which contaminates captured pictures with rainbow-shaped stripes known as moiré patterns (Sun, Yu, and Wang 2018; Yang et al. 2017b). These moiré patterns vary in thickness, frequency, layout, and color, and degrade the perceptual quality of captured pictures. Consequently, there has been considerable academic and industrial interest in developing demoiréing algorithms to rectify the issue.

*Corresponding Author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Illustration of image moiré. Natural moiré patterns are complex, with varying thicknesses, frequencies, layouts, and colors across images and within an image.

Primitive research on demoiréing is mostly established upon image priors (Dabov et al. 2007; Cho et al. 2011) or traditional machine learning methods (Liu, Yang, and Yue 2015; Yang et al. 2017a), which have been demonstrated to be inadequate for tackling moiré patterns of drastic variations (Zheng et al. 2021). Fortunately, convolutional neural networks (CNNs) have become a de facto infrastructure for the success of various computer vision tasks, including image demoiréing (He et al. 2020; Cheng, Fu, and Yang 2019; He et al. 2019; Liu et al. 2020; Sun, Yu, and Wang 2018; Yuan et al. 2019; Zheng et al. 2021; Yu et al. 2022; Liu, Shu, and Wu 2018; Gao et al. 2019). These CNN-based methods are typically trained on extensive pairs of moiré-free and moiré images in a supervised manner to model the demoiréing mapping. However, it is challenging to collect paired images given the fact, illustrated in Fig. 1, that natural moiré patterns feature varying thicknesses, frequencies, layouts, and colors (Zheng et al. 2021).
We can easily access moiré images as well as moiré-free images, but they are mostly unpaired. Although many studies try to capture image pairs from digital screens (He et al. 2020; Yu et al. 2022), their quality is limited by three factors. First, acquiring high-quality image pairs involves professional camera-position adjustments and even special hardware (Yu et al. 2022). Second, burdensome manpower is required to select well-aligned moiré-free and moiré pairs. Third, the moiré contents captured under highly controlled lab environments are very uniform, whereas image pairs with more diverse moiré patterns are needed to improve demoiréing models.

Synthesizing moiré images has therefore attracted increasing attention recently. Given the moiré-free screenshots illustrated in Fig. 2a, shooting simulation methods (Liu, Shu, and Wu 2018; Yuan et al. 2019; Niu, Guo, and Wang 2021) simulate the aliasing between the CFA and the screen's LCD subpixels to produce the corresponding paired moiré images in Fig. 2b. However, the synthetic images fail to capture the characteristics of real moiré patterns, leading to a large domain gap, which we analyze from two aspects. First, the synthetic moiré images are much darker and cannot capture the light quality well, destroying the context of the viewing environment and obscuring image details. Second, the synthetic moiré patterns lack authenticity, as the thicknesses, frequencies, layouts, and colors of moiré stripes are almost the same within an image. In Table 1 and Table 2 of the experimental section, we apply shooting-simulated moiré images to train demoiréing CNNs; the trained models generalize poorly to real-world test datasets. In (Park et al. 2022), Park et al. introduced a cyclic moiré learning method, and we observe better performance than shooting simulation in Table 1 and Table 2.
However, the generated pseudo moiré fails to accurately model real moiré patterns, as illustrated in Fig. 2c, leading to limited performance. Therefore, a better method for synthesizing moiré images is desired.

In this paper, we present a novel method, dubbed UnDeM, to learn demoiréing from unpaired real moiré and clean images that are fairly easy to collect, for example, by taking random screenshots and random photos of a digital screen. As displayed in Fig. 2d, the basic objective of UnDeM is to synthesize moiré images that possess the moiré features of real moiré images and the details of real moiré-free images. The synthesized pseudo moiré images then form pairs with the real moiré-free images for training demoiréing networks. To this end, as shown in Fig. 3, we first split images into patches. These moiré patches are further grouped using a moiré prior that takes into consideration the frequency and color information in each patch (Zhang et al. 2023). Consequently, moiré patches within each group have similar complexity, so they can be better processed by an individual moiré synthesis network. Specifically, the introduced synthesis network contains four modules: a moiré feature encoder to extract moiré features of real moiré patches, a generator to synthesize pseudo moiré patches, a discriminator to distinguish real from pseudo moiré patches, and a content encoder to retain the content information of real clean patches in the synthesized pseudo moiré patches. The whole framework is trained in an adversarial manner (Goodfellow et al. 2014) for better moiré image generation. Before being paired with real moiré-free images for training demoiréing networks, the synthesized moiré patches further undergo an adaptive denoise process to rule out low-quality moiré patterns that suffer from image-detail loss.
Concretely, we find that low-quality pseudo moiré leads to a large structure difference from its moiré-free counterpart, so such pairs can be removed if the difference score is beyond a threshold adapted to a particular percentile of the overall structure differences. Experiments in Table 1 and Table 2 demonstrate that the proposed UnDeM improves over the compared baselines by a large margin on real moiré image datasets. For example, when trained with a crop size of 384, MBCNN (Zheng et al. 2020) trained on the synthetic images from our UnDeM achieves 19.89 dB in PSNR on FHDMi (He et al. 2020), versus 19.36 dB from cyclic moiré learning (Park et al. 2022) and only 9.32 dB from shooting simulation. Such results not only demonstrate our efficacy, but also offer a new moiré generation method to the demoiréing community.

Related Work

Image Demoiréing

Image demoiréing targets removing moiré patterns from captured photos. Earlier studies resort to presumed properties of moiré patterns, such as space-variant filters (Siddiqui, Boutin, and Bouman 2009; Sun, Li, and Sun 2014), low-rank constrained sparse matrix decomposition (Liu, Yang, and Yue 2015; Yang et al. 2017a), and layer decomposition (Yang et al. 2017b). Along with the surge of deep learning in many computer vision tasks, demoiréing has also recently benefited from convolutional neural networks (CNNs). As the pioneering study, Sun et al. (Sun, Yu, and Wang 2018) developed DMCNN, a multi-scale CNN, to remove moiré patterns at different frequencies and scales. He et al. (He et al. 2019) proposed MopNet, which is specially designed for the unique properties of moiré patterns, including frequencies, colors, and appearances. Zheng et al. (Zheng et al. 2020) introduced a multi-scale bandpass convolutional neural network (MBCNN) that consists of a learnable bandpass filter and a two-step tone-mapping strategy to deal with the frequency prior and color shift, respectively. Liu et al. (Liu et al.
2020) designed WDNet, which removes moiré patterns in the wavelet domain to effectively separate them from image details. In (He et al. 2020), a multi-stage framework, FHDe2Net, is proposed. FHDe2Net employs a global-to-local cascaded removal branch to erase multi-scale moiré patterns and a frequency-based branch to preserve fine details. Yu et al. (Yu et al. 2022) designed ESDNet, which utilizes a computationally efficient semantic-aligned scale-aware module to enhance the network's capability. However, all these approaches require large amounts of moiré and moiré-free pairs. To address this limitation, a cycle loss has been constructed to simultaneously train a pseudo moiré generator and a demoiréing network (Park et al. 2022; Yue et al. 2021). Very differently, our proposed UnDeM does not involve a demoiréing network in the moiré synthesis stage.

Moiréing Dataset

Since data-driven CNN-based algorithms require large amounts of paired moiré and moiré-free images for training, many efforts have been devoted to constructing large-scale image pairs.

Figure 2: Visual examples of (a) real moiré-free images; (b) pseudo moiré images by shooting simulation (Niu, Guo, and Wang 2021); (c) pseudo moiré images by cyclic learning (Park et al. 2022); (d) pseudo moiré images by our UnDeM. Compared with the detail-missing and inauthentic pseudo moiré images from shooting simulation and (Park et al. 2022), ours exhibit more diverse moiré patterns and preserve more details of the moiré-free images. Best viewed by zooming in.

Sun et al. (Sun, Yu, and Wang 2018) built the first real-world moiré image dataset from ImageNet (Russakovsky et al. 2015). He et al. (He et al.
2020) proposed the first high-resolution moiré image dataset, FHDMi, to satisfy practical applications in the real world. Yu et al. (Yu et al. 2022) further proposed the ultra-high-definition demoiréing dataset UHDM containing 4K images. Nevertheless, the data preparation process requires huge human effort, and the resulting datasets are confined to limited scenes. To avoid the drudgery of collecting real-world paired moiré and moiré-free images, shooting simulation, which simulates the camera imaging process, becomes a more valuable approach (Liu, Shu, and Wu 2018; Yuan et al. 2019). However, the synthetic data fails to model the real imaging process, leading to a large domain gap between synthetic and real data. As a result, demoiréing models trained on synthetic data are incapable of handling real-world scenarios.

Methodology

Our UnDeM consists of image preprocessing, a moiré synthesis network, and adaptive denoise, which are detailed one by one in the following.

Image Preprocessing

Moiré patterns vary significantly even within a single image, and it is challenging for a single network to learn all cases. To better learn from these different moiré patterns, we apply an isolated moiré synthesis network to each set of moiré patterns with similar complexity. We first split the images in the moiré set $I^m$ into non-overlapping patches, leading to a moiré patch set $P^m = \{p^m_i\}_{i=1}^N$, where $N$ is the number of patches in the whole moiré patch set. Similarly, we have an $M$-size moiré-free patch set $P^f = \{p^f_i\}_{i=1}^M$ for $I^f$. As illustrated in Fig. 3, we divide the moiré set $P^m$ into $K$ subsets $P^m = P^m_1 \cup P^m_2 \cup \ldots \cup P^m_K$. Each $P^m_j$ contains moiré patches of similar complexity, and any two subsets are disjoint. Zhang et al. (Zhang et al. 2023) showed that a perceptible moiré pattern is highlighted by either high frequency or rich color information. Following (Zhang et al.
2023), given a moiré patch $p^m \in P^m$, its frequency is measured by a Laplacian edge-detection operator $F(p^m)$ with a kernel size of 3 (Marr and Hildreth 1980). In addition, the colorfulness, denoted as $C(p^m)$, is a linear combination of the mean and standard deviation of the pixel cloud in the color planes of the RGB colour space (Hasler and Suesstrunk 2003):

$$C(p^m) = \sqrt{\sigma^2(p^m_R - p^m_G) + \sigma^2\big(0.5(p^m_R + p^m_G) - p^m_B\big)} + 0.3\sqrt{\mu^2(p^m_R - p^m_G) + \mu^2\big(0.5(p^m_R + p^m_G) - p^m_B\big)}, \quad (1)$$

where $\sigma(\cdot)$ and $\mu(\cdot)$ return the standard deviation and mean value of their inputs, and $p^m_R$, $p^m_G$, and $p^m_B$ denote the red, green, and blue color channels of $p^m$.

Figure 3: Image preprocessing. Both moiré images $I^m$ and unpaired moiré-free images $I^f$ are split into patches. Patches from moiré images are further grouped in compliance with the complexity of their moiré patterns.

We set $K = 4$ and obtain four evenly-sized subsets of moiré patches, each of which has distinctive moiré features. The first group $P^m_1$ contains the patches with the $N/4$ smallest values of $F(p^m) \cdot C(p^m)$, so it has moiré patterns of low frequency and less color. We sort the remaining patches from smallest to largest by a new metric, $F(p^m)/C(p^m)$. Then $P^m_2$ consists of the first $N/4$ patches, highlighted by low frequency but rich color. The middle $N/4$ patches form $P^m_3$, featuring high frequency and rich color. The $N/4$ patches with the largest scores, featuring high frequency but less color, make up $P^m_4$. Fig. 4 gives some visual examples.

Moiré Synthesis Network

Fig. 5 depicts the overall framework of our moiré synthesis network $T_i$, which learns moiré patterns from the group $P^m_i$. It consists of a moiré feature encoder $E^m$, a generator $G^m$, a discriminator $D^m$, and a content encoder $E^c$.
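Returning to the preprocessing step, the frequency–colorfulness grouping can be sketched as below. This is an illustrative numpy version under our own naming: the function names, the 3×3 Laplacian kernel, and the `eps` guard against division by zero are our assumptions, not the authors' released code.

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def frequency_score(gray):
    """F(p): mean absolute response of a 3x3 Laplacian edge detector."""
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    response = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            response += LAPLACIAN[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.abs(response).mean()

def colorfulness(patch):
    """C(p) of Eq. (1): Hasler-Suesstrunk colorfulness of an RGB patch."""
    r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
    rg, yb = r - g, 0.5 * (r + g) - b
    return (np.sqrt(rg.std() ** 2 + yb.std() ** 2)
            + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))

def group_patches(patches, eps=1e-8):
    """Split patches into four equal-size groups P1..P4 by moire complexity."""
    f = np.array([frequency_score(p.mean(axis=-1)) for p in patches])
    c = np.array([colorfulness(p) for p in patches])
    n = len(patches)
    by_fc = np.argsort(f * c)                  # ascending F(p) * C(p)
    p1, rest = by_fc[:n // 4], by_fc[n // 4:]  # P1: low frequency, less color
    rest = rest[np.argsort(f[rest] / (c[rest] + eps))]  # ascending F(p)/C(p)
    p2, p3, p4 = rest[:n // 4], rest[n // 4:n // 2], rest[n // 2:]
    return p1, p2, p3, p4
```

Each of the four returned index groups then feeds its own synthesis network $T_i$, matching the paper's choice of $K = 4$.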
Given an unpaired moiré patch $p^m \in P^m_i$ and a moiré-free patch $p^f \in P^f$, our motivation is to produce a pseudo moiré patch $\tilde{p}^m$ that possesses the moiré pattern of $p^m$ while retaining the image details of $p^f$, such that $(\tilde{p}^m, p^f)$ forms a moiré and moiré-free pair to guide the learning of existing demoiréing networks.

Figure 4: An illustration of moiré images of each group. Each group has its own moiré pattern complexity.

To fulfill this objective, the moiré feature encoder $E^m$ extracts the moiré features of the real moiré patch $p^m$, denoted as $F^m$:

$$F^m = E^m(p^m). \quad (2)$$

Then, the generator $G^m$ synthesizes a pseudo moiré patch $\tilde{p}^m$ with $F^m$ and $p^f$ as its inputs:

$$\tilde{p}^m = G^m\big(\mathrm{Con}(F^m, p^f)\big), \quad (3)$$

where $\mathrm{Con}(\cdot, \cdot)$ indicates the concatenation operation. The discriminator $D^m$ cooperates with the generator $G^m$ in an adversarial training manner (Goodfellow et al. 2014) for better pseudo moiré patches. The generator $G^m$ is trained to trick the discriminator $D^m$ by:

$$\mathcal{L}_{\text{dis-G}} = \big(D^m(\tilde{p}^m) - 1\big)^2. \quad (4)$$

The least-squares loss function (Mao et al. 2017) is used for better training stability. Also, $D^m$ is trained to distinguish the pseudo moiré patch $\tilde{p}^m$ from the real $p^m$:

$$\mathcal{L}_{\text{dis-D}} = D^m(\tilde{p}^m)^2 + \big(D^m(p^m) - 1\big)^2. \quad (5)$$

The loss functions of Eq. (4) and Eq. (5) are optimized in a min-max game. As a result, $D^m$ learns to distinguish pseudo moiré from real moiré images, while the moiré feature encoder $E^m$ is forced to extract moiré features appropriately and the generator $G^m$ learns to synthesize realistic, in-distribution pseudo moiré images. In addition, we also require the moiré features of the synthesized $\tilde{p}^m$ to follow those of the real $p^m$:

$$\tilde{F}^m = E^m(\tilde{p}^m), \quad (6)$$

$$\mathcal{L}_{\text{fea}} = \|\tilde{F}^m - F^m\|_1, \quad (7)$$

where $\|\cdot\|_1$ denotes the $\ell_1$ loss. To well pair $\tilde{p}^m$ and $p^f$, $\tilde{p}^m$ is also expected to have the content details of $p^f$. An additional content encoder $E^c$ is introduced to align content features between $\tilde{p}^m$ and $p^f$:

$$\mathcal{L}_{\text{con}} = \|E^c(\tilde{p}^m) - E^c(p^f)\|_1. \quad (8)$$

Combining Eq. (4), Eq. (5), Eq. (7), and Eq.
(8) leads to our final loss function:

$$\mathcal{L} = \mathcal{L}_{\text{dis-G}} + \mathcal{L}_{\text{dis-D}} + \mathcal{L}_{\text{fea}} + \mathcal{L}_{\text{con}}. \quad (9)$$

Figure 5: Framework of our moiré synthesis network.

Figure 6: Examples of low-quality pseudo moiré images: (a) moiré-free patches $p^f$; (b) low-quality pseudo moiré images $\tilde{p}^m$.

Adaptive Denoise

After training our moiré synthesis networks $\{T_i\}_{i=1}^4$, the pseudo moiré patches $\tilde{p}^m$, paired with the corresponding moiré-free patches $p^f$, form the dataset for training demoiréing networks. Unfortunately, we find that some pseudo moiré patches occasionally suffer from low-quality issues. Some examples are shown in Fig. 6, where the contents and details of $p^f$ are destroyed in $\tilde{p}^m$. Such noisy data hinders the learning of demoiréing models. Fortunately, we observe in Fig. 6 that the ruined structure is mostly attributable to edge information. Therefore, we calculate the edge map of each patch with the Laplacian edge-detection operator, and the structure difference is computed by summing the absolute values of the edge differences within each pseudo pair. Low-quality pseudo moiré leads to a large structure-difference score, and we rule out these pairs whenever the score is beyond a threshold set adaptively at the $\gamma$-th percentile of the structure differences over a total of $N$ pseudo pairs. We conduct the above process for each synthesis network $T_i$ with a corresponding $\gamma_i$ to remove low-quality pseudo moiré. We find that $N = 6{,}400$ already performs well, and consequently we obtain better performance.
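The objective of Eqs. (4)–(9) can be written out numerically as below. This is a minimal numpy illustration with our own helper names; in practice $G^m$ and $D^m$ are updated alternately on their respective terms rather than through one summed scalar.

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute error, standing in for the l1 terms of Eqs. (7)-(8)."""
    return np.abs(a - b).mean()

def loss_dis_g(d_fake):
    """Eq. (4): the generator pushes D(fake) toward the 'real' label 1."""
    return np.mean((d_fake - 1.0) ** 2)

def loss_dis_d(d_fake, d_real):
    """Eq. (5): the discriminator pushes fakes toward 0 and reals toward 1."""
    return np.mean(d_fake ** 2) + np.mean((d_real - 1.0) ** 2)

def total_loss(d_fake, d_real, f_fake, f_real, c_fake, c_free):
    """Eq. (9): adversarial terms plus moire-feature and content alignment."""
    return (loss_dis_g(d_fake) + loss_dis_d(d_fake, d_real)
            + l1_loss(f_fake, f_real)     # Eq. (7), features from E^m
            + l1_loss(c_fake, c_free))    # Eq. (8), features from E^c
```

The least-squares form of the two adversarial terms follows LSGAN (Mao et al. 2017), as the paper states, which avoids the vanishing gradients of the original sigmoid-cross-entropy GAN loss.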
In summary, our UnDeM consists of 1) training a moiré synthesis network for synthesizing pseudo moiré images, and 2) training a demoiréing model on the output of the trained moiré synthesis network. This paper focuses on moiré image generation; for demoiréing models, we directly borrow from existing studies. Details of the training algorithms are listed in the supplementary materials.

Experiments

Implementation Details

Datasets. The public demoiréing datasets used in this paper are the FHDMi (He et al. 2020) and UHDM (Yu et al. 2022) datasets. The FHDMi dataset consists of 9,981 image pairs for training and 2,019 image pairs for testing at 1920×1080 resolution. The UHDM dataset contains 5,000 image pairs at 4K resolution in total, of which 4,500 are used for training and 500 for testing. We use the training set to train the proposed moiré synthesis network. For image preprocessing, we crop the training images of FHDMi into 8 patches. For UHDM, which involves higher-resolution images, we crop the training images into 6 patches. During training, the moiré patch $p^m$ and moiré-free patch $p^f$ are selected from different original images (before image preprocessing) to ensure they are unpaired.

Networks. We implement our UnDeM in the PyTorch framework (Paszke et al. 2019). The architecture of the moiré synthesis network is largely based on (Hu et al. 2019; Liu et al. 2021). $E^m$ and $E^c$ contain one convolutional layer and two residual blocks. $G^m$ contains three convolutional layers, nine residual blocks, and two deconvolutional layers, and ends with a convolutional layer to produce the final output. Each residual block consists of two convolutional layers, each followed by instance normalization and a ReLU function (Ulyanov, Vedaldi, and Lempitsky 2016). The convolutional layers have 16 channels for $E^m$ and $E^c$ and 128 for $G^m$. $D^m$ is borrowed from PatchGAN (Isola et al.
2017) and consists of three convolutional layers with a stride of 2, two convolutional layers with a stride of 1, and an average pooling layer at the end. For demoiréing models, we utilize MBCNN (Zheng et al. 2020) and ESDNet-L (Yu et al. 2022), a large version of ESDNet. The moiré synthesis network is trained using the Adam optimizer (Kingma and Ba 2014), with the first and second momentum set to 0.9 and 0.999, respectively. We train for 100 epochs with a batch size of 4 and an initial learning rate of 2×10⁻⁴, which is linearly decayed to 0 over the last 50 epochs. Besides, we apply different random crop sizes to the image patches after image preprocessing to validate the flexibility of our method for synthesizing pseudo moiré images. The crop sizes are set to 192×192 and 384×384 for FHDMi, and 192×192, 384×384, and 768×768 for UHDM. As for the demoiréing models, we retain the same training configurations as the original papers, except that all models are trained for 150 epochs for a fair comparison. All networks are initialized from a Gaussian distribution with a mean of 0 and a standard deviation of 0.02. The γ1, γ2, γ3, and γ4 for adaptive denoise are empirically set to 50, 40, 30, and 20, respectively¹. All experiments are run on NVIDIA A100 GPUs.

Table 1: Quantitative results on the FHDMi dataset. "C.S." denotes the random-crop size and "Paired" denotes real paired data.

Model    | C.S. | Method   | PSNR↑ | SSIM↑ | LPIPS↓
MBCNN    | 192  | Paired   | 22.49 | 0.815 | 0.191
MBCNN    | 192  | Shooting | 10.66 | 0.477 | 0.570
MBCNN    | 192  | Cyclic   | 19.15 | 0.722 | 0.257
MBCNN    | 192  | UnDeM    | 19.45 | 0.732 | 0.230
MBCNN    | 384  | Paired   | 22.73 | 0.819 | 0.182
MBCNN    | 384  | Shooting |  9.32 | 0.513 | 0.572
MBCNN    | 384  | Cyclic   | 19.36 | 0.733 | 0.265
MBCNN    | 384  | UnDeM    | 19.89 | 0.735 | 0.226
ESDNet-L | 192  | Paired   | 22.86 | 0.823 | 0.143
ESDNet-L | 192  | Shooting | 10.06 | 0.558 | 0.487
ESDNet-L | 192  | Cyclic   | 19.09 | 0.738 | 0.241
ESDNet-L | 192  | UnDeM    | 19.38 | 0.749 | 0.228
ESDNet-L | 384  | Paired   | 23.45 | 0.834 | 0.134
ESDNet-L | 384  | Shooting |  9.81 | 0.553 | 0.512
ESDNet-L | 384  | Cyclic   | 19.05 | 0.715 | 0.273
ESDNet-L | 384  | UnDeM    | 19.66 | 0.747 | 0.205
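The adaptive denoise step configured by the γ_i values above can be sketched as follows. This is a numpy illustration with our own helper names; it follows the paper's description of summing absolute edge differences per pair and dropping pairs whose score exceeds the γ-th percentile.

```python
import numpy as np

def edge_map(gray):
    """3x3 Laplacian edge response of a grayscale patch."""
    kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def adaptive_denoise(pseudo, clean, gamma):
    """Return indices of pairs whose structure-difference score is within the
    gamma-th percentile; larger scores indicate destroyed image details."""
    scores = np.array([np.abs(edge_map(pm) - edge_map(pf)).sum()
                       for pm, pf in zip(pseudo, clean)])
    threshold = np.percentile(scores, gamma)
    return [i for i, s in enumerate(scores) if s <= threshold]
```

Because the threshold is a percentile of the observed scores rather than a fixed constant, the filter adapts per group, which is why each synthesis network $T_i$ can use its own $\gamma_i$.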
Evaluation Protocols. We adopt the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) (Wang et al. 2004), and LPIPS (Zhang et al. 2018) to quantitatively evaluate the performance of demoiréing models.

Quantitative Results

FHDMi. We first analyze the performance on the FHDMi dataset by comparing our UnDeM against the baselines, i.e., shooting simulation (Niu, Guo, and Wang 2021) and cyclic learning (Park et al. 2022). Table 1 shows that the performance of demoiréing models trained on data produced by shooting simulation is extremely poor. For example, MBCNN obtains only 10.66 dB of PSNR when trained with a 192×192 crop size, which indicates a large domain gap between the pseudo and real data. Both the cyclic learning method (Park et al. 2022) and our UnDeM exhibit much better results. Moreover, compared with (Park et al. 2022), our UnDeM successfully models the moiré patterns and thus presents the highest performance.

¹Ablations on γi and each component in UnDeM are provided in the supplementary materials.

Table 2: Quantitative results on the UHDM dataset. "C.S." denotes the random-crop size and "Paired" denotes real paired data. "†" indicates results directly copied from (Yu et al. 2022).

Model    | C.S. | Method   | PSNR↑ | SSIM↑ | LPIPS↓
MBCNN    | 192  | Paired   | 20.14 | 0.760 | 0.346
MBCNN    | 192  | Shooting |  8.99 | 0.528 | 0.632
MBCNN    | 192  | Cyclic   | 17.42 | 0.663 | 0.464
MBCNN    | 192  | UnDeM    | 17.96 | 0.673 | 0.425
MBCNN    | 384  | Paired   | 20.14 | 0.759 | 0.356
MBCNN    | 384  | Shooting |  9.27 | 0.538 | 0.603
MBCNN    | 384  | Cyclic   | 17.68 | 0.665 | 0.476
MBCNN    | 384  | UnDeM    | 17.78 | 0.668 | 0.401
MBCNN    | 768  | Paired†  | 21.41 | 0.793 | 0.332
MBCNN    | 768  | Shooting |  9.33 | 0.543 | 0.605
MBCNN    | 768  | Cyclic   | 17.98 | 0.719 | 0.503
MBCNN    | 768  | UnDeM    | 18.13 | 0.723 | 0.360
ESDNet-L | 192  | Paired   | 21.30 | 0.786 | 0.258
ESDNet-L | 192  | Shooting |  9.80 | 0.606 | 0.544
ESDNet-L | 192  | Cyclic   | 18.02 | 0.659 | 0.371
ESDNet-L | 192  | UnDeM    | 18.30 | 0.662 | 0.365
ESDNet-L | 384  | Paired   | 21.18 | 0.785 | 0.257
ESDNet-L | 384  | Shooting | 10.27 | 0.604 | 0.522
ESDNet-L | 384  | Cyclic   | 17.75 | 0.679 | 0.404
ESDNet-L | 384  | UnDeM    | 18.18 | 0.688 | 0.361
ESDNet-L | 768  | Paired†  | 22.12 | 0.799 | 0.245
ESDNet-L | 768  | Shooting |  9.80 | 0.599 | 0.542
ESDNet-L | 768  | Cyclic   | 18.00 | 0.697 | 0.423
ESDNet-L | 768  | UnDeM    | 18.40 | 0.713 | 0.344
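The PSNR figures quoted throughout follow the standard definition; for reference, a minimal implementation (our own helper, with images assumed to lie in [0, data_range]):

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((np.asarray(pred, dtype=np.float64)
                   - np.asarray(target, dtype=np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```

For example, a uniform error of 0.1 on a [0, 1]-scaled image yields an MSE of 0.01 and hence a PSNR of 20 dB, which puts the roughly 1 dB gaps between UnDeM and cyclic learning in perspective.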
For instance, MBCNN obtains 19.45 dB and 19.89 dB of PSNR when trained with crop sizes of 192 and 384, respectively. For ESDNet-L, the PSNR results are 19.38 dB and 19.66 dB, respectively. Correspondingly, the SSIM and LPIPS of our UnDeM also exhibit much better performance than shooting simulation and cyclic learning.

UHDM. The results on the UHDM dataset are provided in Table 2. Demoiréing models trained on shooting simulation still fail to deal with the real data, and cyclic learning provides better results. More importantly, our UnDeM surpasses these two methods across different networks and training sizes. Specifically, UnDeM increases the PSNR by 0.54 dB, 0.10 dB, and 0.15 dB when training MBCNN with crop sizes of 192, 384, and 768, respectively. For ESDNet-L, the PSNR gains are 0.28 dB, 0.43 dB, and 0.40 dB, respectively. To summarize Table 1 and Table 2, the transferability of our produced moiré images to downstream demoiréing tasks and the efficacy of our UnDeM over existing methods are well demonstrated.

Qualitative Results

Qualitative comparisons of demoiréing results on the UHDM dataset are presented in Fig. 7, with additional results provided in the supplementary materials.

Figure 7: Visualization of demoiréing results of MBCNN (crop size: 768) on the UHDM dataset: (a) moiré images; (b) demoiréing results by shooting simulation (Niu, Guo, and Wang 2021); (c) demoiréing results by cyclic learning (Park et al. 2022); (d) demoiréing results by our UnDeM; (e) moiré-free images. For the convenience of demonstration, we crop patches from the test images.

As shown in Figure 7b, the demoiréing results of shooting simulation exhibit unnaturally high brightness, leading to a loss of image detail. This decrease in visual quality can be blamed on the generally darker brightness of shooting simulation, as shown in Fig.
2b, which makes the demoiréing model learn an incorrect brightness relationship between the moiré and moiré-free images. As presented in Fig. 7c, the demoiréing model fails to remove moiré because cyclic learning cannot model the moiré patterns, as illustrated in Fig. 2c. The results in Fig. 7d demonstrate the efficacy of UnDeM in removing moiré patterns, reflecting the fact that UnDeM successfully models the moiré patterns.

Conclusion

In this paper, we present UnDeM, which performs real-image demoiréing using unpaired real data in a learning-based manner. We synthesize pseudo moiré images to form paired data for training off-the-shelf demoiréing models. The proposed UnDeM contains three steps: image preprocessing, a moiré generation network, and adaptive denoise. The image preprocessing crops the real moiré images into multiple sub-image patches and sorts them into four groups according to moiré pattern complexity. The moiré generation network synthesizes a pseudo moiré image that has the moiré features of its input real moiré image and the image details of its input moiré-free image. The adaptive denoise rules out low-quality synthetic moiré images to avoid their adverse effects on the learning of demoiréing models. UnDeM is demonstrated to improve the quality of synthetic images, and the demoiréing models trained on these images are experimentally shown to be superior in performance.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)

Acknowledgments

This work was supported by the National Key R&D Program of China (No.2022ZD0118202), the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No. U21B2037, No. U22B2051, No. 62176222, No. 62176223, No. 62176226, No. 62072386, No. 62072387, No. 62072389, No. 62002305 and No.
62272401), and the Natural Science Foundation of Fujian Province of China (No. 2021J01002, No. 2022J06001).
References
Cheng, X.; Fu, Z.; and Yang, J. 2019. Multi-scale dynamic feature encoding network for image demoiréing. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 3486–3493.
Cho, T. S.; Zitnick, C. L.; Joshi, N.; Kang, S. B.; Szeliski, R.; and Freeman, W. T. 2011. Image restoration by matching gradient distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 34: 683–694.
Dabov, K.; Foi, A.; Katkovnik, V.; and Egiazarian, K. 2007. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing (TIP), 16: 2080–2095.
Gao, T.; Guo, Y.; Zheng, X.; Wang, Q.; and Luo, X. 2019. Moiré pattern removal with multi-scale feature enhancing network. In Proceedings of the IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 240–245.
Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative Adversarial Nets. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2672–2680.
Hasler, D.; and Suesstrunk, S. E. 2003. Measuring colorfulness in natural images. In Human Vision and Electronic Imaging VIII, volume 5007, 87–95.
He, B.; Wang, C.; Shi, B.; and Duan, L.-Y. 2019. Mop moiré patterns using MopNet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2424–2432.
He, B.; Wang, C.; Shi, B.; and Duan, L.-Y. 2020. FHDe2Net: Full high definition demoiréing network. In Proceedings of the European Conference on Computer Vision (ECCV), 713–729.
Hu, X.; Jiang, Y.; Fu, C.-W.; and Heng, P.-A. 2019. Mask-ShadowGAN: Learning to remove shadows from unpaired data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2472–2481.
Isola, P.; Zhu, J.-Y.; Zhou, T.; and Efros, A. A. 2017. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1125–1134.
Kingma, D. P.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations (ICLR).
Liu, B.; Shu, X.; and Wu, X. 2018. Demoiréing of Camera-Captured Screen Images Using Deep Convolutional Neural Network. arXiv preprint arXiv:1804.03809.
Liu, F.; Yang, J.; and Yue, H. 2015. Moiré pattern removal from texture images via low-rank and sparse matrix decomposition. In IEEE Visual Communications and Image Processing (VCIP), 1–4.
Liu, L.; Liu, J.; Yuan, S.; Slabaugh, G.; Leonardis, A.; Zhou, W.; and Tian, Q. 2020. Wavelet-based dual-branch network for image demoiréing. In Proceedings of the European Conference on Computer Vision (ECCV), 86–102.
Liu, Z.; Yin, H.; Wu, X.; Wu, Z.; Mi, Y.; and Wang, S. 2021. From shadow generation to shadow removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4927–4936.
Mao, X.; Li, Q.; Xie, H.; Lau, R. Y.; Wang, Z.; and Paul Smolley, S. 2017. Least squares generative adversarial networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2794–2802.
Marr, D.; and Hildreth, E. 1980. Theory of edge detection. Proceedings of the Royal Society of London. Series B. Biological Sciences, 207: 187–217.
Niu, D.; Guo, R.; and Wang, Y. 2021. Moiré Attack (MA): A New Potential Risk of Screen Photos. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 26117–26129.
Park, H.; Vien, A. G.; Kim, H.; Koh, Y. J.; and Lee, C. 2022. Unpaired screen-shot image demoiréing with cyclic moiré learning. IEEE Access, 10: 16254–16268.
Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 8026–8037.
Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115: 211–252.
Siddiqui, H.; Boutin, M.; and Bouman, C. A. 2009. Hardware-friendly descreening. IEEE Transactions on Image Processing (TIP), 19: 746–757.
Sun, B.; Li, S.; and Sun, J. 2014. Scanned image descreening with image redundancy and adaptive filtering. IEEE Transactions on Image Processing (TIP), 23: 3698–3710.
Sun, Y.; Yu, Y.; and Wang, W. 2018. Moiré photo restoration using multiresolution convolutional neural networks. IEEE Transactions on Image Processing (TIP), 27: 4160–4172.
Ulyanov, D.; Vedaldi, A.; and Lempitsky, V. 2016. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022.
Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing (TIP), 13: 600–612.
Yang, J.; Liu, F.; Yue, H.; Fu, X.; Hou, C.; and Wu, F. 2017a. Textured image demoiréing via signal decomposition and guided filtering. IEEE Transactions on Image Processing (TIP), 26: 3528–3541.
Yang, J.; Zhang, X.; Cai, C.; and Li, K. 2017b. Demoiréing for screen-shot images with multi-channel layer decomposition. In IEEE Visual Communications and Image Processing (VCIP), 1–4.
Yu, X.; Dai, P.; Li, W.; Ma, L.; Shen, J.; Li, J.; and Qi, X. 2022. Towards efficient and scale-robust ultra-high-definition image demoiréing. In Proceedings of the European Conference on Computer Vision (ECCV), 646–662.
Yuan, S.; Timofte, R.; Slabaugh, G.; Leonardis, A.; Zheng, B.; Ye, X.; Tian, X.; Chen, Y.; Cheng, X.; Fu, Z.; et al. 2019. AIM 2019 challenge on image demoireing: Methods and results. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 3534–3545.
Yue, H.; Cheng, Y.; Liu, F.; and Yang, J. 2021. Unsupervised moiré pattern removal for recaptured screen images. Neurocomputing, 456: 352–363.
Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 586–595.
Zhang, Y.; Lin, M.; Li, X.; Liu, H.; Wang, G.; Chao, F.; Ren, S.; Wen, Y.; Chen, X.; and Ji, R. 2023. Real-Time Image Demoireing on Mobile Devices. In Proceedings of the International Conference on Learning Representations (ICLR).
Zheng, B.; Yuan, S.; Slabaugh, G.; and Leonardis, A. 2020. Image demoireing with learnable bandpass filters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3636–3645.
Zheng, B.; Yuan, S.; Yan, C.; Tian, X.; Zhang, J.; Sun, Y.; Liu, L.; Leonardis, A.; and Slabaugh, G. 2021. Learning frequency domain priors for image demoireing. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 44: 7705–7717.
Lifting by Image - Leveraging Image Cues for Accurate 3D Human Pose Estimation Feng Zhou, Jianqin Yin*, Peiyang Li School of Artificial Intelligence, Beijing University of Posts and Telecommunications, China {zhoufeng, jqyin, lipeiyang}@bupt.edu.cn Abstract The "lifting from 2D pose" method has been the dominant approach to 3D Human Pose Estimation (3DHPE) due to the powerful visual analysis ability of 2D pose estimators. As is widely known, there exists a depth ambiguity problem when estimating solely from 2D pose, where one 2D pose can be mapped to multiple 3D poses. Intuitively, the rich semantic and texture information in images can contribute to a more accurate "lifting" procedure. Yet, existing research encounters two primary challenges. Firstly, the distribution of image data in 3D motion capture datasets is too narrow because of the laboratory environment, which leads to poor generalization ability of methods trained with image information. Secondly, effective strategies for leveraging image information are lacking. In this paper, we give new insight into the cause of the poor generalization problem and the effectiveness of image features. Based on that, we propose an advanced framework. Specifically, the framework consists of two stages. First, we enable the keypoints to query and select the beneficial features from all image patches. To reduce the keypoints' attention to inconsequential background features, we design a novel Pose-guided Transformer Layer, which adaptively limits the updates to unimportant image patches. Then, through a designed Adaptive Feature Selection Module, we prune less significant image patches from the feature map. In the second stage, we allow the keypoints to further emphasize the retained critical image features. This progressive learning approach prevents further training on insignificant image features.
Experimental results show that our model achieves state-of-the-art performance on both the Human3.6M dataset and the MPI-INF-3DHP dataset. Introduction Monocular 3D Human Pose Estimation (3DHPE) aims to estimate the relative 3D coordinates of human joints from an image. It is a fundamental computer vision task related to a wide range of applications, including human motion forecasting (Ding and Yin 2022; Liu et al. 2020), human action recognition (Dang, Yang, and Yin 2020), human-centric generation (Cao et al. 2023b, 2022, 2023a), and so on. In recent years, 3D human pose estimation has been dominated by the "lifting" technique (Martinez et al. 2017).
*Corresponding author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: The main idea of this paper is to design a framework that enables 2D poses to regress 3D poses by querying information from the image. The framework is specifically designed based on two key insights: First, excessive attention to dataset-biased background information leads to poor generalization ability. Second, it is not only the image features corresponding to the keypoints that are helpful for the task, but also the associated body structural positions of the keypoints that can provide valuable assistance.
This approach consists of two stages. First, utilize off-the-shelf 2D pose estimators (Sun et al. 2019; Newell, Yang, and Deng 2016; Dang et al. 2022) to estimate the 2D pose from the image, and then regress the 3D pose from the obtained 2D human pose. Compared to direct estimation, this cascaded approach has the following advantages: the 2D estimator is trained on more diverse and extensive 2D human pose datasets, which enables stronger visual perception and generalization ability (Martinez et al. 2017).
Besides, the "lifting" can be trained with infinite 2D-3D pairs by setting different camera views (Xu et al. 2021). Nevertheless, estimating 3D pose from 2D pose introduces the depth ambiguity problem: one 2D pose can be mapped to multiple 3D poses. Intuitively, rich texture and semantic information in images can assist in regressing a more accurate 3D pose from the 2D pose. There has been some exploration in this direction. For example, Nie, Wei, and Zhu; Xu et al. segment image patches around keypoint locations to aid in generating the 3D pose.
Figure 2: Cross-dataset evaluation between a straightforward image-based model and our approach. With continuous training on Human3.6M, the former's accuracy on 3DHP decreased, highlighting poor generalization ability.
Likewise, Zhao et al.; Liu et al. introduced a method of superimposing image features extracted around keypoints' positions onto 2D keypoints to offer complementary information to the network. Yet despite the considerable progress, some issues remain to be addressed. Firstly, because 3D human motion datasets were primarily captured in constrained laboratory environments, the distribution of image data is limited. Consequently, methods that are trained with image information tend to suffer from poor generalization ability, as shown in Fig 2. Additionally, effective strategies for leveraging image information are lacking. This paper gives a novel insight into the cause of weak generalization ability and the specific effectiveness of image information in predicting the 3D pose. Based on that, we propose a novel framework that estimates 3D human pose from 2D pose by leveraging effective image cues, as shown in Fig 1. To begin, we utilize the attention mechanism (Vaswani et al. 2017) to study the response of human keypoints to the image features.
By analyzing the attention maps, we derived two noteworthy insights: 1. In general, for all keypoints, the attention maps exhibit a high-proportion, wide-range, and indiscriminate emphasis on irrelevant background information outside the human body. This may shed light on the weak generalization ability of image-based methods, as they overly focus on dataset-biased information. 2. For a specific keypoint, its required image features are not confined solely to its own location in the image. Instead, the required positions also encompass body structure positions that provide depth information for that keypoint. For instance, the features of the elbow can be instrumental in estimating the depth of the wrist keypoint. This underscores the constraints of previous methods that exclusively concatenate localized image patches or features around keypoints (Nie, Wei, and Zhu 2017; Zhao et al. 2019; Liu et al. 2019). Based on these understandings, we propose a novel 3D pose estimation framework. The key concept is to allow the keypoints to adaptively focus on critical image features. To give an overview, the progressive learning framework consists of two stages. In the first stage, called "Broad Query", we enable the keypoints to query and select the beneficial features from all image patches. Then we prune the irrelevant image features (mostly background features). Finally, in the second stage, called "Focused Exploration", we allow the keypoints to further explore information from these critical image features to obtain an accurate 3D pose. Specifically, in Stage 1, we introduce a Pose-guided Transformer Layer, which effectively reduces the keypoints' attention to the background. It leverages the pose-to-image attention matrix to allow the image features to reversely query and aggregate the keypoint features.
Through our design, the more crucial image features can extract more relevant information from the keypoints, while less important image features, like background patches, receive comparatively less information. Then we propose an Adaptive Feature Selection Module, which aims to rank and prune the less important image features via the attention mechanism. In Stage 2, the keypoints are allowed to refocus on critical human image features through several Transformer Layers. Through this cascaded approach, the keypoints are empowered to dynamically explore critical features broadly, while over-training on the background features is prevented. We demonstrate quantitative results by evaluating our method on standard 3D human pose benchmarks. Experimental results show that our method achieves state-of-the-art performance on Human3.6M (Ionescu et al. 2013) and MPI-INF-3DHP (Mehta et al. 2017). Notably, our method not only significantly improves the accuracy of single-frame 3D pose estimation but even outperforms 3D pose estimation networks based on temporal information. Our contribution can be summarized as follows: • We propose two novel insights about 3DHPE methods involving image information. For one thing, overly focusing on the dataset-biased background leads to poor generalization ability. For another, valuable image patches for estimating a specific keypoint's 3D coordinates are not confined to its exact image location; they extend to areas with structurally related positions. • We propose a 3DHPE framework leveraging effective image features in two stages: broad query followed by focused exploration. It not only enables keypoints to determine all the necessary image features but also prevents excessive training on the background, thus improving generalization. • We propose a novel Pose-guided Transformer Layer that effectively improves the keypoints' ability to attend to significant features.
Besides, we propose an Adaptive Feature Selection Module, which adaptively stops irrelevant image features from further training. Related Work In the past few years, there has been extensive research on deep-learning-based algorithms for monocular 3D human pose estimation. Methods that directly regress the 3D pose from the image were popular in the early stages (Li and Chan 2015).
Figure 3: Examples of attention map visualization of all keypoints on image features in two datasets (H3.6M and 3DHP).
Traindata | Testdata | MPJPE↓ | Background Attention
H3.6M | H3.6M | 30.4 | 73%
H3.6M | 3DHP | 74.2 | 75%
Table 1: Comparison of different test datasets and the observed issues of excessive attention to the background and poor generalization.
However, these approaches suffered from limited performance due to their reliance on training and testing within the constraints of 3D motion capture data (Xu et al. 2021). To address this limitation, the "lifting" method emerged as the dominant approach, offering better solutions to the problem. "Lifting" Based 3D Human Pose Estimation "Lifting" based approaches leverage off-the-shelf 2D human pose estimators trained on larger and more diverse 2D datasets. By adopting this, the process of 3D human pose estimation is simplified to lifting the 2D pose to the 3D pose without image participation. Martinez et al. first proposed a fully connected residual network in this approach. To handle the issue of depth ambiguity in the lifting process, some methods have leveraged temporal information (Pavllo et al. 2019; Chen et al. 2021) or proposed models with multiple hypotheses (Li and Lee 2019; Li et al. 2022). Fusion Approach Apart from the two mainstream approaches, there exist some methods that combine 2D pose with image information. Despite the remarkable attempts made by these methods, they still exhibit certain limitations.
For example, some methods did not leverage off-the-shelf 2D pose estimators to generate the 2D pose (Zhao et al. 2019; Liu et al. 2019). These approaches not only add a burden to the network but also fail to leverage the benefits of 2D estimators mentioned before. Besides, some methods employ rudimentary approaches to integrate image information. For instance, Nie, Wei, and Zhu; Xu et al. segment image patches around keypoint positions to assist in generating the 3D pose. Similarly, Zhao et al.; Liu et al. overlay image features extracted from keypoint positions onto 2D keypoints. Nevertheless, the insight proposed in the next section shows that this local concatenation approach might be ineffective. Moreover, Zhou et al.; Gong et al. utilize 2D keypoint heatmaps on the image to provide extra information.
Figure 4: Visualization examples of heatmaps depicting the attention of specific keypoints (knee, ankle, wrist, and elbow).
Indeed, a heatmap only contains limited information, which may not be sufficient to accurately regress 3D poses. Insight of Image Effect on 3DHPE In this section, we study the roles and limitations of image features in estimating the 3D pose using attention mechanisms. The keypoint-to-image attention map represents which image patches offer beneficial information for estimating the 3D coordinates of that keypoint. Background Overfitting Given the task of estimating the relative coordinates of human keypoints, the presence of non-contact backgrounds in 3D capture-environment datasets can be considered a form of dataset-biased noise. When we visualized the average attention maps of keypoints on the images, we observed a wide-ranging and indiscriminate focus on background features, as shown in Fig 3. This reveals the model's overfitting to background information, which could be a potential cause of the poor generalization of image-based models. We further quantified the proportion of attention on background features and found very high proportions on both datasets (73% and 75%), as shown in Table 1. Structural Assistance Logically speaking, for a specific human body keypoint in the image, the keypoint's own image features can only provide its 2D coordinates in the image. However, the relative depth coordinate with respect to the pelvis point requires prior knowledge derived from combining other human structure features. We present the attention of specific keypoints on image features, with examples shown in Fig 4. Not surprisingly, our findings lead to the conclusion that not only the image features at a keypoint's own location are required; the required features extend to body structure positions that provide depth information for that keypoint. For instance, the knee keypoint gives attention to the ankles, the wrist keypoint gives attention to the elbows and shoulders, and the elbow keypoint gives attention to the shoulders. Hence, previous methods have been mistaken in their assumption that only concatenating image patches or features around keypoints is sufficient.
Figure 5: The overview of the proposed network (Stage 1: Broad Query; Stage 2: Focused Exploration).
Method In this section, we provide a detailed description of the proposed framework, as illustrated in Fig 5.
Given a 2D pose J_2d ∈ R^{N×2}, our method aims to reconstruct the 3D pose J_3d ∈ R^{N×3} by effectively leveraging the information from a cropped image I ∈ R^{h×w×3}, where N is the number of keypoints and h, w is the input image size. To accomplish this, we propose a progressive training framework. It consists of two stages. In Stage 1, we allow the keypoints to query beneficial information from all image features under coarse pose supervision until convergence. Subsequently, to counteract the detriment of the background features' excessive training on generalization, we employ an Adaptive Feature Selection Module to prune less crucial image features. Then, in Stage 2, keypoints exclusively query the preserved image features to generate a refined pose. Stage 1: Broad Query The image I is fed into an image encoder pretrained on 2D pose estimation, resulting in features F_I ∈ R^{H×W×d}, which are flattened into tokens T_I ∈ R^{HW×d} with sequence length HW and dimension d. Similarly, the 2D pose J_2d is transformed to pose tokens T_P ∈ R^{N×d} by linear projection. Subsequently, the image tokens and keypoint tokens are fed into three consecutive Transformer Layers. Situated in the middle is the specially crafted Pose-guided Transformer Layer, intended to selectively enhance image tokens while diminishing the keypoints' focus on irrelevant image tokens. The resulting keypoint tokens are then projected linearly to generate the coarse 3D pose denoted as J_3d1 ∈ R^{J×3}. The Transformer Layer consists of three consecutive modules: Multi-head Self-Attention (MSA), Multi-head Cross-Attention (MCA), and a Feed Forward Network (FFN). Multi-head Attention can be formulated as:
A(Q, K, V) = softmax(Q · K^T / √d) · V    (1)
In MSA, keypoint tokens are linearly mapped to Queries Q ∈ R^{N×d}, Keys K ∈ R^{N×d}, and Values V ∈ R^{N×d}. Similarly, in MCA, keypoint tokens are linearly mapped to Queries Q ∈ R^{N×d}, while image tokens are linearly mapped into Keys K ∈ R^{HW×d} and Values V ∈ R^{HW×d}. Pose-guided Transformer Layer.
We designed a pose-guided dual attention structure that effectively reduces the keypoints' attention to the background. It leverages the pose-to-image attention matrix to allow the image features to reversely query and aggregate the keypoint features. By our design, the more crucial image patches can obtain more information from the keypoint features. Specifically, the novel attention mechanism produces two outputs: keypoint tokens T̂_J and enhanced image tokens T̂_I. The update of image tokens is influenced by the update of keypoint tokens through the attention map A. The formulations are as follows:
A = softmax(Q · K^T / √d)
T̂_J = A · V_I + T_J
T̂_I = A^T · V_J + T_I    (2)
Figure 6: Details of the Adaptive Feature Selection Module.
Similarly, keypoint tokens are linearly mapped to Queries Q ∈ R^{N×d}, and image tokens are linearly mapped into Keys K ∈ R^{HW×d}. V_I ∈ R^{HW×d} and V_J ∈ R^{N×d} are Values linearly mapped from image tokens and keypoint tokens, respectively. The attention map A ∈ R^{N×HW} represents the weighting that keypoint tokens assign to image tokens, and it is normalized using the softmax function. Similarly, the transposed attention map A^T ∈ R^{HW×N} represents the weights image tokens assign to keypoint tokens. Because of the normalization before transposition, intuitively, the image tokens deemed more important receive a greater overall weight from the keypoint tokens. This signifies that image tokens with higher significance can gather more information from keypoint tokens, while less significant image tokens (typically background tokens) collect limited information.
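To make the asymmetry of Eq. (2) concrete, here is a minimal single-head NumPy sketch of the dual attention; the random matrices stand in for the learned linear projections, and all names (`W_Q`, `pose_guided_dual_attention`, etc.) are ours for illustration, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pose_guided_dual_attention(T_J, T_I, d):
    """Single-head sketch of Eq. (2): one attention map A drives both updates.

    T_J: keypoint tokens (N, d); T_I: image tokens (HW, d).
    """
    rng = np.random.default_rng(0)
    W_Q, W_K, W_VI, W_VJ = (rng.standard_normal((d, d)) * 0.1 for _ in range(4))
    Q = T_J @ W_Q          # queries from keypoint tokens   (N, d)
    K = T_I @ W_K          # keys from image tokens         (HW, d)
    V_I = T_I @ W_VI       # values from image tokens       (HW, d)
    V_J = T_J @ W_VJ       # values from keypoint tokens    (N, d)

    A = softmax(Q @ K.T / np.sqrt(d), axis=-1)   # (N, HW), each row sums to 1
    T_J_new = A @ V_I + T_J        # keypoints aggregate image features
    T_I_new = A.T @ V_J + T_I      # image tokens reversely aggregate keypoints
    return T_J_new, T_I_new, A
```

Because each row of A is normalized before transposition, a column of A^T belonging to an image token that many keypoints attend to carries a large total weight, which is exactly why important patches collect more keypoint information.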
We choose to replace only the second Transformer Layer with the proposed Pose-guided Transformer Layer for the following reasons: before guiding the update of image features, the keypoint tokens require an initial perception of the image features (provided by the first Transformer Layer) to evaluate their significance. The final layer cannot be replaced, as the update of image tokens lacks direct supervision in Stage 1. Adaptive Feature Selection Module To prevent less significant image tokens from further training and to aggregate critical image tokens, we propose an Adaptive Feature Selection Module, with details shown in Fig. 6. We leverage the attention map from the last Transformer Layer to rank the importance of image tokens. For simplicity, we aggregate the attention weights of all keypoints on image features and set a retention rate, denoted as r (0 < r < 1). The top r × HW image tokens with the highest weights are retained. Stage 2: Focused Exploration In this stage, we allow the keypoint tokens to further mine information from the selected critical image tokens and generate a refined 3D pose J_3d2 ∈ R^{J×3}. Specifically, we freeze the weights trained in the former stage and feed the keypoint tokens and selected image tokens into a new Transformer Block consisting of several Transformer Layers. Then, the output keypoint tokens are projected to a refined 3D pose J_3d2 by linear projection. Loss Function Our model is trained with the Mean Squared Error (MSE) loss:
L = Σ_{i=1}^{J} ∥Y_i − Ŷ_i∥²    (3)
where Y_i and Ŷ_i represent the predicted and ground-truth 3D pose of joint i, respectively. Experiments Datasets and Evaluation Metrics We evaluate our method on two widely-used datasets for 3DHPE: Human3.6M (Ionescu et al. 2013) and MPI-INF-3DHP (Mehta et al. 2017). Human3.6M (H3.6M) is the largest and most representative benchmark for 3DHPE. Following Martinez et al., we use subjects S1, S5, S6, S7, and S8 for training, and S9 and S11 for testing.
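The Adaptive Feature Selection step described above reduces to ranking image tokens by their aggregated attention and keeping the top r × HW of them. A minimal NumPy sketch (function and variable names are illustrative, not from the authors' code):

```python
import numpy as np

def select_image_tokens(image_tokens, attn, r=0.3):
    """Adaptive Feature Selection sketch: keep the top r*HW image tokens
    ranked by total attention received from all keypoints.

    image_tokens: (HW, d); attn: (N, HW) keypoint-to-image attention map.
    """
    scores = attn.sum(axis=0)                    # aggregate over keypoints -> (HW,)
    k = max(1, int(round(r * image_tokens.shape[0])))
    keep = np.argsort(scores)[::-1][:k]          # indices of the top-k tokens
    return image_tokens[keep], keep

# Toy usage: 10 image tokens of dimension 4, 3 keypoints.
tokens = np.arange(10 * 4).reshape(10, 4).astype(float)
attn = np.zeros((3, 10))
attn[:, [2, 5, 7]] = 1.0        # all keypoints attend to patches 2, 5, 7
kept, idx = select_image_tokens(tokens, attn, r=0.3)
```

With the retention rate r = 0.3 used in the implementation details, 70% of the image tokens (mostly background) are excluded from further training in Stage 2.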
We down-sampled the original frame rate from 50 fps to 5 fps for faster training. The Mean Per Joint Position Error (MPJPE) is computed under two protocols: Protocol 1 computes the MPJPE between the ground truth and the estimated 3D poses after aligning their root (pelvis) keypoints; Protocol 2 is the MPJPE after aligning the estimated 3D pose with the ground truth using translation, rotation, and scale (P-MPJPE). MPI-INF-3DHP (3DHP) provides monocular videos of six subjects acting in three different scenes, including indoors and outdoors. This dataset is often used to evaluate the generalization performance of different models. Following convention, we directly apply our model trained on the H3.6M dataset to this dataset without fine-tuning. We report results using three metrics: Mean Per Joint Position Error (MPJPE), Percentage of Correctly estimated Keypoints (PCK) with a threshold of 150 mm, and Area Under the Curve (AUC) over a range of PCK thresholds. Implementation Details We take HRNet-w32 as our backbone with input size 256 × 192, pretrained on the MS COCO 2017 dataset (Lin et al. 2014) as provided by Sun et al. The retention rate r is set to 0.3. The number of Transformer Layers in Stage 2 is set to 3. For a fair comparison, following previous work (Pavllo et al. 2019; Martinez et al. 2017), we obtain 2D pose detections with the cascaded pyramid network (CPN) (Chen et al. 2018) and the stacked hourglass network (SH) (Newell, Yang, and Deng 2016). We use the ground-truth bounding boxes. Our model is implemented in PyTorch and optimized via Adam. All experiments are conducted on two NVIDIA RTX 3090 GPUs. The initial learning rate is set to 0.001 and decayed by a factor of 0.9 every 4 epochs, with a batch size of 128. We first train the initial interaction stage and the image encoder for 20 epochs, then freeze them to train the remaining modules. Comparison with State-of-the-art Methods Results on Human3.6M. The proposed method is compared with the state-of-the-art methods on Human3.6M.
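The two evaluation protocols defined above can be written down directly; the following NumPy sketch implements MPJPE (Protocol 1, root alignment) and P-MPJPE (Protocol 2, similarity Procrustes alignment) as we understand their standard definitions, not as the authors' evaluation code:

```python
import numpy as np

def mpjpe(pred, gt):
    """Protocol 1: root-align both poses at the pelvis (joint 0), then
    average the per-joint Euclidean distances. pred, gt: (J, 3) arrays."""
    pred = pred - pred[0]
    gt = gt - gt[0]
    return np.linalg.norm(pred - gt, axis=1).mean()

def p_mpjpe(pred, gt):
    """Protocol 2: rigidly align pred to gt with a similarity Procrustes fit
    (translation, rotation, scale) before computing the joint error."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    X, Y = pred - mu_p, gt - mu_g
    U, s, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # avoid reflections
        U[:, -1] *= -1
        s[-1] *= -1
        R = U @ Vt
    scale = s.sum() / (X ** 2).sum()
    aligned = scale * X @ R + mu_g
    return np.linalg.norm(aligned - gt, axis=1).mean()
```

Under Protocol 1 a global translation cancels out, while Protocol 2 additionally forgives any global rotation and scale, which is why P-MPJPE numbers are always the lower of the two.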
The results of our model with SH-detected, CPN-detected, and ground-truth 2D poses are reported in Table 2. Our method outperforms all previous state-of-the-art methods by a large margin under Protocol 1 among single-frame methods (methods w/o †). Moreover, our method achieves results comparable to the video-based methods (methods w/ †).

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7636

Method | Dir. | Disc | Eat | Greet | Phone | Photo | Pose | Purch. | Sit | SitD. | Smoke | Wait | WalkD. | Walk | WalkT. | Avg.
Learning (Fang et al. 2018) | 50.1 | 54.3 | 57.0 | 57.1 | 66.6 | 73.3 | 53.4 | 55.7 | 72.8 | 88.6 | 60.3 | 57.7 | 62.7 | 47.5 | 50.6 | 60.4
SemGCN (Zhao et al. 2019)∗ | 47.3 | 60.7 | 51.4 | 60.5 | 61.1 | 49.9 | 47.3 | 68.1 | 86.2 | 55.0 | 67.8 | 61.0 | 42.1 | 60.6 | 45.3 | 57.6
Monocular (Xu et al. 2021)∗ | 47.1 | 52.8 | 54.2 | 54.9 | 63.8 | 72.5 | 51.7 | 54.3 | 70.9 | 85.0 | 58.7 | 54.9 | 59.7 | 43.8 | 47.1 | 58.1
Graformer (Zhao, Wang, and Tian 2022) | 49.3 | 53.9 | 54.1 | 55.0 | 63.0 | 69.8 | 51.1 | 53.3 | 69.4 | 90.0 | 58.0 | 55.2 | 60.3 | 47.4 | 50.6 | 58.7
Ours∗ | 48.3 | 51.5 | 46.1 | 48.5 | 53.7 | 42.8 | 47.3 | 59.9 | 71.0 | 51.6 | 52.7 | 46.1 | 39.8 | 53.0 | 43.9 | 51.0
VideoPose (Pavllo et al. 2019)† | 45.1 | 47.4 | 42.0 | 46.0 | 49.1 | 56.7 | 44.5 | 44.4 | 57.2 | 66.1 | 47.5 | 44.8 | 49.2 | 32.6 | 34.0 | 47.1
GraphSH (Xu and Takano 2021) | 45.2 | 49.9 | 47.5 | 50.9 | 54.9 | 66.1 | 48.5 | 46.3 | 59.7 | 71.5 | 51.4 | 48.6 | 53.9 | 39.9 | 44.1 | 51.9
MGCN (Zou and Tang 2021) | 45.4 | 49.2 | 45.7 | 49.4 | 50.4 | 58.2 | 47.9 | 46.0 | 57.5 | 63.0 | 49.7 | 46.6 | 52.2 | 38.9 | 40.8 | 49.4
MHFormer (Li et al. 2022)† f=243 | 39.2 | 43.1 | 40.1 | 40.9 | 44.9 | 51.2 | 40.6 | 41.3 | 53.5 | 60.3 | 43.7 | 41.1 | 43.8 | 29.8 | 30.6 | 43.0
Pose-Oriented (Li et al. 2023) | 47.9 | 50.0 | 47.1 | 51.3 | 51.2 | 59.5 | 48.7 | 46.9 | 56.0 | 61.9 | 51.1 | 48.9 | 54.3 | 40.0 | 42.9 | 50.5
diffPose (Gong et al. 2023) | 42.8 | 49.1 | 45.2 | 48.7 | 52.1 | 63.5 | 46.3 | 45.2 | 58.6 | 66.3 | 50.4 | 47.6 | 52.0 | 37.6 | 40.2 | 49.7
Ours∗ | 44.9 | 46.4 | 42.4 | 44.9 | 48.7 | 40.1 | 44.3 | 55.0 | 58.9 | 47.1 | 48.2 | 42.6 | 36.9 | 48.8 | 40.1 | 46.4
SemGCN (Zhao et al. 2019)∗ | 37.8 | 49.4 | 37.6 | 40.9 | 45.1 | 41.4 | 40.1 | 48.3 | 50.1 | 42.2 | 53.5 | 44.3 | 40.5 | 47.3 | 39.0 | 43.8
VideoPose (Pavllo et al. 2019)† | 37.2
GraphSH (Xu and Takano 2021) | 35.8 | 38.1 | 31.0 | 35.3 | 35.8 | 43.2 | 37.3 | 31.7 | 38.4 | 45.5 | 35.4 | 36.7 | 36.8 | 27.9 | 30.7 | 35.8
Graformer (Zhao, Wang, and Tian 2022) | 32.0 | 38.0 | 30.4 | 34.4 | 34.7 | 43.3 | 35.2 | 31.4 | 38.0 | 46.2 | 34.2 | 35.7 | 36.1 | 27.4 | 30.6 | 35.2
MHFormer (Li et al. 2022)† f=243 | 27.7 | 32.1 | 29.1 | 28.9 | 30.0 | 33.9 | 33.0 | 31.2 | 37.0 | 39.3 | 30.0 | 31.0 | 29.4 | 22.2 | 23.0 | 30.5
Pose-Oriented (Li et al. 2023) | 32.9 | 38.3 | 28.3 | 33.8 | 34.9 | 38.7 | 37.2 | 30.7 | 34.5 | 39.7 | 33.9 | 34.7 | 34.3 | 26.1 | 28.9 | 33.8
diffPose (Gong et al. 2023) | 28.8 | 32.7 | 27.8 | 30.9 | 32.8 | 38.9 | 32.2 | 28.3 | 33.3 | 41.0 | 31.0 | 32.1 | 31.5 | 25.9 | 27.5 | 31.6
Ours∗ | 29.5 | 30.1 | 25.0 | 29.0 | 28.5 | 28.6 | 26.9 | 30.5 | 31.1 | 27.7 | 32.4 | 27.7 | 24.8 | 30.0 | 25.9 | 28.6
Table 2: Quantitative comparison with the state-of-the-art methods on Human3.6M under Protocol 1, using SH (Newell, Yang, and Deng 2016) detected 2D poses (top), CPN (Chen et al. 2018) detected 2D poses (middle), and ground-truth 2D poses (bottom) as inputs. (†) uses temporal information; (∗) uses image information. Bold: best; underlined: second best.

Results on MPI-INF-3DHP. To assess generalization ability, we directly apply our method, trained on H3.6M, to the 3DHP dataset. Table 3 shows the results of different methods with 2D ground-truth input. Our approach achieves the best performance on all metrics (PCK, AUC, and MPJPE), which underscores its strong generalization ability.

Method | PCK ↑ | AUC ↑ | MPJPE ↓
Simple (Martinez et al. 2017) | 82.6 | 50.2 | 88.6
Cascaded (Li et al. 2020) | 81.2 | 46.1 | 99.7
MGCN (Zou and Tang 2021) | 86.1 | 53.7 | -
Pose-Oriented (Li et al. 2023) | 84.1 | 53.7 | -
Ours | 88.2 | 59.3 | 68.9
Table 3: Quantitative comparison with the state-of-the-art methods on MPI-INF-3DHP. Best in bold.

Qualitative Results. Figure 7 shows qualitative results on the H3.6M dataset compared with Graformer (Zhao, Wang, and Tian 2022) with 2D ground-truth input. Our results are almost identical to the ground truth for easy cases.
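The 3DHP metrics (PCK and AUC) reported in Table 3 can be sketched as follows; a minimal NumPy sketch, assuming errors are in millimetres and that AUC averages PCK over thresholds from 0 to 150 mm in 5 mm steps (the usual 3DHP convention, not stated explicitly in the paper).

```python
import numpy as np

def pck(pred, gt, thresh=150.0):
    """Percentage of keypoints whose 3D error is below `thresh` (mm).
    pred, gt: (N, J, 3) arrays of poses in millimetres."""
    err = np.linalg.norm(pred - gt, axis=-1)   # (N, J) per-joint errors
    return 100.0 * (err < thresh).mean()

def auc(pred, gt, thresholds=np.arange(0.0, 151.0, 5.0)):
    """Area under the PCK curve, i.e. PCK averaged over a range of
    thresholds (assumed here: 0-150 mm in 5 mm steps)."""
    return float(np.mean([pck(pred, gt, t) for t in thresholds]))
```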
For hard cases, compared with the baseline, our results effectively reduce the gap between the estimated poses and the ground truth.

Ablation Study and Discussion
Effect of the Progressive Learning Strategy. We first compare the Coarse Pose and Refined Pose results on both H3.6M and 3DHP in Table 4.

Strategy | Human3.6M | 3DHP
end-to-end w/ fine supervision | 32.6 | 76.4
end-to-end w/ both supervision | 30.2 | 72.4
Coarse Pose | 29.2 | 70.9
Refined Pose | 28.6 | 68.9
Table 4: Ablation study on progressive learning.

Method | Dataset | MPJPE | Background Attention
w/o PGTL | H3.6M | 30.4 | 73%
w/o PGTL | 3DHP | 74.2 | 75%
w/ PGTL | H3.6M | 29.2 | 60%
w/ PGTL | 3DHP | 70.9 | 63%
Table 5: Effect of the Pose-guided Transformer Layer (PGTL). MPJPE is reported for the Coarse Pose.

It can be seen that Stage 2 brings a 0.6 mm improvement on Human3.6M and a 2.0 mm improvement on 3DHP. To verify that the improvement is not simply due to the increased model parameters, we retrain our method end to end, either without coarse-pose supervision or with both supervisions. The performance drops by 4.0 mm or 1.6 mm on Human3.6M, and by 7.5 mm or 3.5 mm on 3DHP, respectively. To demonstrate that our method improves generalization, we compare the test curves of end-to-end training and progressive training on the 3DHP dataset, as shown in Fig. 8. When training only on key features (Stage 2), the test accuracy on the 3DHP dataset continues to improve (lower MPJPE). This demonstrates that preventing excessive training on background features enhances generalization.

Figure 7: Qualitative results on Human3.6M. Green lines represent our results, blue lines the baseline (Graformer), and red lines the ground truth. Easy case examples are shown at the top, and hard case examples at the bottom.
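The progressive schedule (train the initial interaction stage and image encoder first, then freeze them and train the remaining Stage 2 modules) can be sketched as below. This is a framework-agnostic toy sketch; the module names are hypothetical, and `trainable` stands in for `param.requires_grad` in a real PyTorch training loop.

```python
class Module:
    """Stand-in for a network sub-module with a trainability flag."""
    def __init__(self, name):
        self.name, self.trainable = name, True

def set_trainable(modules, flag):
    for m in modules:
        m.trainable = flag          # mirrors requires_grad_(flag) in PyTorch

image_encoder = Module("hrnet_w32")       # pretrained backbone
stage1 = Module("initial_interaction")    # broad query over image features
stage2 = Module("pose_guided_refinement") # Stage 2 refinement modules

# Phase 1: train Stage 1 and the image encoder (coarse-pose supervision).
set_trainable([image_encoder, stage1], True)
set_trainable([stage2], False)

# Phase 2: freeze them and train only the remaining (Stage 2) modules.
set_trainable([image_encoder, stage1], False)
set_trainable([stage2], True)
```

In a real implementation, the optimizer for each phase would be built only over the parameters whose `requires_grad` flag is set.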
Figure 8: Effect of the progressive learning strategy on test curves on the 3DHP dataset.

Ablation Study on the Pose-guided Transformer Layer. We then examine the effect of the designed Pose-guided Transformer Layer. To test its ability to reduce the focus of keypoints on background information, we conduct a quantitative analysis. We draw a 30-pixel-radius circle around the position of each keypoint on the attention map and define the areas outside these circles as "background". We then calculate the average attention of the keypoints to the background and perform a statistical analysis on the test set. The results are shown in Table 5. Replacing the Transformer Layer with the Pose-guided Transformer Layer yields a 1.2 mm improvement on H3.6M and a 3.3 mm improvement on 3DHP. More importantly, the attention rate to the background in the first stage drops from 73% to 60% on the H3.6M dataset and from 75% to 63% on the 3DHP dataset, which shows our approach is effective in enhancing the ability of keypoints to perceive critical features. In addition, we perform an ablation study on the number of Pose-guided Transformer Layers, shown in Table 6.

Transformer Layer | Pose-guided Layer | MPJPE
0 | 2 | 30.4
1 | 1 | 29.2
2 | 0 | 31.3
Table 6: Effect of the number of Transformer Layers in Stage 1.

Effect of the Adaptive Feature Selection Module. We then conduct an ablation study on the Adaptive Feature Selection Module. We first test the effect of different retention rates (r) on the results, shown in Table 7. We reach the best result when r is set to 0.3. We further visualize some examples of pruning on feature maps when r is set to 0.3, shown in Fig. 9. The module effectively prunes the background features.

r 0.01 0.3 1
Coarse Pose MPJPE 28.9 28.6 29.0 29.2
Table 7: Ablation study on the retention rate r.

Figure 9: Visualization of the retained features (red pixels).
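The background-attention statistic described above (30-pixel circles around keypoints; everything outside counts as background) can be sketched as follows. A minimal NumPy sketch, not the paper's analysis code; it assumes each keypoint query has one 2D attention map over image pixels.

```python
import numpy as np

def background_attention(attn, keypoints, radius=30):
    """attn: (K, H, W) attention map of each keypoint query over pixels.
    keypoints: (K, 2) array of (x, y) pixel positions.
    Pixels outside `radius`-px circles around all keypoints count as
    background; returns the mean fraction of attention mass on background."""
    K, H, W = attn.shape
    ys, xs = np.mgrid[0:H, 0:W]
    fg = np.zeros((H, W), dtype=bool)
    for x, y in keypoints:                       # union of keypoint circles
        fg |= (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    attn = attn / attn.sum(axis=(1, 2), keepdims=True)   # normalise each map
    return float(attn[:, ~fg].sum(axis=1).mean())        # mean over keypoints
```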
Conclusion
This paper provides new insight into the causes of poor generalization and the effectiveness of image features. Based on that, we propose an advanced 3DHPE framework that leverages effective image cues and improves generalization ability. It comprises two stages: the first involves a broad query for valuable image features, and the second focuses on critical features. To accomplish this, we propose a novel Pose-guided Transformer Layer to reduce the keypoints' attention to the background and an Adaptive Feature Selection Module to prune less significant image features. Extensive experiments show that our method achieves state-of-the-art performance on two widely used benchmark datasets and exhibits strong generalization ability. We hope our exploration can provide insights for future 3DHPE research.

Acknowledgements
This work was supported partly by the National Natural Science Foundation of China (Grant No. 62173045), and the Natural Science Foundation of Hainan Province (Grant No. 622RC675).

References
Cao, P.; Yang, L.; Liu, D.; Liu, Z.; Li, S.; and Song, Q. 2022. Lsap: Rethinking inversion fidelity, perception and editability in gan latent space. arXiv preprint arXiv:2209.12746.
Cao, P.; Yang, L.; Liu, D.; Liu, Z.; Li, S.; and Song, Q. 2023a. What Decreases Editing Capability? Domain-Specific Hybrid Refinement for Improved GAN Inversion. arXiv preprint arXiv:2301.12141.
Cao, P.; Yang, L.; Zhou, F.; Huang, T.; and Song, Q. 2023b. Concept-centric Personalization with Large-scale Diffusion Priors. arXiv preprint arXiv:2312.08195.
Chen, T.; Fang, C.; Shen, X.; Zhu, Y.; Chen, Z.; and Luo, J. 2021. Anatomy-aware 3d human pose estimation with bone-based pose decomposition. IEEE Transactions on Circuits and Systems for Video Technology, 32(1): 198–209.
Chen, Y.; Wang, Z.; Peng, Y.; Zhang, Z.; Yu, G.; and Sun, J. 2018. Cascaded pyramid network for multi-person pose estimation.
In Proceedings of the IEEE conference on computer vision and pattern recognition, 7103–7112.
Dang, Y.; Yang, F.; and Yin, J. 2020. DWnet: Deep-wide network for 3D action recognition. Robotics and Autonomous Systems, 126: 103441.
Dang, Y.; Yin, J.; Zhang, S.; Liu, J.; and Hu, Y. 2022. Learning Human Kinematics by Modeling Temporal Correlations between Joints for Video-based Human Pose Estimation. arXiv preprint arXiv:2207.10971.
Ding, P.; and Yin, J. 2022. Towards more realistic human motion prediction with attention to motion coordination. IEEE Transactions on Circuits and Systems for Video Technology, 32(9): 5846–5858.
Fang, H.-S.; Xu, Y.; Wang, W.; Liu, X.; and Zhu, S.-C. 2018. Learning pose grammar to encode human body configuration for 3d pose estimation. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
Gong, J.; Foo, L. G.; Fan, Z.; Ke, Q.; Rahmani, H.; and Liu, J. 2023. Diffpose: Toward more reliable 3d pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13041–13051.
Ionescu, C.; Papava, D.; Olaru, V.; and Sminchisescu, C. 2013. Human3.6M: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE transactions on pattern analysis and machine intelligence, 36(7): 1325–1339.
Li, C.; and Lee, G. H. 2019. Generating multiple hypotheses for 3d human pose estimation with mixture density network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9887–9895.
Li, H.; Shi, B.; Dai, W.; Zheng, H.; Wang, B.; Sun, Y.; Guo, M.; Li, C.; Zou, J.; and Xiong, H. 2023. Pose-Oriented Transformer with Uncertainty-Guided Refinement for 2D-to-3D Human Pose Estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 1296–1304.
Li, S.; and Chan, A. B. 2015. 3d human pose estimation from monocular images with deep convolutional neural network.
In Computer Vision–ACCV 2014: 12th Asian Conference on Computer Vision, Singapore, Singapore, November 1-5, 2014, Revised Selected Papers, Part II 12, 332–347. Springer.
Li, S.; Ke, L.; Pratama, K.; Tai, Y.-W.; Tang, C.-K.; and Cheng, K.-T. 2020. Cascaded deep monocular 3d human pose estimation with evolutionary training data. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 6173–6183.
Li, W.; Liu, H.; Tang, H.; Wang, P.; and Van Gool, L. 2022. Mhformer: Multi-hypothesis transformer for 3d human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13147–13156.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 740–755. Springer.
Liu, J.; Ding, H.; Shahroudy, A.; Duan, L.-Y.; Jiang, X.; Wang, G.; and Kot, A. C. 2019. Feature boosting network for 3D pose estimation. IEEE transactions on pattern analysis and machine intelligence, 42(2): 494–501.
Liu, X.; Yin, J.; Liu, J.; Ding, P.; Liu, J.; and Liu, H. 2020. Trajectorycnn: a new spatio-temporal feature learning network for human motion prediction. IEEE Transactions on Circuits and Systems for Video Technology, 31(6): 2133–2146.
Martinez, J.; Hossain, R.; Romero, J.; and Little, J. J. 2017. A simple yet effective baseline for 3d human pose estimation. In Proceedings of the IEEE international conference on computer vision, 2640–2649.
Mehta, D.; Rhodin, H.; Casas, D.; Fua, P.; Sotnychenko, O.; Xu, W.; and Theobalt, C. 2017. Monocular 3d human pose estimation in the wild using improved cnn supervision. In 2017 international conference on 3D vision (3DV), 506–516. IEEE.
Newell, A.; Yang, K.; and Deng, J. 2016. Stacked hourglass networks for human pose estimation.
In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VIII 14, 483–499. Springer.
Nie, B. X.; Wei, P.; and Zhu, S.-C. 2017. Monocular 3d human pose estimation by predicting depth on joints. In 2017 IEEE International Conference on Computer Vision (ICCV), 3467–3475. IEEE.
Pavllo, D.; Feichtenhofer, C.; Grangier, D.; and Auli, M. 2019. 3d human pose estimation in video with temporal convolutions and semi-supervised training. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7753–7762.
Sun, K.; Xiao, B.; Liu, D.; and Wang, J. 2019. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5693–5703.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Xu, T.; and Takano, W. 2021. Graph stacked hourglass networks for 3d human pose estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 16105–16114.
Xu, Y.; Wang, W.; Liu, T.; Liu, X.; Xie, J.; and Zhu, S.-C. 2021. Monocular 3d pose estimation via pose grammar and data augmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10): 6327–6344.
Zhao, L.; Peng, X.; Tian, Y.; Kapadia, M.; and Metaxas, D. N. 2019. Semantic graph convolutional networks for 3d human pose regression. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3425–3435.
Zhao, W.; Wang, W.; and Tian, Y. 2022. Graformer: Graph-oriented transformer for 3d pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20438–20447.
Zhou, K.; Han, X.; Jiang, N.; Jia, K.; and Lu, J. 2019.
Hemlets pose: Learning part-centric heatmap triplets for accurate 3d human pose estimation. In Proceedings of the IEEE/CVF international conference on computer vision, 2344–2353.
Zou, Z.; and Tang, W. 2021. Modulated graph convolutional network for 3D human pose estimation. In Proceedings of the IEEE/CVF international conference on computer vision, 11477–11487.
NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models
Gengze Zhou1, Yicong Hong2, Qi Wu1*
1The University of Adelaide 2The Australian National University
{gengze.zhou, qi.wu01}@adelaide.edu.au, mr.yiconghong@gmail.com

Abstract
Trained with an unprecedented scale of data, large language models (LLMs) like ChatGPT and GPT-4 exhibit the emergence of significant reasoning abilities from model scaling. This trend underscores the potential of training LLMs with unlimited language data, advancing the development of a universal embodied agent. In this work, we introduce NavGPT, a purely LLM-based instruction-following navigation agent, to reveal the reasoning capability of GPT models in complex embodied scenes by performing zero-shot sequential action prediction for vision-and-language navigation (VLN). At each step, NavGPT takes the textual descriptions of visual observations, navigation history, and future explorable directions as inputs, reasons about the agent's current status, and makes a decision to approach the target. Through comprehensive experiments, we demonstrate that NavGPT can explicitly perform high-level planning for navigation, including decomposing instructions into sub-goals, integrating commonsense knowledge relevant to navigation task resolution, identifying landmarks from observed scenes, tracking navigation progress, and adapting to exceptions with plan adjustment. Furthermore, we show that LLMs are capable of generating high-quality navigational instructions from observations and actions along a path, as well as drawing an accurate top-down metric trajectory given the agent's navigation history. Although the performance of NavGPT on zero-shot R2R tasks still falls short of trained models, we suggest adapting multi-modality inputs for LLMs to use as visual navigation agents and applying the explicit reasoning of LLMs to benefit learning-based models. Code is available at: https://github.com/GengzeZhou/NavGPT.
Introduction
Amid the remarkable advances in large language model (LLM) training (Touvron et al. 2023; Brown et al. 2020; Chowdhery et al. 2022; Zhang et al. 2022; Wei et al. 2021; Bubeck et al. 2023; OpenAI 2023), we note a shift towards integrating LLMs into embodied robotics tasks such as SayCan (Ahn et al. 2022) and PaLM-E (Driess et al. 2023). This trend stems from two primary considerations: the scale of training data and the scale of models. First, the development of techniques for processing textual information provides an abundant source of natural language training data for learning interdisciplinary and generalizable knowledge. Furthermore, by accessing unlimited language data, significant emergent abilities (Wei et al. 2022a) are observed when scaling up the model, resulting in a remarkable enhancement of reasoning capabilities when solving problems across wide domains. Consequently, training an LLM with unlimited language data is seen as a viable pathway toward realizing a universal embodied agent. This insight has spurred the integration of LLMs into vision-and-language navigation (VLN) (Anderson et al. 2018), an exploratory task toward achieving real-world instruction-following embodied agents. The latest research attempts to leverage GPT models (OpenAI 2023; Brown et al. 2020) to benefit navigation.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: The architecture of NavGPT. NavGPT synergizes reasoning and actions in LLMs to perform zero-shot Vision-and-Language Navigation following navigation system principles. It interacts with different visual foundation models to adapt multi-modality inputs, handles the length of history with a history buffer and a summarizer, and aggregates various sources of information through a prompt manager. NavGPT parses the generated results from LLMs (LLM Thoughts and LLM Action) to move to the next viewpoint.
For example, some works use LLMs as a parser for diverse language input (Shah et al. 2023), extracting landmarks from instructions to support visual matching and planning; others leverage LLMs' commonsense reasoning abilities (Zhou et al. 2023; Dorbala, Mullen Jr, and Manocha 2023) to incorporate prior knowledge of inter-object correlations, extending agents' perception and facilitating decision making. However, we notice that the reasoning ability of LLMs in navigation is still under-explored, i.e., can LLMs understand the interactive world, the actions, and their consequences in text form, and use all this information to solve a navigation task? In light of this, we introduce NavGPT, a fully automatic LLM-based system designed for language-guided visual navigation, with the capability to handle multi-modality inputs, unconstrained language guidance, interaction with an open-world environment, and progress tracking with navigation history. NavGPT perceives the visual world by reading descriptions of observations generated by visual foundation models (VFMs), synergizing Thoughts (reasoning) and Actions (decision making) in an explicit text form. To an extreme extent, we use NavGPT to perform zero-shot VLN to clearly reveal the reasoning process of LLMs during navigation. Through comprehensive experiments, we find that LLMs possess the capability to execute complex navigational planning. This includes the deconstruction of instructions into distinct sub-goals, assimilation of commonsense knowledge pertinent to navigational tasks, identification of landmarks within observed environments, continuous monitoring of navigation progress, and responding to anomalies by modifying the initial plan. This reflects an astonishing reasoning ability in understanding and solving navigation problems.
Furthermore, we show that LLMs have the ability to draw navigation trajectories on a metric map and regenerate navigation instructions based on navigation history, revealing the historical and spatial awareness of LLMs for navigation tasks. However, there remains a significant gap between the zero-shot performance of current open-sourced LLMs on VLN and that of fine-tuned models; the bottleneck of NavGPT lies in the information loss when translating visual signals into natural language and summarizing observations into history. As a result, we suggest that future general VLN agents be built either as LLMs with multi-modality inputs or as navigation systems that make use of the high-level navigation planning and the historical and spatial awareness of LLMs. Our contributions can be summarized as follows: (1) We introduce a novel instruction-following LLM agent for visual navigation with a supportive system to interact with the environment and track navigation history. (2) We investigate the capabilities and limitations of current LLMs' reasoning for making navigation decisions. (3) We reveal the capability of LLMs in high-level planning for navigation by observing the thoughts of LLMs, making the planning process of navigation agents accessible and explainable.

Note that NavGPT is solely powered by off-the-shelf LLMs, without any learnable module or any prior experience in solving interactive navigation. Hence, all navigation tasks defined in this paper are novel to NavGPT.

Related Work
Vision-and-Language Navigation. Language-driven visual navigation is demanded by widely applicable embodied navigation agents. Previous studies show the essential modules needed to achieve such a goal (Anderson et al. 2018; Qi et al. 2020b; Krantz et al. 2020; Ku et al. 2020; He et al. 2021; Gu et al. 2022; Zhu et al. 2022; Hong et al. 2020a, 2022; Zhao, Qi, and Wu 2023; Qiao et al. 2023b), whereas a large body of research reveals the crucial effect of training strategies (Wang et al.
2019; Tan, Yu, and Bansal 2019). Importantly, the main problem in VLN is the generalizability of agents to unseen environments. Data augmentation (Wang et al. 2022; Li, Tan, and Bansal 2022; Tan, Yu, and Bansal 2019; Parvaneh et al. 2020; Li and Bansal 2023), memory mechanisms (Chen et al. 2021b; Pashevich, Schmid, and Sun 2021; Hong et al. 2023), and pre-training (Hao et al. 2020; Chen et al. 2022a; Qiao et al. 2023a; Wang et al. 2023) have been adopted to alleviate data scarcity. However, these augmentations and pre-training schemes are limited to data sampled from a fixed number of scenes, which is insufficient to reflect realistic application scenarios where objects may be out of domain and language instructions are more diverse. In our work, we utilize the reasoning and knowledge storage of LLMs and perform VLN in a zero-shot manner as an initial attempt to reveal the potential usage of LLMs for VLN in the wild. A number of studies (Chen et al. 2021a; Deng, Narasimhan, and Russakovsky 2020; Chen et al. 2022b) have presented compelling methodologies that underscore the significance of topological maps in facilitating long-term planning, specifically for backtracking to prior locations. In addition, Dorbala et al. (Dorbala et al. 2022) use CLIP (Radford et al. 2021) to perform zero-shot VLN by chunking instructions into keyphrases, relying entirely on the text-image matching capability of CLIP to navigate. However, the planning and decision-making processes of the agents above are implicit and not accessible. On the contrary, benefiting from the intrinsic nature of LLMs, we are able to access the reasoning process of agents, making it explainable and controllable.

Large Language Models. With the massive success in large-scale language model training (Touvron et al. 2023; Brown et al. 2020; Chowdhery et al. 2022; Zhang et al. 2022; Wei et al.
2021), a new cohort of Large Language Models (LLMs) has shown evolutionary progress toward achieving Artificial General Intelligence (AGI) (Bubeck et al. 2023; OpenAI 2023). This burgeoning class of LLMs, underpinned by increasingly sophisticated architectures and training methodologies (Scao et al. 2022), has the potential to revolutionize various domains by offering unprecedented capabilities in natural language understanding and generation. The main concern for LLMs is that their knowledge is limited and fixed once training is finished. The latest works study how to utilize LLMs that interact with tools to expand their knowledge via plugins, including extending LLMs to process multi-modality content (Wu et al. 2023; Yongliang et al. 2023), teaching LLMs to access the internet with correct API calls (Schick et al. 2023), and expanding their knowledge with local databases to accomplish QA tasks (Peng et al. 2023). Another stream of work studies how to prompt LLMs in a hierarchical system to facilitate the alignment of reasoning and corresponding actions (Yao et al. 2022; Karpas et al. 2022), going beyond Chain of Thought (CoT) (Wei et al. 2022b). These works set up the preliminaries for building an embodied agent directly using LLMs.

LLMs in Robotics Navigation. The employment of Large Language Models (LLMs) in the field of robotics remains in its early stages (Vemprala et al. 2023; Bubeck et al. 2023). A handful of contemporary studies, however, have begun to explore the utilization of generative models for navigation. Shah et al. (Shah et al. 2023) employ GPT-3 (Brown et al. 2020) in an attempt to identify "landmarks" or sub-goals, while Huang et al. (Huang et al. 2022) concentrate on the application of an LLM for code generation. Zhou et al. (Zhou et al.
2023) use an LLM to extract commonsense knowledge about the relations between targets and objects in observations to perform zero-shot object navigation (ZSON) (Gadre et al. 2022; Majumdar et al. 2022). Despite these recent advancements, our study diverges in its concentration on converting visual scene semantics into input prompts for the LLM, directly performing VLN based on the commonsense knowledge and reasoning ability of LLMs. The work closest to ours is LGX (Dorbala, Mullen Jr, and Manocha 2023), but it addresses object navigation, where agents are not required to follow an instruction; moreover, it uses the GLIP model (Li et al. 2022a) to decide the stop probability and does not consider memorization of the navigation history, actions, and reasoning of the LLM.

Method
VLN Problem Formulation. We formulate the VLN problem as follows. Given a natural language instruction W composed of a series of words {w_1, w_2, w_3, …, w_n}, at every step t the agent interprets its current location via the simulator to obtain an observation O. This observation comprises N alternative views, representing the egocentric perspectives of the agent in varying orientations. Each unique view observation is denoted o_i (i ≤ N), with its associated angle direction a_i (i ≤ N). The observation can thus be defined as O_t ≜ [⟨o_1, a_1⟩, ⟨o_2, a_2⟩, …, ⟨o_N, a_N⟩]. Throughout the navigation process, the agent's action space is confined to the navigation graph G. The agent must select from the M = |C_{t+1}| navigable viewpoints, where C_{t+1} denotes the set of candidate viewpoints, by aligning the observation O_t^C ≜ [⟨o_1^C, a_1^C⟩, ⟨o_2^C, a_2^C⟩, …, ⟨o_M^C, a_M^C⟩] with the oracle W.
The agent predicts the subsequent action by selecting the relative angle a_i^C from O_t^C, then enacts this action through interaction with the simulator to transition from the current state s_t = ⟨v_t, θ_t, φ_t⟩ to s_{t+1} = ⟨v_{t+1}, θ_{t+1}, φ_{t+1}⟩, where v denotes the current viewpoint location, θ the current heading angle, and φ the current elevation angle of the agent. The agent also maintains a record of the state history h_t and adjusts the conditional transition probability between states, S_t = T(s_{t+1} | a_i^C, s_t, h_t), where T denotes the conditional transition probability distribution. In summary, the policy π parametrized by Θ that the agent is required to learn is based on the oracle W and the current observation O_t^C: π(a_t | W, O_t, O_t^C, S_t; Θ). In this study, NavGPT conducts the VLN task in a zero-shot manner, where Θ is not learned from VLN datasets but from the language corpus that the LLMs are trained on.

NavGPT
NavGPT is a system that interacts with environments, language guidance, and navigation history to perform action prediction. Let H_{<t+1} ≜ [⟨O_1, R_1, A_1⟩, ⟨O_2, R_2, A_2⟩, …, ⟨O_t, R_t, A_t⟩] be the navigation history of observation O, LLM reasoning R, and action A triplets for the previous t steps. To obtain the navigation decision A_{t+1}, NavGPT needs to synergize the visual perception from VFMs F, the language instruction W, the history H, and the navigation system principle P with the help of the prompt manager M, defined as follows:

⟨R_{t+1}, A_{t+1}⟩ = LLM(M(P), M(W), M(F(O_t)), M(H_{<t+1}))   (1)

Navigation System Principle P. The Navigation System Principle formulates the behavior of the LLM as a VLN agent. It clearly defines the VLN task and the basic reasoning format and rules for NavGPT at each navigation step. For example, NavGPT should move among the static viewpoints (positions) of a pre-defined graph of the environment by identifying the unique viewpoint ID, and it should not fabricate nonexistent IDs.

Visual Foundation Models F.
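One step of Eq. (1) can be sketched as below. This is a hedged, minimal sketch rather than the released NavGPT implementation: the prompt template, the `Thought:`/`Action:` output format, and the stub LLM are illustrative assumptions standing in for the real system-principle prompt and a GPT API call.

```python
import re

def prompt_manager(principle, instruction, obs_text, history):
    # M(.): aggregate every information source into one prompt (cf. Eq. (1)).
    return (f"{principle}\nInstruction: {instruction}\n"
            f"History: {history}\nObservation:\n{obs_text}\n"
            "Respond as:\nThought: <reasoning>\nAction: <viewpoint ID>")

def navgpt_step(llm, principle, instruction, obs_text, history):
    """One step of <R_{t+1}, A_{t+1}> = LLM(M(P), M(W), M(F(O_t)), M(H))."""
    out = llm(prompt_manager(principle, instruction, obs_text, history))
    thought = re.search(r"Thought:\s*(.*)", out).group(1)   # reasoning trace R
    action = re.search(r"Action:\s*(\S+)", out).group(1)    # chosen viewpoint A
    return thought, action

# Stub callable standing in for an actual GPT-4 / GPT-3.5 API call.
fake_llm = lambda prompt: "Thought: the bedroom should be ahead.\nAction: vp_0012"
thought, action = navgpt_step(fake_llm, "You are NavGPT, a navigation agent...",
                              "Walk into the bedroom.", "front: a hallway ...", "[]")
```

Parsing the generated text back into a viewpoint ID is what lets the system enact the decision in the simulator and append the ⟨O, R, A⟩ triplet to the history buffer.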
NavGPT as an LLM agent requires visual perception and expression ability from VFMs to translate the current environment's visual observation into a natural language description. The VFMs here play the role of translators, translating visual observations into their own languages, e.g. natural language, objects' bounding boxes, and objects' depth. Through prompt management, the visual perception results are reformatted and translated into pure natural language for LLMs to understand.

Navigation History H_{<t+1}. The navigation history is essential for NavGPT to evaluate the progress of instruction completion, to update the current state, and to make subsequent decisions. The history is composed of summarized descriptions of previous observations O_{<t+1} and actions A_{<t+1}, along with the reasoning thoughts R_{<t+1} from the LLM.

Prompt Manager M. The key to using an LLM as a VLN agent is to convert all the above content into natural language that the LLM can understand. This process is done by the prompt manager, which collects the results from different components and parses them into a single prompt for the LLM to make navigation decisions.

Visual Perceptron for NavGPT
In this section, we introduce the visual perception process of NavGPT. We treat visual signals as a foreign language and handle the visual input using different visual foundation models to translate it into natural language, as shown in Figure 2.

Figure 2: The process of forming a natural language description from visual input. We use 8 directions to represent a viewpoint and show the process of forming the description for one of the directions.

For an agent standing at any viewpoint in the environment, the observation is composed of egocentric views from different orientations. The number of total views is determined by the field of view of each view image and the relative angle of each view.
In our work, we set the field of view of each view to 45°, and turn the heading angle θ by 45° per view from 0° to 360°, giving 8 directions in total. Besides, we turn the elevation angle φ by 30° per view from 30° above the horizontal level to 30° below, 3 levels in total. As a result, we obtain 3 × 8 = 24 egocentric views for each viewpoint. To translate visual observations into natural language, we first utilize the BLIP-2 (Li et al. 2023a) model as the translator. With the strong text generation capability of LLMs, BLIP-2 achieves stunning zero-shot image-to-text generation quality. By carefully setting the granularity of the visual observation (the field of view and the total number of views in each observation), we prompt BLIP-2 to generate a decent language description of each view, with a detailed depiction of the shapes and colors of objects and the scenes they are in, while avoiding useless captions of views from a smaller FoV, from which only partial observation is available and recognition is hard even for humans. See the appendix for details. Notice that for the heading direction, the rotation interval is equal to the field of view, so there is no overlap between orientations. For the elevations, there is a 15° overlap between the top, middle, and down views. In NavGPT we mainly focus on the heading angle of agents during navigation; therefore, we prompt GPT-3.5 to summarize the scenes from the top, middle, and down views of each orientation into a single sentence. Besides natural language descriptions of the scene from BLIP-2, we also excavate lower-level features extracted by other vision models. These vision models serve as auxiliary translators, translating visual input into their own "language", such as the classes of objects and corresponding bounding boxes. The detection results are aggregated by the prompt manager into prompts for LLMs.
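The 24-view enumeration and per-direction summarization above can be sketched as follows. A minimal sketch under stated assumptions: `caption`, `summarize`, and `render_view` are hypothetical callables standing in for BLIP-2 captioning, GPT-3.5 summarization, and the Matterport3D simulator's view rendering, respectively.

```python
def describe_viewpoint(caption, summarize, render_view):
    """Enumerate 8 headings x 3 elevations = 24 views, caption each, then
    compress the 3 elevations of every heading into one summary sentence."""
    headings = range(0, 360, 45)              # 45-degree FoV, no overlap
    elevations = (30, 0, -30)                 # up / horizontal / down
    directions = {}
    for theta in headings:
        captions = [caption(render_view(theta, phi)) for phi in elevations]
        directions[theta] = summarize(captions)   # one sentence per heading
    return directions   # 8 natural-language descriptions, keyed by heading

# Toy stand-ins so the sketch is runnable without the real models.
views = describe_viewpoint(
    caption=lambda img: f"a view of {img}",
    summarize=lambda caps: " ".join(caps),
    render_view=lambda t, p: f"heading {t}, elevation {p}",
)
```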
In this work, we utilize Fast R-CNN (Girshick 2015) to extract the bounding boxes of objects in each egocentric view. After locating the objects, we calculate the relative heading angle between each object and the agent. We also extract the depth information of the center pixel of each object provided by the Matterport3D simulator (Anderson et al. 2018). With the depth, objects' relative orientations, and classes, we filter the detection results by keeping only objects within 3 meters of the current viewpoint. The results from VFMs will be processed by the prompt manager into the observation of the current viewpoint in natural language.

Synergizing Reasoning and Actions in LLMs

In the VLN task, the agent needs to learn the policy π(a_t | W, O_t, O_t^C, S_t; Θ), which is difficult because of the implicit connection between actions and observations, and demands intensive computation. In order to explicitly access and enhance the agent's comprehension of the current state during navigation, we follow ReAct (Yao et al. 2022) to expand the agent's action space to Ã = A ∪ R, where R ∈ L denotes the thought or reasoning trace of the agent in the entire language space L. The reasoning traces R of the agent do not trigger any interaction with the external environment; therefore, no observation is returned when the agent outputs reasoning during a navigation step. We synergize NavGPT's actions and thoughts by prompting it to make navigation decisions after outputting the reasoning trace at each step. Introducing the reasoning traces aims to bootstrap the LLMs in two aspects. Firstly, prompting the LLMs to think before choosing an action enables them to perform complex reasoning in planning and to create strategies to follow the instructions under new observations.
For example, as shown in Figure 3, NavGPT can generate a long-term navigation plan by analyzing the current observation and the instruction, performing higher-level planning such as decomposing the instruction and planning to reach sub-goals, which has never been seen explicitly in previous works. Secondly, including reasoning traces R in the navigation history H_{<t} enhances the problem-solving ability of NavGPT. By injecting reasoning traces into the navigation history, NavGPT inherits from the previous reasoning traces to reach a sub-goal with high-level planning consistently through steps, and can track the navigation progress with exception-handling abilities such as adjusting the plan.

NavGPT Prompt Manager

With the Navigation System Principle P, the translated results from VFMs, and the navigation history H_{<t}, the prompt manager parses and reformats them into prompts for LLMs. Details of the prompt are presented in the appendix. Specifically, for the Navigation System Principle P, the NavGPT prompt manager creates a prompt to convey the rules to LLMs, declaring the VLN task definition, defining the simulation environment for NavGPT, and restricting LLMs' behavior to the given reasoning format. For the perception results from VFMs F, the prompt manager gathers the results from each direction and orders the language descriptions by taking the current orientation of NavGPT as the front, as shown in Figure 2, arranging the descriptions from the 8 directions into the prompt by concatenating them clockwise. For the navigation history H_{<t+1}, the observation, reasoning, and action triples ⟨O_i, R_i, A_i⟩ are stored in a history buffer, as shown in Figure 1. Directly extracting all triples in the buffer would create a prompt too long for LLMs to accept.
To handle the length of the history, the prompt manager utilizes GPT-3.5 to summarize the observations from viewpoints in the trajectory, inserting the summarized observations into the observation, reasoning, and action triples in the prompt.

Experiment

Implementation Details. We evaluate NavGPT based on GPT-4 (OpenAI 2023) and GPT-3.5 on the R2R dataset (Anderson et al. 2018). The R2R dataset is composed of 7189 trajectories, each corresponding to three fine-grained instructions. The dataset is separated into train, val seen, val unseen, and test unseen splits, with 61, 56, 11, and 18 indoor scenes, respectively. We use the 783 trajectories in the 11 val unseen environments in all our experiments and for comparison with previous supervised approaches. We utilize BLIP-2 ViT-G FlanT5XL (Li et al. 2023a) as the image translator and Fast R-CNN (Girshick 2015) as the object detector. The depth information of objects is extracted from the Matterport3D simulator (Anderson et al. 2018) by taking the depth of the center pixel in the bounding box.

Evaluation Metrics. The evaluation of NavGPT utilizes standardized metrics from the R2R dataset. These include Trajectory Length (TL), denoting the average distance traveled by the agent; Navigation Error (NE), representing the mean distance from the agent's final location to the destination; Success Rate (SR), indicating the proportion of navigation episodes where the agent successfully reaches the target location within a 3-meter margin of error; Oracle Success Rate (OSR), the success rate if the agent could stop at the closest point to the goal on its trajectory; and Success Rate weighted by the normalized inverse of Path Length (SPL), which balances navigation precision and efficiency by adjusting the success rate based on the ratio of the optimal path length to the agent's predicted path length.

Qualitative Results

We study in detail the qualitative results of the reasoning traces from NavGPT.
We reveal the potential high-level planning capability of GPT-4 in embodied navigation tasks.

Reasoning Capability of GPT-4 for Language-Guided Navigation. As shown in Figure 3, with GPT-4, NavGPT can perform various types of reasoning and high-level planning during navigation. For short instructions, NavGPT can track the navigation progress through steps to accomplish a single action described in the instruction, similar to self-monitoring VLN agents (Ma et al. 2019; Zhu et al. 2020; Gao et al. 2023). For long instructions, NavGPT can break them down into sub-goals, similar to previous works on fine-graining R2R data (Hong et al. 2020b; He et al. 2021; Zhao et al. 2022), and plan to reach the destination by effectively identifying landmarks from observations, similar to works utilizing object information to perform cross-modality matching in VLN (Gao et al. 2021; Qi et al. 2020a, 2021). When navigating to a viewpoint with an unexpected observation, NavGPT can plan to explore the environment and use commonsense knowledge to assist decision-making, similar to VLN methods incorporating external knowledge (Li et al. 2022b; Gao et al. 2021; Li et al. 2023b).

History and Spatial Relative Relation Awareness of LLMs During Navigation. We examined NavGPT's awareness of historical and spatial relations by employing GPT-4 to delineate the trajectory in the navigation history and to construct a map of visited viewpoints using pyplot. The process involved extracting exclusively the actions A_{<t+1} and observations O_{<t+1}, as well as the entire navigation history H_{<t+1}. The specifics of the prompt are presented in the appendix. As shown in Figure 4, we observed that GPT-4 could effectively extract landmarks from the redundant observation descriptions and generate navigation history descriptions with actions. This could be a potential way of generating new trajectory instructions for VLN.
Besides, the results show GPT-4 can comprehensively understand the navigation history, and thus can perform the essential progress tracking during navigation. Moreover, as shown in Figure 4, GPT-4 can successfully capture the relative position relations between viewpoints and draw a top-down view of the trajectory over the visited viewpoints. By providing language descriptions of the actions taken by the agent, including the turning angles and relative distances between viewpoints, GPT-4 shows a stunning awareness of spatial relations. Such impressive reasoning ability supports NavGPT in performing the high-level planning shown in Figure 3, underlining the significant potential LLMs hold for embodied navigation tasks.

Comparison with Supervised Methods

We compare the results of using NavGPT to perform the sequential navigation task zero-shot with previous models trained on the R2R dataset. As shown in Table 1, NavGPT surpasses some supervised baselines and is comparable to some supervised agents. However, a significant discrepancy can be discerned. We suggest the limitations inhibiting the performance of LLMs in solving VLN can be primarily attributed to two factors: the precision of language-based depiction of visual scenes and the tracking capabilities regarding objects. NavGPT's functionality is heavily reliant on the quality of captions generated from VFMs. If the target object delineated in the instruction is absent in the observation description, NavGPT is compelled to explore the environment. The ideal circumstance entails all target objects being visible pursuant to the instruction.

Figure 3: Qualitative results of NavGPT. NavGPT can explicitly perform high-level planning for sequential action prediction, including decomposing instructions into sub-goals, integrating commonsense knowledge, identifying landmarks from observed scenes, tracking navigation progress, and handling exceptions with plan adjustment. The panels show: long instruction with progress tracking; short instruction with sub-goal planning; long instruction with exception handling; short instruction with progress tracking; and integrating commonsense knowledge. The example transcripts are as follows.

Instruction: Go forward past the bed and the stairs. Stop in the doorway straight ahead.
Init observation: … A bedroom with hardwood floors …
Thought: … The rear viewpoint seems to be in the direction of the bed, so I should move there first.
Action: action_maker("b0a6cf6e9b9043248ed1c1ed8db08436")
Observation: … A hallway with hardwood floors leads to a bathroom with a mirror and light fixtures …
Thought: I have moved past the bed, and now I see a hallway leading to a bathroom, … continue forward to find the stairs and the doorway straight ahead …
Action: action_maker("c53934a9642942ae9b8d9766ee1e51ac")
Observation: … A view of a hallway with hardwood floors and a staircase …
Thought: I have found the stairs, and now I need to find the doorway straight ahead …
Action: action_maker("bf10ccfbb089402d9f3e606803dadb0d")
Observation: … hardwood floors and a doorway …
Thought: I have reached a viewpoint where I can see a doorway straight ahead. I should move to the front navigable viewpoint to get closer to the doorway and check if I have reached the destination.
Action: action_maker("df14f9089a6646f0bbf5054e18fb8951")
Observation: … A hallway with hardwood floors and white doors leads to a white doorway …
Thought: … the destination of the doorway …

Instruction: Go up the stairs. Stop at the top of the stairs.
Observation: … A wooden staircase leads up to a room …
Thought: I am at the bottom of the stairs. I should continue going up the stairs by moving to the viewpoint '5b2de972faea4ca381cd72140f77b0c0'.
Action: action_maker("5b2de972faea4ca381cd72140f77b0c0")
Observation: … staircase leads down to a room with beams in a large barn with a ladder …
Thought: I am now on the stairs. I should continue going up the stairs by moving to the viewpoint '930cbd6da8f14dfba4d883c55a069f96'.
Action: action_maker("930cbd6da8f14dfba4d883c55a069f96")
Observation: … A wooden staircase with a railing leads to a room with wooden floors, …
Thought: I am almost at the top of the stairs. I should continue going up the stairs …

Instruction: Keep walking forward passing all the picture frames on the wall. Turn left at the corner before where the chairs are placed. Walk forward passing the big pots placed on the floor on your left. Turn left and stop in front of the sofa.
Observation: … A hallway with two bars, a fireplace, arches, and lights …
Thought: I have passed some picture frames on the wall. I should continue walking forward …
Action: action_maker("b882d05cc8d842879c647824f537a17c")
Observation: … A hallway with a large painting and vase on travertine tile flooring is being cleaned while wooden …
Thought: I have passed more picture frames on the wall. Now, I need to look for a corner before where the chairs are placed and turn left …
Action: action_maker("041abeb88f4d4a9cb085e359686aa9ed")
Observation: … and a large urn sitting on a tile floor …
Thought: I have reached the corner before where the chairs are placed. Now, I need to turn left and walk forward passing the big pots placed on the floor on my left …

Instruction: Walk into the house, and go through the kitchen. Leave the kitchen through the door on the right of the large black bookcase. Go down the stairs. Stop on the third stair from the top.
Init observation: … with hardwood floors and a dining table …
Thought: First, I need to go through the kitchen. The kitchen is in the front left direction, so I should move to the viewpoint '68aafa779b9c41eca16156cfddcedd2b'.
Action: action_maker("68aafa779b9c41eca16156cfddcedd2b")
Observation: … A dining room and kitchen with hardwood floors.
Thought: I am now in the kitchen. Next, I need to find the door on the right of the large black bookcase. I should move to the viewpoint 'e685b29863794f00b09c23e1536412d8' to get a better view of the kitchen and find the door.
Action: action_maker("e685b29863794f00b09c23e1536412d8")
Observation: … A kitchen with white cabinets, varying flooring and counter top materials, …
Thought: I can see the kitchen clearly now, but I still can't find the large black bookcase and the door on its right. I should move to the viewpoint '5b01ffae96874e60a9decd1fbac7e4ee' to explore the kitchen further.
Action: action_maker("5b01ffae96874e60a9decd1fbac7e4ee")
Observation: … A kitchen with a marble top island, counter top, and fireplace.
Thought: I found the large black bookcase in the rear left direction, and the door on its right. I should move to the viewpoint 'e62661ec94d04adda7fee9a81d429d2b' to get closer to the door.
Action: action_maker("e62661ec94d04adda7fee9a81d429d2b")
Observation: … A hallway with hardwood floors and a door leads to a room with a bookcase …
Thought: I am now closer to the door …

Thought: first I need to walk towards the sink. The sink is not visible in the current viewpoint, but there is a bathroom to the left, which might lead me closer to the sink.
Thought: I am now in the living room area, and I can see the fireplace. I should move closer to the fireplace and look for the stool and table.
However, the inherent granularity of language description inevitably incurs a loss of information. Moreover, NavGPT must manage the length of the navigation history to prevent excessively verbose descriptions as the steps accrue. To this end, a summarizer is implemented, albeit at the cost of further information loss. This diminishes NavGPT's tracking ability, impeding the formation of seamless perceptions of the entire environment as the trajectory lengthens.

Effect of Visual Components

To investigate the effectiveness of the visual components in NavGPT, we perform additional experiments with a baseline built on GPT-3.5 for its easier access and budget-friendly cost. To evaluate the zero-shot ability in various environments, we construct a new validation split sampled from both the original training set and the validation unseen set. The training and validation unseen sets contain 61 and 11 scenes respectively, 72 scenes in total. We randomly pick 1 trajectory from each of the 72 environments, each associated with 3 instructions. In total, we sample 216 examples to conduct the ablation study.

Effect of Granularity in Visual Observation Descriptions. The field of view (FoV) of an image critically influences BLIP-2's captioning ability, with an overly large FoV leading to generalized room descriptions and an overly small FoV hindering object recognition due to limited content. As shown in Table 2, we investigate three granularities of visual representation of a viewpoint. Specifically, variant #1 utilizes images with a 60° FoV, rotating the heading angle 30° clockwise to obtain 12 views per viewpoint, while variants #2 and #3 utilize images with 30° and 45° FoV, rotating the elevation angle 30° from top to down and the heading angle 30° and 45° clockwise, to form 36 views and 24 views, respectively.
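The view counts of the three variants follow directly from the heading step and the number of elevation levels; a small sanity-check sketch (the helper is ours, for illustration only):

```python
def view_count(heading_step, elevation_levels):
    """Views per viewpoint: a full 360-degree heading sweep at the given
    step, repeated for each elevation level."""
    return (360 // heading_step) * elevation_levels

# variant #1: FoV 60, heading step 30, 1 elevation  -> 12 views
# variant #2: FoV 30, heading step 30, 3 elevations -> 36 views
# variant #3: FoV 45, heading step 45, 3 elevations -> 24 views
```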
From the results, we find that using a 45° FoV generates the most suitable natural language descriptions for navigation, surpassing variants #1 and #2 by 6.48% and 2.78%, respectively.

Figure 4: We evaluate GPT-4 on a case where NavGPT successfully follows the ground-truth path, using only the historical actions A_{<t+1} and observations O_{<t+1} to generate an instruction (without the reasoning traces R_{<t+1}, to avoid information leaking), and using the entire navigation history H_{<t+1} to draw a top-down trajectory. The figure shows the top-down trajectory drawn by GPT-4, the instruction generated by GPT-4, and the trajectory of NavGPT.

Ground-truth instruction: Exit the sewing room. Turn right. Go toward the glass cabinet with the dolls in it. Turn into the doorway on the left. Pass the bed and go through the next doorway on the left into the bathroom. Wait by the sink.

Instruction generated by GPT-4: Start at the initial point, move to the hallway with the violin hanging from the ceiling, proceed to the building with chandeliers and wooden floors, then navigate to the room with a statue of a horse on a shelf, next, move towards the rooms with rugs and paintings, then proceed to the room with bathroom fixtures and framed pictures on the walls.

Training Schema      Method                                     TL     NE↓   OSR↑  SR↑  SPL↑
Train Only           Seq2Seq (Anderson et al. 2018)             8.39   7.81  28    21   –
Train Only           Speaker-Follower (Fried et al. 2018)       –      6.62  45    35   –
Train Only           EnvDrop (Tan, Yu, and Bansal 2019)         10.70  5.22  –     52   48
Pretrain + Finetune  PREVALENT (Hao et al. 2020)                10.19  4.71  –     58   53
Pretrain + Finetune  VLN↻BERT (Hong et al. 2021)                12.01  3.93  69    63   57
Pretrain + Finetune  HAMT (Chen et al. 2021b)                   11.46  2.29  73    66   61
Pretrain + Finetune  DuET (Chen et al. 2022b)                   13.94  3.31  81    72   60
No Train             DuET (Init. LXMERT (Tan and Bansal 2019))  22.03  9.74  7     1    0
No Train             NavGPT (Ours)                              11.45  6.46  42    34   29

Table 1: Comparison with previous methods on the R2R validation unseen split.
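As a reading aid for the SR and SPL columns, the two metrics can be computed as below; the helper names are ours, following the standard R2R definitions:

```python
def success_rate(final_dists, threshold=3.0):
    """SR: fraction of episodes ending within `threshold` meters of the goal."""
    return sum(d <= threshold for d in final_dists) / len(final_dists)

def spl(successes, shortest_lens, path_lens):
    """SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i), where S_i is episode
    success (0/1), l_i the shortest-path length, and p_i the agent's path
    length, so long detours are penalized even on successful episodes."""
    terms = (s * l / max(p, l)
             for s, l, p in zip(successes, shortest_lens, path_lens))
    return sum(terms) / len(successes)
```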
Granularity  #  TL     NE↓   OSR↑   SR↑    SPL↑
FoV@60       1  12.38  9.07  14.35  10.19  6.52
FoV@30       2  12.67  8.92  15.28  13.89  9.12
FoV@45       3  12.18  8.02  26.39  16.67  13.00

Table 2: The effect of granularity in visual observation.

Observation  #  TL     NE↓   OSR↑   SR↑    SPL↑
Baseline     1  16.11  9.83  15.28  11.11  6.92
+ Obj        2  11.07  8.88  23.34  15.97  11.71
+ Obj + Dis  3  12.18  8.02  26.39  16.67  13.00

Table 3: The effect of additional information.

Effect of Semantic Scene Understanding and Depth Estimation. NavGPT also collaborates with other visual foundation models to enhance its perception of the environment. We investigate the effectiveness of adding object information and the relative distance between the agent and the detected objects. We construct a baseline method based on the caption results from BLIP-2 and powered by GPT-3.5. As shown in Table 3, adding object information increases the SR by 4.86% compared with the baseline, as the additional object information emphasizes the salient objects in the scenes. Moreover, we observed a phenomenon in which agents failed to reach the destination because they did not know how close they were to it: once the target viewpoint was visible in sight, they tended to stop immediately. Therefore, by adding depth information, the agent gains a better understanding of its current position, which further raises the SR by 0.7% and the SPL by 1.29%.

Conclusion

In this work, we explore the potential of utilizing LLMs in embodied navigation tasks. We present NavGPT, an autonomous LLM system specifically engineered for language-guided navigation, possessing the ability to process multi-modal inputs and unrestricted language guidance, engage with open-world environments, and maintain a navigation history. Limited by the quality of language descriptions of visual scenes and its object tracking ability, NavGPT's zero-shot performance on VLN still falls short of trained methods.
However, the reasoning traces of GPT-4 illuminate the latent potential of LLMs in embodied navigation planning. The interaction of LLMs with downstream specialized models, or the development of multi-modal LLMs for navigation, heralds the future of versatile VLN agents.

References

Ahn, M.; Brohan, A.; Brown, N.; Chebotar, Y.; et al. 2022. Do As I Can and Not As I Say: Grounding Language in Robotic Affordances. arXiv preprint arXiv:2204.01691.
Anderson, P.; Wu, Q.; Teney, D.; Bruce, J.; Johnson, M.; Sünderhauf, N.; Reid, I.; Gould, S.; and Van Den Hengel, A. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In CVPR.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; et al. 2020. Language models are few-shot learners. In NeurIPS.
Bubeck, S.; Chandrasekaran, V.; Eldan, R.; Gehrke, J.; et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
Chen, K.; Chen, J. K.; Chuang, J.; Vázquez, M.; and Savarese, S. 2021a. Topological planning with transformers for vision-and-language navigation. In CVPR.
Chen, S.; Guhur, P.-L.; Schmid, C.; and Laptev, I. 2021b. History aware multimodal transformer for vision-and-language navigation. In NeurIPS.
Chen, S.; Guhur, P.-L.; Tapaswi, M.; Schmid, C.; et al. 2022a. Learning from unlabeled 3D environments for vision-and-language navigation. In ECCV.
Chen, S.; Guhur, P.-L.; Tapaswi, M.; Schmid, C.; et al. 2022b. Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation. In CVPR.
Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Deng, Z.; Narasimhan, K.; and Russakovsky, O. 2020. Evolving graphical planner: Contextual global planning for vision-and-language navigation. In NeurIPS.
Dorbala, V. S.; Mullen Jr, J.
F.; and Manocha, D. 2023. Can an Embodied Agent Find Your "Cat-shaped Mug"? LLM-Based Zero-Shot Object Navigation. arXiv preprint arXiv:2303.03480.
Dorbala, V. S.; Sigurdsson, G.; Piramuthu, R.; et al. 2022. CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation. arXiv preprint arXiv:2211.16649.
Driess, D.; Xia, F.; Sajjadi, M. S.; Lynch, C.; et al. 2023. PaLM-E: An Embodied Multimodal Language Model. arXiv preprint arXiv:2303.03378.
Fried, D.; Hu, R.; Cirik, V.; Rohrbach, A.; et al. 2018. Speaker-follower models for vision-and-language navigation. In NeurIPS.
Gadre, S. Y.; Wortsman, M.; Ilharco, G.; Schmidt, L.; and Song, S. 2022. CLIP on Wheels: Open-Vocabulary Models are (Almost) Zero-Shot Object Navigators. arXiv preprint arXiv:2203.10421v1.
Gao, C.; Chen, J.; Liu, S.; Wang, L.; Zhang, Q.; and Wu, Q. 2021. Room-and-object aware knowledge reasoning for remote embodied referring expression. In CVPR.
Gao, C.; Peng, X.; Yan, M.; Wang, H.; et al. 2023. Adaptive Zone-Aware Hierarchical Planner for Vision-Language Navigation. In CVPR.
Girshick, R. 2015. Fast R-CNN. In ICCV.
Gu, J.; Stefani, E.; Wu, Q.; Thomason, J.; and Wang, X. 2022. Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions. In ACL.
Hao, W.; Li, C.; Li, X.; Carin, L.; and Gao, J. 2020. Towards learning a generic agent for vision-and-language navigation via pre-training. In CVPR.
He, K.; Huang, Y.; Wu, Q.; Yang, J.; et al. 2021. Landmark-RxR: Solving Vision-and-Language Navigation with Fine-Grained Alignment Supervision. In NeurIPS.
Hong, Y.; Rodriguez, C.; Qi, Y.; Wu, Q.; and Gould, S. 2020a. Language and visual entity relationship graph for agent navigation. In NeurIPS.
Hong, Y.; Rodriguez-Opazo, C.; Wu, Q.; and Gould, S. 2020b. Sub-Instruction Aware Vision-and-Language Navigation. In NeurIPS.
Hong, Y.; Wang, Z.; Wu, Q.; and Gould, S. 2022. Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation. In CVPR.
Hong, Y.; Wu, Q.; Qi, Y.; Rodriguez-Opazo, C.; and Gould, S. 2021. VLN↻BERT: A recurrent vision-and-language BERT for navigation. In CVPR.
Hong, Y.; Zhou, Y.; Zhang, R.; Dernoncourt, F.; Bui, T.; Gould, S.; and Tan, H. 2023. Learning navigational visual representations with semantic map supervision. In ICCV.
Huang, C.; Mees, O.; Zeng, A.; and Burgard, W. 2022. Visual Language Maps for Robot Navigation. arXiv preprint arXiv:2210.05714.
Karpas, E.; Abend, O.; Belinkov, Y.; Lenz, B.; et al. 2022. MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. arXiv preprint arXiv:2205.00445.
Krantz, J.; Wijmans, E.; Majumdar, A.; Batra, D.; and Lee, S. 2020. Beyond the nav-graph: Vision-and-language navigation in continuous environments. In ECCV.
Ku, A.; Anderson, P.; Patel, R.; Ie, E.; et al. 2020. Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding. In EMNLP.
Li, J.; and Bansal, M. 2023. PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation. In NeurIPS.
Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023a. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597.
Li, J.; Tan, H.; and Bansal, M. 2022. EnvEdit: Environment Editing for Vision-and-Language Navigation. In CVPR.
Li, L. H.; Zhang, P.; Zhang, H.; Yang, J.; Li, C.; Zhong, Y.; Wang, L.; Yuan, L.; Zhang, L.; Hwang, J.-N.; et al. 2022a. Grounded language-image pre-training. In CVPR.
Li, M.; Wang, Z.; Tuytelaars, T.; and Moens, M.-F. 2023b. Layout-aware Dreamer for Embodied Referring Expression Grounding. In AAAI.
Li, X.; Zhang, Y.; Yuan, W.; et al. 2022b. Incorporating External Knowledge Reasoning for Vision-and-Language Navigation with Assistant's Help. Applied Sciences.
Ma, C.-Y.; Lu, J.; Wu, Z.; AlRegib, G.; et al. 2019. Self-monitoring navigation agent via auxiliary progress estimation. arXiv preprint arXiv:1901.03035.
Majumdar, A.; Aggarwal, G.; Devnani, B.; Hoffman, J.; and Batra, D. 2022. ZSON: Zero-shot object-goal navigation using multimodal goal embeddings. arXiv preprint arXiv:2206.12403.
OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774.
Parvaneh, A.; Abbasnejad, E.; Teney, D.; Shi, J. Q.; et al. 2020. Counterfactual vision-and-language navigation: Unravelling the unseen. In NeurIPS.
Pashevich, A.; Schmid, C.; and Sun, C. 2021. Episodic transformer for vision-and-language navigation. In ICCV.
Peng, B.; Galley, M.; He, P.; Cheng, H.; et al. 2023. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813.
Qi, Y.; Pan, Z.; Hong, Y.; Yang, M.-H.; van den Hengel, A.; and Wu, Q. 2021. The road to know-where: An object-and-room informed sequential BERT for indoor vision-language navigation. In ICCV.
Qi, Y.; Pan, Z.; Zhang, S.; Hengel, A. v. d.; and Wu, Q. 2020a. Object-and-action aware model for visual language navigation. In ECCV.
Qi, Y.; Wu, Q.; Anderson, P.; Wang, X.; et al. 2020b. REVERIE: Remote embodied visual referring expression in real indoor environments. In CVPR.
Qiao, Y.; Qi, Y.; Hong, Y.; Yu, Z.; Wang, P.; and Wu, Q. 2023a. HOP+: History-enhanced and Order-aware Pre-training for Vision-and-Language Navigation. IEEE TPAMI.
Qiao, Y.; Qi, Y.; Yu, Z.; Liu, J.; and Wu, Q. 2023b. March in chat: Interactive prompting for remote embodied referring expression. In ICCV.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; et al. 2021. Learning transferable visual models from natural language supervision. In ICML.
Scao, T. L.; Wang, T.; Hesslow, D.; Saulnier, L.; et al. 2022. What Language Model to Train if You Have One Million GPU Hours? arXiv preprint arXiv:2210.15424.
Schick, T.; Dwivedi-Yu, J.; Dessì, R.; Raileanu, R.; et al. 2023.
Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761.
Shah, D.; Osiński, B.; Levine, S.; et al. 2023. LM-Nav: Robotic navigation with large pre-trained models of language, vision, and action. In CoRL.
Tan, H.; and Bansal, M. 2019. LXMERT: Learning Cross-Modality Encoder Representations from Transformers. In EMNLP.
Tan, H.; Yu, L.; and Bansal, M. 2019. Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout. In NAACL.
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Vemprala, S.; Bonatti, R.; Bucker, A.; and Kapoor, A. 2023. ChatGPT for robotics: Design principles and model abilities. arXiv preprint arXiv:2306.17582.
Wang, S.; Montgomery, C.; Orbay, J.; Birodkar, V.; et al. 2022. Less is More: Generating Grounded Navigation Instructions from Landmarks. In CVPR.
Wang, X.; Huang, Q.; Celikyilmaz, A.; Gao, J.; et al. 2019. Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In CVPR.
Wang, Z.; Li, J.; Hong, Y.; et al. 2023. Scaling data generation in vision-and-language navigation. In ICCV.
Wei, J.; Bosma, M.; Zhao, V. Y.; Guu, K.; Yu, A. W.; Lester, B.; Du, N.; Dai, A. M.; and Le, Q. V. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D.; et al. 2022a. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.
Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Chi, E.; Le, Q.; and Zhou, D. 2022b. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.
Wu, C.; Yin, S.; Qi, W.; Wang, X.; Tang, Z.; and Duan, N. 2023. Visual ChatGPT: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671.
Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2022. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.
Shen, Y.; Song, K.; Tan, X.; Li, D.; et al. 2023. HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace. arXiv:2303.17580.
Zhang, S.; Roller, S.; Goyal, N.; Artetxe, M.; Chen, M.; Chen, S.; Dewan, C.; Diab, M.; Li, X.; Lin, X. V.; et al. 2022. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.
Zhao, C.; Qi, Y.; and Wu, Q. 2023. Mind the Gap: Improving Success Rate of Vision-and-Language Navigation by Revisiting Oracle Success Routes. In ACM MM.
Zhao, Y.; Chen, J.; Gao, C.; Wang, W.; Yang, L.; Ren, H.; Xia, H.; and Liu, S. 2022. Target-Driven Structured Transformer Planner for Vision-Language Navigation. In ACM MM.
Zhou, K.; Zheng, K.; Pryor, C.; Shen, Y.; Jin, H.; Getoor, L.; and Wang, X. E. 2023. ESC: Exploration with Soft Commonsense Constraints for Zero-shot Object Navigation. arXiv preprint arXiv:2301.13166.
Zhu, F.; Zhu, Y.; Chang, X.; and Liang, X. 2020. Vision-language navigation with self-supervised auxiliary reasoning tasks. In CVPR.
Zhu, W.; Qi, Y.; Narayana, P.; Sone, K.; Basu, S.; Wang, X. E.; Wu, Q.; Eckstein, M. P.; and Wang, W. Y. 2022. Diagnosing Vision-and-Language Navigation: What Really Matters. In NAACL.
Improving Diffusion-Based Image Restoration with Error Contraction and Error Correction

Qiqi Bao1, Zheng Hui2, Rui Zhu3, Peiran Ren2, Xuansong Xie2, Wenming Yang1*
1Tsinghua University 2Institute for Intelligent Computing, Alibaba Group 3City, University of London
bqq19@mails.tsinghua.edu.cn, zheng hui@aliyun.com, rui.zhu@city.ac.uk, peiran r@sohu.com, xingtong.xxs@taobao.com, yang.wenming@sz.tsinghua.edu.cn

Abstract

The generative diffusion prior captured from an off-the-shelf denoising diffusion generative model has recently attracted significant interest. However, existing attempts to adapt diffusion models to noisy inverse problems either fail to achieve satisfactory results or require a few thousand iterations to achieve high-quality reconstructions. In this work, we propose a diffusion-based image restoration method with error contraction and error correction (DiffECC). Two strategies are introduced to contract the restoration error in the posterior sampling process. First, we combine existing CNN-based approaches with diffusion models to ensure data consistency from the beginning. Second, to amplify the error-contraction effects of the noise, a restart sampling algorithm is designed. In the error correction strategy, an estimation-correction idea is proposed for both the data term and the prior term. Solving them iteratively within the diffusion sampling framework leads to superior image generation results. Experimental results on image restoration tasks such as super-resolution (SR), Gaussian deblurring, and motion deblurring demonstrate that our approach can reconstruct high-quality images compared with state-of-the-art sampling-based diffusion models.

Introduction

Low-level vision tasks in image restoration, such as image denoising, image super-resolution (SR), and image deblurring, can be cast as inverse problems y = A(x) + n, where x stands for the original image, A(·) is the forward measurement operator, and n represents the noise.
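The forward model y = A(x) + n can be sketched in a few lines; the 1-D signal and the downsampling operator below are illustrative assumptions, not the paper's actual setup:

```python
import random

def degrade(x, A, noise_std, seed=0):
    """Forward measurement model y = A(x) + n: apply the degradation
    operator A (e.g. downsampling or blur), then add Gaussian noise n."""
    rnd = random.Random(seed)
    y_clean = A(x)
    return [v + rnd.gauss(0.0, noise_std) for v in y_clean]

# e.g. 2x downsampling of a 1-D signal as A (a super-resolution measurement)
signal = [float(i) for i in range(8)]
y = degrade(signal, lambda s: s[::2], noise_std=0.05)
```

Restoration then amounts to inverting this map: recovering x from the shorter, noisy y, which is ill-posed and requires a prior over x.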
The inverse problems aim to infer the underlying signal from measurements and yield a high-quality image. Recently, diffusion models (Song et al. 2020; Song and Ermon 2020; Dhariwal and Nichol 2021; Ho, Jain, and Abbeel 2020; Nichol and Dhariwal 2021; Karras et al. 2022; Chung, Sim, and Ye 2022a; Mokady et al. 2022; Liu et al. 2023; Permenter and Yuan 2023) have shown state-of-the-art performance in image generation compared to Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and Variational Autoencoders (VAEs).

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: An example visualization of the intermediate results for the prediction of $\hat{x}_{0|t}$ in DPS (Chung et al. 2023).

Diffusion models define a forward process that maps data to noise by gradually adding Gaussian noise to the input. During the reverse sampling process, diffusion models start with a pure Gaussian noise image and progressively sample a less noisy image until reaching a clean one. Such diffusion models provide a parameterized prior over high-dimensional data distributions. In addition to their unconditional generative power, diffusion models have achieved remarkable success in solving inverse problems (Wang, Yu, and Zhang 2022; Abu-Hussein, Tirer, and Giryes 2022; Meng and Kabashima 2022; Kawar et al. 2022b; Song et al. 2023; Fabian, Tinaz, and Soltanolkotabi 2023; Murata et al. 2023; Song et al. 2021; Chung, Lee, and Ye 2023). Chung et al. (Chung et al. 2023) proposed a Diffusion Posterior Sampling (DPS) method for solving general noisy non-linear inverse problems. However, denoising diffusion models adapted for image restoration, which start sampling from pure Gaussian noise, are slow: DPS requires a few hundred iterations to achieve high-quality reconstructions.
Denoising Diffusion Null-Space Model (DDNM) (Wang, Yu, and Zhang 2022) decomposes samples into the range-space and the null-space of the measurement. By refining the null-space during the reverse diffusion process, DDNM assures data consistency and incorporates priors from diffusion models. Though DDNM ensures data consistency from the beginning that helps reduce iterations, the ability to generate images with higher quality is constrained. In this work, we depend on generative priors from pretrained unconditional diffusion models. Data distribution is modeled regardless of the forward measurement operator The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 756 A(·) and can be generalized to different degradations. From Fig. 1, we can see that predicted images contain little information in the first line during the reverse diffusion process. The inaccuracy of the initial prediction in the early stage causes accumulated errors for image inverse tasks. In (Xu et al. 2023), Xu et al. find that SDE-based samplers consist of the discretization error along the trajectory. Both the initial prediction error and the discretization error lead to increased sampling steps to deliver higher sample quality. Therefore, we propose an error contraction strategy by introducing the accurate initial prediction and the restart sampling operation. First, to rectify estimation ˆx0|t to guarantee the data consistency in the initial phase, outputs from the existing neural network for instance the RealESRNet (Wang et al. 2021) are utilized. The reverse diffusion path is reduced to T ′ steps, where T ′ < T. The priors encapsulated from the pretrained neural network help generate a better initial point and contract errors in the initial phase. Second, rather than using SDE-based samplers in (Abu-Hussein, Tirer, and Giryes 2022; Chung et al. 2023; Wang, Yu, and Zhang 2022; Song et al. 
2023; Chung, Lee, and Ye 2023), ODE-based samplers with a restart sampling algorithm are employed. The deterministic backward process reduces the discretization error, while the forward-backward restart sampling operation strengthens the contraction effect.

Apart from the initialization error and the discretization error, the approximation error of the learned neural network and the prediction of the natural image distribution for conditional image generation also affect the realness and consistency of the reconstructed images. We design an error correction strategy to solve the resulting optimization problem. The error correction strategy is composed of two iterative steps: an efficient Adam optimization of the neural network's prediction, and one step of gradient descent extended from the DPS framework.

Contributions. The main contributions are summarized as follows:
• We propose an error contraction strategy by integrating existing neural network priors and harnessing the restart sampling technique to achieve accurate reconstruction.
• We design an error correction strategy by imposing the prior term to correct the neural estimation and reconstructing y given the measurement model iteratively within the diffusion sampling framework.
• Compared with state-of-the-art methods, our model achieves superior performance on different image restoration tasks such as image SR, Gaussian deblurring, and motion deblurring.

Background
Score-based Generative Formulation
Let $x_0$ be a random variable with data distribution $q_0(x_0) = p_{\text{data}}(x_0)$. Diffusion is the process of progressively adding Gaussian noise to the observation $x_0$ to transform $q_0(x_0)$ at time 0 into a normal distribution $q_T(x_T)$ at time T. Song et al. (Song et al. 2020) defined the forward SDE as

$\mathrm{d}x = f(t)\,x\,\mathrm{d}t + g(t)\,\mathrm{d}w_t,$ (1)

where $w_t$ is the standard Wiener process, and $f(t)$, $g(t)$ are the drift and diffusion coefficients, respectively. The forward process in Eq. (1) has the corresponding reverse process from T to 0:

$\mathrm{d}x_t = \left[f(t)\,x_t - g^2(t)\,\nabla_x \log q_t(x_t)\right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{w}_t,$ (2)

where $\nabla_x \log q_t(x_t)$ is the score function of $q_t(x_t)$. For the specific choice $f(t) = -\frac{1}{2}\beta(t)$ and $g(t) = \sqrt{\beta(t)}$, the VP SDE (Song et al. 2020) has the form

$\mathrm{d}x = -\frac{\beta(t)}{2}\,x\,\mathrm{d}t + \sqrt{\beta(t)}\,\mathrm{d}w_t,$ (3)

where $\beta(t) = \beta_{\min} + t\,(\beta_{\max} - \beta_{\min})$ is the noise schedule of the forward process. The corresponding reverse SDE of Eq. (3) is

$\mathrm{d}x = \left[-\frac{\beta(t)}{2}\,x - \beta(t)\,\nabla_{x_t}\log q_t(x_t)\right]\mathrm{d}t + \sqrt{\beta(t)}\,\mathrm{d}\bar{w}_t.$ (4)

Song et al. (Song et al. 2020) proved that the ordinary differential equation (ODE) associated with Eq. (2), dubbed the probability flow ODE, is

$\dfrac{\mathrm{d}x_t}{\mathrm{d}t} = f(t)\,x_t - \frac{1}{2}\,g^2(t)\,\nabla_x \log q_t(x_t).$ (5)

To estimate $\nabla_{x_t}\log q_t(x_t)$, Song et al. (Song et al. 2020) trained a time-dependent score-based model $s_\theta(x_t, t)$ via

$\min_\theta \mathbb{E}_t\left\{\lambda(t)\,\mathbb{E}_{x_0, x_t}\left[\left\|s_\theta(x_t, t) - \nabla_{x_t}\log q_{0t}(x_t \mid x_0)\right\|_2^2\right]\right\},$ (6)

where $\lambda(t)$ is a positive weighting coefficient, t is uniformly sampled from [0, T], and $x_t \sim q(x_t \mid x_0)$.

Forward and Reverse Diffusion Processes
Eq. (4) is the continuous version of the diffusion process in the denoising diffusion probabilistic model (DDPM) formulation (Ho, Jain, and Abbeel 2020). One forward step of (discrete) DDPM is

$x_t = \sqrt{1-\beta_t}\,x_{t-1} + \sqrt{\beta_t}\,\epsilon_{t-1},$ (7)

where $\epsilon_{t-1} \sim \mathcal{N}(0, I)$. Using the properties of Gaussians, we can sample $x_t$ directly from $x_0$ as

$x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,$ (8)

where $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^{t}\alpha_i$. One reverse sampling step is

$x_{t-1} = \dfrac{1}{\sqrt{\alpha_t}}\left(x_t - \dfrac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, t)\right) + \sigma_t\,\epsilon_t,$ (9)

where $s_\theta(x_t, t) = -\dfrac{\epsilon_\theta(x_t, t)}{\sqrt{1-\bar{\alpha}_t}}$. Song et al. (Song, Meng, and Ermon 2020) proposed the denoising diffusion implicit model (DDIM) formulation to enable a faster sampling process. The reverse sampling step in Eq. (9) is rewritten as

$x_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\,\hat{x}_{0|t}(x_t) + \sqrt{1-\bar{\alpha}_{t-1}-\sigma_{\eta_t}^2}\,\epsilon_\theta(x_t, t) + \sigma_{\eta_t}\,\epsilon_t,$ (10)

where $\sigma_{\eta_t}$ controls the stochasticity of the diffusion process. By setting $\sigma_{\eta_t} = 0$, the reverse process beyond the initial randomization becomes deterministic.
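The deterministic DDIM step above (Eq. (10) with $\sigma_{\eta_t}=0$) can be sketched in a few lines of numpy; `ddim_step` is an illustrative name, and the noise prediction $\epsilon_\theta(x_t,t)$ is passed in as an argument rather than computed by a network.

```python
import numpy as np

def ddim_step(x_t, eps_pred, abar_t, abar_prev):
    """One deterministic DDIM reverse step (Eq. 10 with sigma_eta = 0).

    abar_t, abar_prev are the cumulative products alpha-bar at the current
    and previous timesteps; eps_pred plays the role of eps_theta(x_t, t).
    """
    # Eq. (8) solved for x0: the model's current estimate of the clean image
    x0_hat = (x_t - np.sqrt(1.0 - abar_t) * eps_pred) / np.sqrt(abar_t)
    # re-noise the estimate to the previous (less noisy) marginal
    return np.sqrt(abar_prev) * x0_hat + np.sqrt(1.0 - abar_prev) * eps_pred
```

A useful sanity check: if the noise prediction is exact, one step maps $x_t$ onto the exact marginal $\sqrt{\bar{\alpha}_{t-1}}x_0 + \sqrt{1-\bar{\alpha}_{t-1}}\epsilon$, which is why the deterministic sampler avoids accumulating the stochastic single-step noise.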
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 757 Figure 2: Illustration of our proposed DiffECC method. The gray boxes present the error contraction strategy. The yellow box presents the error correction strategy. The green arrows indicate the restart sampling operation. The blue arrows indicate the noise correction route. The red arrows indicate the corrected route of ˆx0|t. Diffusion Models for Inverse Problems The general form of inverse problems can be formulated as y = A(x0) + n, x0 ∈RD, y ∈Rd, n ∈Rd, (11) where A(·) : RD →Rd is the known forward measurement operator and n ∼N(0, σ2 yI) is the white Gaussian noise. The likelihood function can be written as p(y|x0) = 1 q (2π)nσ2n y exp  −∥y −A(x0)∥2 2 2σ2y  , (12) with mean A(x0) and variance σ2 y. The goal for the inverse problems is to recover ˆx0 ∈RD from a degraded image y. As in (Chung, Sim, and Ye 2022b; Chung et al. 2023; Wang, Yu, and Zhang 2022; Zhu et al. 2023), we can use diffusion models to solve inverse problems by replacing the score function in Eq. (4) with the conditional score function ∇xt log p (xt |y ). By Bayes rule, we can derive the following equation: ∇xt log p (xt |y ) = ∇xt log p (xt) | {z } unconditional score + ∇xt log p (y |xt ) | {z } adversarial gradient ✭✭✭✭✭✭✭ −∇xt log p (y). (13) To adjust the control intensity, Classifier Guidance scales the adversarial gradient by a γ parameter: ∇xt log p (xt |y ) = ∇xt log p (xt) + γ∇xt log p (y |xt ) , (14) where the first term can be approximated with the pretrained score function sθ(xt, t), and the second term is the guidance term with the conditional score of pt(y|xt). Method Error Contraction Strategy Accurate initial prediction. To restore the data distribution of the high-quality image from the degraded counterpart y, the marginal distribution can be written as p(x0|y) = Z pθ (xT ) T Y t=1 p(t) θ (xt−1 |xt, y )dx1:T . (15) In Fig. 1, the predicted ˆx0|t is seriously destroyed when t is close to t = T. 
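To make the guidance term concrete, here is a minimal numpy sketch of the data-consistency correction driven by the likelihood gradient (cf. Eq. (14) and the correcting step in Eq. (27)), specialized to a *linear* operator A. The names `dps_correction`, `forward_op`, and `adjoint_op` are hypothetical; the chain rule through the denoiser (which DPS takes w.r.t. $x_t$) is omitted, and the $1/\sigma_y^2$ factor is folded into the step size ζ — a sketch of the idea, not the paper's implementation.

```python
import numpy as np

def dps_correction(x0_hat, y, forward_op, adjoint_op, zeta):
    """One data-consistency step: x0' = x0_hat - zeta * grad ||y - A(x0_hat)||^2.

    For linear A the gradient w.r.t. x0_hat is -2 A^T (y - A x0_hat), so the
    update nudges the estimate toward agreement with the measurement y.
    """
    residual = y - forward_op(x0_hat)
    return x0_hat + 2.0 * zeta * adjoint_op(residual)
```

Applied repeatedly with a small ζ, this contracts the measurement residual geometrically, which is the data-term half of the error correction strategy described later.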
Since xt−1 is yield by sampling from p(xt−1|xt, ˆx0|t), the corrupted prediction for ˆx0|t makes the reverse sampling process converges slowly. The observation indicates that decreasing the estimation errors from the initialization would improve image reconstruction with data consistency. To satisfy the data consistency from the initial phase, we simply modify Eq. (15) to reconstruct the diffusion posterior distribution p(x0|y) by p (x0 |y ) = Z pθ (xT ′ |y ) T ′ Y t=1 p(t) θ (xt−1 |xt, y ) dx1:T ′, (16) where T ′ < T represents the starting timestep. Parameters in the main diffusion process are defined as the noise Schedule1. The reverse transition distribution pθ(xt−1|xt, y) is described in the next part. Now the goal turns to design the transition distribution of pθ (xT ′|y). Inspired by (Chung, Sim, and Ye 2022a; Wang et al. 2023; Yue and Loy 2022), rather than applying the initial randomization, we start with more accurate initial prediction. The transition distribution pθ (xT ′|y) is formulated as a Gaussian distribution: pθ (xT ′|y) = N(xT ′; √αT ′xinit, (1 −αT ′)I). (17) Via the reparameterization trick, we have the forward diffusion process represented as xinit = f(y; ψ) xT ′ = √αT ′xinit + √ 1 −αT ′ · ϵ, ϵ ∼N(0, I), (18) where f(·; ψ) is a pre-trained image restoration network (like Real-ESRNet (Wang et al. 2021), MPRNet (Zamir et al. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 758 Algorithm 1: DiffECC Require: T, T ′, A(·), y, f(·; ψ), UNet(·), tcond, ζ, K, krestart, kskip 1: xinit = f(y; ψ) ▷Obtain a good initial prediction via the pretrained CNN-based network 2: Respace the intervals of the diffusion process in T, and obtain the parameters in this schedule as noise Schedule1. 
3: ϵ ∼N(0, I) ▷Sample noise 4: xT ′ = √αT ′xinit + √1 −αT ′ · ϵ ▷Starting state at time T ′ 5: for t = T ′ to tcond do 6: ϵt θ = UNet(xt, t) ▷Noise estimation 7: ϵt θ = ϵt θ + arg min ∆ d(ϵt+1 θ , (ϵt θ + ∆))  ▷Error correction for the noise estimation ϵt θ at time t 8: ˆx0|t = 1 √αt (xt −√1 −αt · ϵt θ) ▷Calculate ˆx0|t from the estimated noise 9: ˆx′ 0|t = ˆx0|t − ζ σ2 y · ∇xt y −A ˆx0|t (xt)  2 2 ▷Error correction for the prediction of ˆx0|t at time t 10: ˆϵt θ = 1 √1−αt (xt −√αt · ˆx′ 0|t) ▷Calculate ˆϵt θ from the predicted reconstruction result 11: xt−1 = √¯αt−1ˆx′ 0|t + √1 −αt−1 · ˆϵt θ ▷One deterministic step of reverse diffusion sampling 12: if t = krestart then 13: Respace the intervals of the diffusion process in K, and obtain the parameters in this schedule as noise Schedule2. 14: ˆxr 0 = ˆx′ 0 15: kmax = K −kskip 16: xkmax = √αkmax · ˆxr 0 + √1 −αkmax · ϵ ▷Starting state for restart operation 17: for k = kskip−1, ..., 0 do 18: ϵk θ = UNet(xk, k) 19: ˆx0|k = 1 √αk (xk −√1 −αk · ϵk θ) 20: xk−1 = √¯αk−1ˆx0|k + √1 −αk−1 · ϵk θ ▷One deterministic sampling step in the restart sampling process 21: end for 22: xt−1 = √αt−1 · ˆx0 + √1 −αt−1 · ϵ 23: end if 24: end for 25: for t = tcond −1 to 0 do 26: ϵt θ = UNet(xt, t) ▷Noise estimation 27: ˆx0|t = 1 √αt (xt −√1 −αt · ϵt θ) ▷Predicted ˆx0|t 28: xt−1 = √¯αt−1ˆx0|t + √1 −αt−1 · ϵt θ ▷One deterministic step of reverse diffusion sampling 29: end for 2021), MIRNet (Zamir et al. 2020, 2022)) with parameter ψ. Instead of employing Real-ESRGAN (GAN-based), we resort to Real-ESRNet (CNN-based trained with MAE loss). The principal intention is to rely on diffusion reverse sampling to synthesize image detail information. In addition, the CNN-based solution is more common and simpler. Restart sampling algorithm. In the DDIM fashion, we can get the final one-step sampling expression as xt−1 = p αt−1ˆx0|t(y)+ q 1 −αt−1 −σ2ηt·ϵθ(xt, t)+σηtϵt. 
(19) In our case, since the noise term σηt may not be strong enough and can cause the discretization error, we set σηt = 0. Instead, we extend the idea in (Xu et al. 2023) and propose the restart sampling operation to amplify the error contraction effects of the noise in a deterministic backward processes to reduce the discretization error simultaneously. In the restart sampling algorithm, the back-and-forth step is performed in a new time interval. We respace the intervals of the diffusion process in K. Parameters in the restart process are defined as the noise Schedule2. The amount of added noise in the restart forward process is larger than the small single-step noise in Eq. (19), thus amplifying the error contraction effect. We set the predicted ˆx′ 0 at time krestart as ˆxr 0, being the input to the restart sampling algorithm. In the restart forward process, a substantial amount of noise is added to transit the ˆxr 0 from k = 0 to k = kskip, ˆxr 0 = ˆx′ 0 kmax = K −kskip xkmax = p αkmax · ˆxr 0 + p 1 −αkmax · ϵ, (20) where kskip represents the number steps to skip during the restart diffusion process. A restart backward process runs the backward ODE. Error Correction Strategy Using the ODE-based sampler, we can derive a general update formula for the conditional diffusion as xt−1 = p αt−1ˆx′ 0|t(y) + p 1 −αt−1 · ˆϵt θ, (21) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 759 where the following forms are iteratively used for ˆx′ 0|t(y) and ˆϵt θ: ˆx′ 0|t =      ˆx0 (xt) −ζ σ2y · ∇xt ∥y −A (ˆx0 (xt))∥2 2 fx0|ϵθ(xt, ϵt θ), where ϵt θ = ϵt θ + ∆, (22) ˆϵθ =      ϵt θ + arg min ∆ d(ϵt+1 θ , (ϵt θ + ∆))  fϵθ|x0(xt, ˆx′ 0|t), where ˆx′ 0|t = ˆx0|t −ζ σ2y · ∇xtl(ˆx0|t), (23) where fx0|ϵθ(·) represents the function of predicting x0 from ϵθ and fϵθ|x0(·) represents the function of predicting ϵθ from x0. Extended the idea in DPS (Chung et al. 2023) and ΠGDM (Song et al. 
2023), data consistency in our method is imposed as

$\hat{x}'_0 \leftarrow \hat{x}_0 - \gamma\,\nabla_{x_t} l(\hat{x}_0(x_t)),$ (24)

where $\nabla_{x_t} l(\hat{x}_0(x_t))$ denotes the gradient computation. Specifically, we use the Jensen approximation from DPS (Chung et al. 2023):

$p(y \mid x_t) \simeq p(y \mid \hat{x}_0).$ (25)

Using the likelihood function in Eq. (12), we get the correcting step under the Gaussian measurement model as

$\nabla_{x_t}\log p(y \mid x_t) \simeq -\dfrac{1}{\sigma_y^2}\,\nabla_{x_t}\left\|y - A(\hat{x}_0(x_t))\right\|_2^2.$ (26)

The update of $\hat{x}'_0$ is calculated as

$\hat{x}'_0 = \hat{x}_0 - \dfrac{\zeta}{\sigma_y^2}\,\nabla_{x_t}\left\|y - A(\hat{x}_0(x_t))\right\|_2^2.$ (27)

An efficient Adam optimizer (Kingma and Ba 2014) is employed to correct the neural estimation. For a clean image, the backward diffusion is expected to reach a fixed point at each time step, with $\epsilon_\theta(x_t, t) = \epsilon$. For image restoration tasks, however, the initial inputs are contaminated with unknown degradations. We therefore design the neural estimation correction by combining the current denoiser output with the previous denoiser output as a regularization term. We form the optimization as

$\epsilon_\theta^t = \epsilon_\theta^t + \arg\min_{\Delta}\, d\!\left(\epsilon_\theta^{t+1},\, \epsilon_\theta^t + \Delta\right),$ (28)

where the $\ell_1$ loss is used for the distance metric $d(\cdot,\cdot)$. Based on the above discussion, we summarize the detailed algorithm of our proposed method, namely DiffECC, in Algorithm 1. The overall framework of our sampling method is shown in Fig. 2.

Experiments
We test our proposed method on image SR, Gaussian deblurring, and motion deblurring. In particular, the forward measurement operator for image SR is bicubic down-sampling. For Gaussian deblurring, the kernel has size 61×61 with a standard deviation of 3.0. Motion deblurring uses a kernel of size 61×61 with an intensity value of 0.5. All tasks can be formulated by convolving the kernels with ground-truth images.

Experimental Setup
Dataset. For vision tasks using face images, we run our experiments on the Flickr Faces High Quality (FFHQ) dataset (Karras, Laine, and Aila 2019).
We sample 1k images for evaluation, which are of size 256×256 pixels. For vision tasks using natural images, we evaluate quantitative results on the ImageNet test dataset (Deng et al. 2009) as (Kawar et al. 2022a), with 1k validation images of size 256×256 pixels. All images are normalized to the range [0, 1]. Problem-specific pre-trained diffusion models for face images and natural images are taken from (Choi et al. 2021) and (Dhariwal and Nichol 2021) respectively. Quantitative metrics. For quantitative comparison, we evaluate different methods with the standard distortion metrics Peak Signal Noise Ratio (PSNR) (dB) and Structural Similarity Index (SSIM) (Wang et al. 2004) (higher is better), as well as widely-used perceptual metrics Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al. 2018) and Frechet Inception Distance (FID) (Heusel et al. 2017) (lower is better). PSNR and SSIM measure the faithfulness of reconstructed images, which is not important but necessary for image restoration tasks. LPIPS measures the perceptual similarity between the generated image and the original high-quality image. FID evaluates the quality and diversity between generated distribution and data distribution. The sampling time is measured by the number of function evaluations (NFE). Experimental Results We perform comparisons with four state-of-the-art methods, including DPS (Chung et al. 2023), denoising diffusion restoration models (DDRM) (Kawar et al. 2022a), DDNM (Wang, Yu, and Zhang 2022) and denoising diffusion models for plug-and-play IR (DiffPIR) (Zhu et al. 2023). The same pre-trained diffusion models, degradation kernels, and validation datasets are employed for all methods in comparisons for fairness. Quantitative evaluations on FFHQ and ImageNet 256×256-1k validation datasets are provided in Table 1 and Table 2 respectively. The qualitative comparisons for image 4× SR with σn = 0.05 are shown in Fig. 3. 
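Of the distortion metrics listed above, PSNR is the simplest to compute directly from the definition; a minimal sketch for images normalized to [0, 1] is shown below (`psnr` is an illustrative name, not a library function used by the paper).

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between two images in [0, max_val]."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform pixel error of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of 20 dB; higher values indicate a closer match to the ground truth.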
The results demonstrate that DiffECC achieves superior performance compared to other methods. We provide extended quantitative and qualitative results with different scaling factors and noise values in the supplementary. Comparison results for real-world image restoration where the forward measurement operator A(·) is unknown are given in the supplementary. Ablation Studies Effects of xinit. For inverse problems, we perform ablation studies analyzing the effectiveness of starting from different initial predictions in the reverse process. First, we take Real-ESRNet and MPRNet on motion deblurring and obtain PSNR scores with 29.27 and 29.64 respectively. Though the predicted error using the CNN model for pre-processing different degradations is contracted less than 1 after transformed to xT ′ after multiplying a factor of √αT ′, the accuracy of the predicted regressed image affects the result The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 760 Figure 3: Visual comparisons of ×4 image SR (σn = 0.05) on FFHQ and ImageNet 256×256-1k validation datasets. FFHQ Super-resolution (×4) Deblur (Gaussian) Deblur (motion) Method NFEs ↓PSNR ↑SSIM ↑LPIPS ↓FID ↓PSNR ↑SSIM ↑LPIPS ↓FID ↓PSNR ↑SSIM ↑LPIPS ↓FID ↓ DiffECC 58 28.47 0.9140 0.1843 24.08 26.23 0.8789 0.2455 27.17 27.06 0.8922 0.2465 23.67 DiffPIR 100 26.73 0.8812 0.2571 25.36 24.85 0.8670 0.2838 28.27 26.98 0.8887 0.2477 24.98 DDRM 100 27.52 0.8758 0.2455 45.84 25.50 0.8427 0.2813 52.10 DPS 1000 24.02 0.8333 0.3034 34.56 25.34 0.8424 0.2581 28.37 21.61 0.7961 0.3266 30.83 Table 1: Quantitative results (PSNR, SSIM, LPIPS, and FID) of solving inverse problems: super-resolution, Gaussian deblur and Motion deblur with σn = 0.05 on FFHQ 256×256-1k validation dataset. Black colors in bold indicate the best scores. 
ImageNet Super-resolution (×4) Deblur (gaussian) Deblur (motion) Method NFEs ↓PSNR ↑SSIM ↑LPIPS ↓FID ↓PSNR ↑SSIM ↑LPIPS ↓FID ↓PSNR ↑SSIM ↑LPIPS ↓FID ↓ DiffECC 58 23.88 0.7815 0.3470 43.04 22.35 0.7667 0.3961 55.14 24.04 0.8045 0.3470 48.32 DiffPIR 100 22.99 0.7045 0.4157 55.45 21.71 0.7246 0.4286 60.7 13.55 0.2877 0.6899 162.06 DDRM 100 22.36 0.7221 0.3869 56.54 22.84 0.7092 0.4290 75.37 DPS 1000 21.07 0.7213 0.4612 67.46 19.76 0.5990 0.4342 65.62 19.18 0.6772 0.4647 66.88 Table 2: Quantitative results (PSNR, SSIM, LPIPS, and FID) of solving inverse problems: super-resolution, Gaussian deblur and Motion deblur with σn = 0.05 on ImageNet 256×256-1k validation dataset. Black colors in bold indicate the best scores. of the diffusion model. Since Real-ESRNet obtains coarse blind restoration results, in general, the output obtained by Real-ESRNet can be used as the initial value. We further perform 4× noisy SR (σn = 0.03) experiment on images. Quantitative comparisons are listed in Table 3. model-1 denotes that x′ T is calculated as T ′ = T = 50, x′ T = √αT ′x0 + √ 1 −αT ′ · ϵ. (29) model-2 denotes that x′ T is calculated as T = 100, T ′ = 50, x′ T = √αT ′x0 + √ 1 −αT ′ · ϵ. (30) model-3 denotes that the initialization is constructed by the DDIM sampling inversion technique. DDIM inversion procedure is the inverted scheduler of DDIM (Song, Meng, and Ermon 2020) scheduler. The reversed ODE process in the limit of small steps is calculated as xt+1 = rαt+1 αt xt+ s 1 αt+1 −1 − r 1 αt −1 ! ·ϵθ(xt, t). (31) We formulate the trajectory from x0 to xT ′, where T = The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 761 Method PSNR ↑ SSIM ↑ LPIPS ↓ FID ↓ model-1 22.38 0.7244 0.3838 49.52 model-2 24.39 0.7937 0.3394 43.48 model-3 24.55 0.8002 0.3983 96.20 model-4 24.68 0.8004 0.3079 35.52 Table 3: Quantitative evaluation of the image with different initialization strategy from ImageNet validation. 
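For the model-3 initialization above, the DDIM inversion of Eq. (31) can be sketched as a single deterministic forward-mapping step. The function name is illustrative; the update is written so that, given an exact noise prediction, $x_t$ is mapped onto the correct marginal at the next (noisier) timestep, consistent with Eq. (8).

```python
import numpy as np

def ddim_inversion_step(x_t, eps_pred, abar_t, abar_next):
    """One reversed-DDIM ODE step (Eq. 31): deterministically map x_t to x_{t+1}.

    abar_next < abar_t, i.e. the trajectory moves toward higher noise levels.
    """
    coeff = np.sqrt(1.0 / abar_next - 1.0) - np.sqrt(1.0 / abar_t - 1.0)
    return np.sqrt(abar_next) * (x_t / np.sqrt(abar_t) + coeff * eps_pred)
```

Iterating this from $x_0$ with $T = 100$, $T' = 50$ produces the model-3 starting point $x_{T'}$ without any added randomness.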
Figure 4: Visualization of restoration results with different initialization process. Method PSNR ↑ SSIM ↑ LPIPS ↓ FID ↓ wo restart 23.97 0.7764 0.3187 37.59 w restart 24.68 0.8004 0.3079 35.52 Table 4: Quantitative evaluation of the restart sampling strategy. 100, T ′ = 50. model-4 indicates that the output of the simple RealESRNet model is used as the initial prediction. The formulation is calculated as: T = 100, T ′ = 50, xT ′ = √αT ′f (y; ψ) + √ 1 −αT ′ϵ. (32) From Table 3 we can see that combining the generally pretrained neural network Real-ESRNet with the diffusion model attributes to an error contraction. Visualization of different starting points are shown in Fig. 4. Effects of restart sampling algorithm. To investigate the effect of restart sampling algorithm, we perform two experiments: without restart sampling algorithm (T’=58, NFEs=58) and with restart sampling algorithm (T’=50, NFEs=58). It is evident from Table 4 that the restart strategy harnesses and enhances the reconstruction ability by providing error contraction effects. To illustrate the effects of the hyperparameters K and kskip in restart strategy, we show the reconstructed images of SR samples in Fig. 5. Hyperparameters are fixed as K = 40, kskip = 32 since the generated images tend to be more stable. Effects of components in error correction strategy. To analyze the impact of the iteratively corrected ˆx′ 0|t and ˆϵθ in error correction strategy, we show in Table 5 how quantitative results change with different operations. Figure 5: Effect of hyperparameters K and kskip. Method PSNR ↑ SSIM ↑ LPIPS ↓ FID ↓ Model-1 23.43 0.7179 0.3615 54.61 Model-2 23.91 0.7763 0.3192 38.24 Model-3 24.68 0.8004 0.3079 35.52 Table 5: Quantitative evaluation of the error correction strategy. In Model-1, the reverse diffusion process is calculated as ˆx′ 0|t = ˆx0 (xt) −ζ σ2y · ∇xt ∥y −A (ˆx0 (xt))∥2 2 xt−1 = p αt−1ˆx′ 0|t + p 1 −αt−1 · ϵt θ. 
(33) In Model-2, the reverse diffusion process is calculated as ˆx′ 0|t = ˆx0 (xt) −ζ σ2y · ∇xt ∥y −A (ˆx0 (xt))∥2 2 ˆϵt θ = fϵθ|x0(xt, ˆx′ 0|t) xt−1 = p αt−1ˆx′ 0|t + p 1 −αt−1 · ˆϵt θ. (34) In Model-3, the reverse diffusion process is calculated as ϵt θ = ϵt θ + arg min ∆ d(ϵt+1 θ , (ϵt θ + ∆))  ˆx0|t = fx0|ϵθ(xt, ϵt θ) ˆx′ 0|t = ˆx0|t −ζ σ2y · ∇xt y −A ˆx0|t (xt)  2 2 ˆϵt θ = fϵθ|x0(xt, ˆx′ 0|t) xt−1 = p αt−1ˆx′ 0|t + p 1 −αt−1 · ˆϵt θ. (35) Conclusion In this paper, we introduce a diffusion model-based sampling technique with error contraction and error correction strategies for image restoration, referred to as DiffECC. Specifically, by integrating existing neural network techniques and interweaving a restart diffusion sampling process, the error contraction method improves the visual quality for inverse problems. In the error correction method, we incorporate the denoiser into optimization algorithms with iterative correction in the backward sampling process. Extensive experimental results highlight the superior performance of DiffECC in comparison to other methods. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 762 Acknowledgments This work was partly supported by the National Natural Science Foundation of China (Nos. 62171251 & 62311530100) and the Special Foundations for the Development of Strategic Emerging Industries of Shenzhen (Nos.JSGG 20211108092812020 & CJGJZD 20210408092804011). References Abu-Hussein, S.; Tirer, T.; and Giryes, R. 2022. ADIR: Adaptive Diffusion for Image Reconstruction. arXiv preprint arXiv:2212.03221. Choi, J.; Kim, S.; Jeong, Y.; Gwon, Y.; and Yoon, S. 2021. Ilvr: Conditioning method for denoising diffusion probabilistic models. arXiv preprint arXiv:2108.02938. Chung, H.; Kim, J.; Mccann, M. T.; Klasky, M. L.; and Ye, J. C. 2023. Diffusion Posterior Sampling for General Noisy Inverse Problems. In The Eleventh International Conference on Learning Representations. Chung, H.; Lee, S.; and Ye, J. C. 2023. 
Fast Diffusion Sampler for Inverse Problems by Geometric Decomposition. arXiv preprint arXiv:2303.05754. Chung, H.; Sim, B.; and Ye, J. C. 2022a. Come-closerdiffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12413–12422. Chung, H.; Sim, B.; and Ye, J. C. 2022b. Improving Diffusion Models for Inverse Problems using Manifold Constraints. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248–255. Ieee. Dhariwal, P.; and Nichol, A. 2021. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34: 8780–8794. Fabian, Z.; Tinaz, B.; and Soltanolkotabi, M. 2023. DiracDiffusion: Denoising and Incremental Reconstruction with Assured Data-Consistency. arXiv preprint arXiv:2303.14353. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33: 6840–6851. Karras, T.; Aittala, M.; Aila, T.; and Laine, S. 2022. Elucidating the design space of diffusion-based generative models. arXiv preprint arXiv:2206.00364. Karras, T.; Laine, S.; and Aila, T. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4401–4410. Kawar, B.; Elad, M.; Ermon, S.; and Song, J. 2022a. Denoising Diffusion Restoration Models. In Advances in Neural Information Processing Systems. Kawar, B.; Song, J.; Ermon, S.; and Elad, M. 2022b. 
Novel Class Discovery in Chest X-rays via Paired Images and Text

Jiaying Zhou1,2, Yang Liu3, Qingchao Chen1,2,4*
1 National Institute of Health Data Science, Peking University, Beijing, China
2 Institute of Medical Technology, Peking University Health Science Center, Beijing, China
3 Wangxuan Institute of Computer Technology, Peking University, Beijing, China
4 National Key Laboratory of General Artificial Intelligence, Beijing, China
{zhoujiaying, qingchao.chen, yangliu}@pku.edu.cn

Abstract

Novel class discovery (NCD) aims to identify new classes that were undefined during the model training phase with the help of knowledge of known classes. Many methods have been proposed and have notably boosted the performance of NCD on natural images. However, no work has been done on discovering new classes from medical images and disease categories, which is crucial for understanding and diagnosing specific diseases. Moreover, most of the existing methods only utilize information from the image modality and use labels as the only supervisory information. In this paper, we propose a multi-modal novel class discovery method based on paired images and text, inspired by the low classification accuracy of chest X-ray images and the relatively higher accuracy of the paired text. Specifically, we first pretrain the image encoder and text encoder with multi-modal contrastive learning on the entire dataset, and then we generate pseudo-labels separately on the image branch and the text branch. We utilize intra-modal consistency to assess the quality of the pseudo-labels and adjust the weights of the pseudo-labels from both branches to generate the ultimate pseudo-labels for training. Experiments on eight subset splits of the MIMIC-CXR-JPG dataset show that our method improves the clustering performance on unlabeled classes by about 10% on average compared to state-of-the-art methods. Code is available at: https://github.com/zzzzzzzzjy/MMNCD-main.
Introduction

The success of deep-learning-based classification methods greatly depends on labeled data. However, it is difficult to gather high-quality labeled data, especially for medical data. Moreover, in real-world scenarios it is almost impossible to collect labeled data for all classes because of missing definitions, vague categories, infinite categories, etc. To address this problem, a new paradigm called novel class discovery (NCD) has been proposed and has gained significant attention due to its potential applications in various domains, such as surveillance, medical image analysis, and anomaly detection. Given a labeled set, the goal of NCD is to discover undefined categories in the unlabeled set, which distinguishes it from semi-supervised learning.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

[Figure 1: (a) Pseudo-label accuracy of different methods and different datasets. (b) Validation set accuracy of unlabeled classes in SET1 under supervision.]

Most of the existing methods start with supervised pretraining on the labeled set, while others adopt self-supervised pretraining on the whole dataset. Two-stage methods then employ learned similarity prediction networks or feature extraction networks to classify the unlabeled set through clustering or (pairwise) pseudo-labeling (Hsu, Lv, and Kira 2017; Hsu et al. 2019; Han, Vedaldi, and Zisserman 2019). Some one-stage methods tune the feature representation while classifying/clustering by using different objective functions for the labeled set and the unlabeled set (Han et al. 2020; Zhong et al. 2021a; Han et al. 2021). Others unify the objective function as a cross-entropy loss by assigning pseudo-labels to unlabeled samples (Fini et al. 2021; Li et al. 2022a; Yang et al. 2022). In this paper, we mainly focus on one-stage methods that unify the objective function through pseudo-labels.
Despite abundant research on the topic of NCD, most of these works are conducted on benchmark datasets of natural images. The usability and effectiveness of NCD methods on large-scale medical image datasets are not yet known. NCD for medical images is of great importance for disease diagnosis and precision medicine because of its ability to uncover new disease types or unknown disease subtypes. In this paper, we focus on NCD in chest X-ray (CXR) images, which are widely used in medical diagnostics for detecting lung diseases, tumors, and other abnormalities. The success of pseudo-label-based one-stage methods relies heavily on the quality of the pseudo-labels (Lee et al. 2013). Existing methods (Fini et al. 2021; Li et al. 2022a) formulate pseudo-label assignment as an optimal transport problem that can be solved by the Sinkhorn-Knopp algorithm (Cuturi 2013). They also improve the reliability of pseudo-labels by constraining the consistency between the predictions of an image and its augmented view. However, the pseudo-label assignment accuracy of these methods on the CXR dataset is much lower than that on natural image datasets with a similar number of classes, as shown in Fig. 1. We speculate that this may be due to the highly similar appearances of CXR images across different anatomical structures and classes. With these pseudo-labels, the mainstream methods do not achieve good results on the CXR dataset. Meanwhile, we find that the classification accuracy of CXR images under supervision is also not very good, at least not as good as text classification, as shown in Fig. 1. Inspired by this, we hypothesize that text can help generate better pseudo-labels in some cases, and we aim to investigate how the descriptive text paired with CXR images can be used to improve performance.
We would like to answer the following research questions: when does one modality provide better pseudo-labels than the other, and how can image NCD performance be boosted by utilizing the advantageous text information during training? Answering these two questions and enabling text-augmented CXR NCD entails three technical challenges, given the heterogeneous cross-modal semantic gap: (1) how to quantify the quality of pseudo-labels computed from the visual and text modalities; (2) how to transfer useful information between the two modalities so that text improves CXR NCD performance; (3) how to handle the unavailability of text at test time in real-world deployments. Constrained by the unavailable text at test time, we propose a two-branch network that encodes the visual and text features separately, instead of multi-modal feature fusion. To quantify the pseudo-label quality of the two modalities, we propose a novel measurement: the consistency between the semantic feature structure and the pseudo-label structure in each modality. The quality of the pseudo-labels in NCD relies on two aspects: (1) how well the abstract features can distinguish the unknown classes; (2) how well the semantic patterns in the local feature embeddings can identify and comprehend the relationship between the known and the novel classes. Pseudo-labels, or features close to them, may represent abstract and categorical information but tend to lose local visual semantic features. Therefore, to bring advantages from both worlds, i.e., the categorical and local semantic features, we propose to utilize their structural consistency for pseudo-label quality identification. By comparing the consistencies of both modalities, we can identify when to transfer information from one modality to the other. To integrate effective information from both modalities and reduce the cross-modal semantic gap, we propose to synthesize pseudo-labels that guide both visual and text NCD.
The synthetic pseudo-labels are a linear combination of the visual and text ones, weighted by the quantified consistency scores. The underlying reason is that visual and text features are heterogeneous: although cross-modal alignment losses are applied (Liang et al. 2022), direct transfer/distillation or feature alignment may result in degraded NCD performance. Our contributions can be summarized as follows:
• We explore NCD in CXR images and propose a method based on paired images and text.
• We introduce intra-modal consistency as a basis for measuring and weighting pseudo-label quality.
• Based on the MIMIC-CXR-JPG dataset, we set up two benchmarks that share the same known classes but have different new classes. We evaluate the proposed method on eight data splits of the two benchmarks and demonstrate significant performance improvements over the state-of-the-art methods.

Related Work

Novel Class Discovery

Novel class discovery (NCD) aims to discover new classes in an unlabeled dataset given different but related labeled classes. Existing methods can be divided into two categories: two-stage methods and one-stage methods. The pioneering works of NCD are two-stage methods, including KCL (Hsu, Lv, and Kira 2017), MCL (Hsu et al. 2019) and DTC (Han, Vedaldi, and Zisserman 2019). KCL and MCL utilize similarity prediction networks to generate pairwise pseudo-labels and leverage clustering models to classify unlabeled data; the two stages adopt different objective functions. DTC first trains a model with supervised learning on the labeled set and then discovers novel visual categories using DEC (Xie, Girshick, and Farhadi 2016). MM/MP (Chi et al. 2021) trains a group of classifiers on the labeled set and fine-tunes the classifiers on the unlabeled set. Compared to two-stage methods, one-stage methods have received more attention in the field recently. One-stage methods use both labeled data and unlabeled data simultaneously at some point in the optimization process.
RS/AutoNovel (Han et al. 2020, 2021) may be the first work among the one-stage methods. It uses pairwise similarity obtained via ranking statistics as supervision to discover novel classes. The follow-up work DualRS (Zhao and Han 2021) extends this method to a two-branch framework focusing on both local and global features. Afterward, NCL (Zhong et al. 2021a) further boosts the performance by leveraging the framework of contrastive learning. In addition, OpenMix (Zhong et al. 2021b) uses MixUp (Zhang et al. 2017) to generate more robust pseudo-labels for the unlabeled data. Other one-stage methods eliminate the use of pairwise pseudo-labels and directly assign pseudo-labels to unlabeled samples. UNO (Fini et al. 2021) may be the first of these works. UNO unifies the training objective by using a multi-view self-labeling strategy to generate pseudo-labels that can be treated homogeneously with ground-truth labels. Based on UNO, IIC (Li et al. 2022a) models both inter-class and intra-class constraints based on the symmetric Kullback-Leibler divergence. ComEx (Yang et al. 2022) focuses on the generalized setting of NCD (GNCD) and classifies the data with two complementary groups of classifiers with global-to-local and local-to-local regularization to strengthen pseudo-labels. Similar to ComEx, some work focuses on generalized class discovery (GCD). Among them, CLIP-GCD is the first work to combine multi-modal (image and text) models in GCD; it proposes a retrieval-based mechanism that leverages CLIP's aligned visual-language representations. Different from the previous works, we propose to solve a novel task: text-augmented NCD in medical image analysis. Our work focuses on quantifying the quality of pseudo-labels from both modalities and integrating effective information from both text and image to boost CXR NCD performance.
We did not adopt the solution of retrieving text annotations for medical images from a text corpus, because cross-modal pre-trained models like CLIP are unavailable for medical images. In addition, medical images exhibit unique challenges for NCD, i.e., high semantic similarity in the anatomical structures. To the best of our knowledge, this is the first work that tackles NCD in medical image analysis using both images and text.

Pseudo-Labeling in Semi-Supervised Learning

Our work is related to the part of semi-supervised learning that involves pseudo-labeling. Among these works, MixMatch (Berthelot et al. 2019b) averages and sharpens the predictions of multiple strongly augmented views as pseudo-labels. ReMixMatch (Berthelot et al. 2019a) proposes to generate the pseudo-labels with weakly augmented views and to align the pseudo-label distribution with the marginal distribution of ground-truth labels. Instead of using all pseudo-labels, FixMatch (Sohn et al. 2020) retains only those with high confidence. SoftMatch (Chen et al. 2023) overcomes the trade-off between quantity and quality of pseudo-labels with a truncated Gaussian weighting function and uniform alignment. Different from these methods, we do not employ data augmentation and rely solely on the images and paired text to generate pseudo-labels. Our contribution lies not only in the quality quantification of pseudo-labels but also in a new strategy that integrates the two modalities' information for joint learning.

Method

Overall

Problem Formulation: Similar to the image-only NCD setting, our training data are split into a labeled set and an unlabeled set. The labeled set D^l = {(v^l_1, t^l_1, y^l_1), ..., (v^l_N, t^l_N, y^l_N)} contains paired images and text (v^l_i, t^l_i) with corresponding labels y^l_i from C^l classes. The unlabeled set D^u = {(v^u_1, t^u_1), ..., (v^u_M, t^u_M)} contains unlabeled paired images and text (v^u_i, t^u_i) from C^u classes, where C^u is known a priori.
The set of C^l labeled classes is disjoint from the set of C^u unlabeled classes. The purpose of NCD is to discover C^u clusters in the unlabeled set. Following UNO (Fini et al. 2021), we formulate this problem as learning a mapping from a sample to the complete label set Y = {1, ..., C^l, C^l + 1, ..., C^l + C^u}. To generalize the method to real-world scenarios, we assume that the text is not available at test time.

Architecture: We propose a method based on paired images and text, using intra-modal structural consistency to generate and adjust pseudo-labels. Our network architecture is shown in Fig. 2; it consists of two branches, an image branch and a text branch. Given a CXR image v, the semantic embedding z_v ∈ R^k is first obtained via the visual encoder E_v and the projection head Proj_v, i.e., z_v = Proj_v(E_v(v)). Then, the two visual classification heads, the labeled head h_v and the unlabeled head g_v, predict their categorical contents (logits), l_{h_v} and l_{g_v}, from the semantic embeddings. Finally, we concatenate the logits from h_v and g_v as l_v = [l_{h_v}, l_{g_v}] and obtain the probability distribution p_v = σ(l_v / τ) via a softmax layer σ, with τ as the temperature parameter. Considering the unavailable text at test time, we utilize a parallel text branch with the same architecture. To be specific, the text semantic embedding z_t ∈ R^k is obtained from the text encoder E_t and projection head Proj_t. The text classification logits are predicted as l_t = [l_{h_t}, l_{g_t}] via the text classification heads h_t (labeled head) and g_t (unlabeled head), and likewise the probability predictions p_t = σ(l_t / τ). Given our setup of CXR NCD using text data, it is essential to quantify the quality of the pseudo-labels from both the visual and text modalities. As it is challenging to compare the heterogeneous feature structures directly, we propose to calculate a consistency index between the semantic feature structure and the pseudo-label structure.
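The per-branch forward pass described above (embedding z, concatenated logits l = [l_h, l_g], temperature-scaled softmax p = σ(l/τ)) can be sketched in a few lines. This is a minimal NumPy sketch with a toy stand-in encoder; all dimensions, weight values, and the τ value are illustrative, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

# Toy stand-ins for one branch: encoder -> projection head -> two heads.
# In the paper the encoder would be ResNet-50 (image) or BioClinicalBERT (text).
W_enc = rng.normal(size=(512, 256))
W_proj = rng.normal(size=(256, 128))   # projection to embedding z in R^k, k = 128
W_h = rng.normal(size=(128, 4))        # labeled head h: C^l = 4 known classes
W_g = rng.normal(size=(128, 3))        # unlabeled head g: C^u = 3 novel classes

def branch_forward(x, tau=0.1):
    z = np.tanh(x @ W_enc) @ W_proj                       # semantic embedding z
    logits = np.concatenate([z @ W_h, z @ W_g], axis=-1)  # l = [l_h, l_g]
    p = softmax(logits / tau)                             # p = sigma(l / tau)
    return z, logits, p

z, logits, p = branch_forward(rng.normal(size=(8, 512)))  # batch of 8 samples
```

The image and text branches would each instantiate this structure with their own encoder; only the concatenated C^l + C^u logits and probabilities are consumed by the pseudo-labeling step.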
A higher consistency score may indicate better pseudo-label quality, as the score combines the capability of distinguishing the unknown classes with the capability of capturing the relationship between the known and the unknown classes. Performing the cross-modal comparison in the space of our proposed consistency alleviates the cross-modal embedding gap. More technical details are given in the following section. Once the quality of the pseudo-labels from both the image and text branches is quantified, we generate synthetic pseudo-labels as supervision for both branches. The synthetic pseudo-labels are obtained by weighting the pseudo-labels from images and text with the calculated consistency scores. This design is able to transfer effective information between the image and text branches by scheduling based on the quantified quality of each modality. Note that this procedure effectively reduces the inter-modal gap as well.

Pseudo-Labeling with Estimated Prior Distribution

Pseudo-labeling is a key step of NCD. Following prior works (Fini et al. 2021; Li et al. 2022a; Yang et al. 2022), we re-formulate the clustering problem as an optimal transport (OT) problem that finds the optimal transportation between the sample distribution and the class distribution. Formally, given the logits L_v of a batch of data with batch size B, we select the logits of all the unlabeled samples, L^u_v = [l^1_{g_v}, ..., l^{B_u}_{g_v}], and our goal is to assign pseudo-labels Ŷ^u_v = [ŷ^1_v, ..., ŷ^{B_u}_v], where the rows of L^u_v represent logits, the rows of Ŷ^u_v represent the pseudo-labels of the unlabeled samples, and B_u is the number of unlabeled samples in the batch.
The problem can be solved by the Sinkhorn-Knopp algorithm as follows:

[Figure 2: Overview of the proposed architecture. We present the image branch in green and the text branch in yellow. We generate pseudo-labels on each branch and use intra-modal consistency weighting to generate the final pseudo-labels (in pink). The cross-entropy (CE) loss is calculated from the logits, the pseudo-labels (PL) and the ground-truth labels (GT). During training, the parameters of both branches are updated simultaneously; at test time, the clustering accuracy is computed only from the pseudo-labels assigned by the image branch.]

Ŷ^u_v = max_{Y^u_v ∈ Γ_v} Tr(Y^u_v L^u_v) + ε H(Y^u_v)   (1)

where ε > 0 is a hyper-parameter, H is the entropy function used to constrain the pseudo-labels, and Tr is the trace operator. Γ_v is the transport polytope defined as:

Γ_v = { Y^u_v ∈ R_+^{C^u × B_u} | Y^u_v 1_{B_u} = p_v, Y^{u⊤}_v 1_{C^u} = (1/B_u) 1_{B_u} }   (2)

A common setting is to assume p_v = (1/C^u) 1_{C^u}, i.e., to distribute the samples uniformly across classes. However, in our CXR NCD scenario, the data are not always uniformly distributed; e.g., the number of samples for rare diseases is always smaller than that for common diseases. Instead of assuming that the unlabeled samples are uniformly distributed across classes, we follow BYOP (Yang et al. 2023) to estimate and iteratively update the novel class distribution prior p_v in the OT procedure and optimization.
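The Sinkhorn-Knopp assignment of Eqs. (1)-(2) amounts to alternately normalizing exp(logits/ε) so that the class marginal matches the prior p_v and the sample marginal is uniform. Below is a minimal NumPy sketch under that reading; the ε value, iteration count, and default uniform prior are illustrative, not the paper's settings.

```python
import numpy as np

def sinkhorn_pseudo_labels(logits, prior=None, eps=0.05, n_iters=3):
    """Project exp(logits/eps) onto the transport polytope of Eq. (2):
    rows (classes) sum to the prior p_v, columns (samples) sum to 1/B.
    Returns soft pseudo-labels, one row per sample."""
    B, C = logits.shape
    Q = np.exp((logits.T - logits.max()) / eps)   # C x B, stabilized
    Q /= Q.sum()
    if prior is None:
        prior = np.full(C, 1.0 / C)               # uniform class prior
    for _ in range(n_iters):
        Q *= (prior / Q.sum(axis=1))[:, None]     # rows -> class marginal p_v
        Q *= ((1.0 / B) / Q.sum(axis=0))[None, :] # columns -> uniform over samples
    return (Q * B).T                              # B x C, each row sums to 1

rng = np.random.default_rng(0)
Y = sinkhorn_pseudo_labels(rng.normal(size=(16, 3)))  # 16 samples, C^u = 3
```

With the BYOP-style prior estimation, `prior` would be replaced by the iteratively updated novel-class distribution instead of the uniform default.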
In the same way as in the image branch, we obtain pseudo-labels Ŷ^u_t = [ŷ^1_t, ..., ŷ^{B_u}_t] in the text branch.

Pseudo-Label Quality Estimation and Synthetic Pseudo-Label Generation

Because of the low accuracy of the pseudo-labels generated by the image branch, discussed in the introduction, we aim to improve pseudo-label quality by leveraging the text information and the interaction between the two branches. A straightforward idea is to take the average of the pseudo-labels of both branches so that all the high-quality pseudo-labels are retained to some extent. However, the unreliable pseudo-labels are then equally retained and transferred, which tends to propagate and accumulate erroneous information and prevents taking advantage of the high-quality pseudo-labels. Therefore, we propose a novel and robust measurement to quantify the pseudo-label quality of both modalities. Leveraging this quantification score, advantageous and complementary information from both modalities can then be scheduled and utilized to boost NCD performance. Before introducing the details of our proposal, it is essential to define what pseudo-label quality means in the NCD problem, especially without ground-truth labels for the novel classes. Different from the semi-supervised problem, in NCD the capability to extend to and explore new classes leverages not only the abstract information (e.g., logits) but also the local visual semantics that are shared between unknown and known categories (Sun et al. 2023). The mainstream theory (Li et al. 2022b) demonstrates that NCD performance depends on how similar/shareable local semantic attributes are across known and novel classes.
Therefore, there are at least two aspects to quantifying the quality of the pseudo-labels: (1) how well the abstract features (logits/pseudo-labels) can distinguish the unknown classes; (2) how well the semantic patterns in the local feature embeddings can identify and comprehend the relationship between the known and the novel classes. To bring advantages from both worlds, i.e., the abstract and the local feature embeddings, we propose to utilize their structural consistency for pseudo-label quality identification. Put simply, we propose the hypothesis that if samples have more similar local semantic attributes/features, the structural similarity of their pseudo-labels should be maintained. Therefore, the higher the consistency between the local semantic structure and the pseudo-label structure, the better the pseudo-label quality, and vice versa. Specifically, let Z^u_v = [z^1_v, ..., z^{B_u}_v]^⊤ ∈ R^{B_u × k} be the embeddings of the unlabeled images in the batch; the local semantic similarity can be calculated as:

Sim_embv = Z^u_v Z^{u⊤}_v   (3)

where Sim^{i,j}_embv = z^i_v · z^j_v represents the similarity between the i-th and j-th image embeddings. Similarly, given the pseudo-labels Ŷ^u_v = [ŷ^1_v, ..., ŷ^{B_u}_v]^⊤ ∈ R^{B_u × C^u} of the unlabeled images, we obtain the similarity matrix of the pseudo-labels:

Sim_plv = Ŷ^u_v Ŷ^{u⊤}_v   (4)

where Sim^{i,j}_plv = ŷ^i_v · ŷ^j_v represents the similarity between the i-th and j-th images' pseudo-labels. For the i-th image, we use the JS-divergence to measure the consistency Con^i_v between the embedding similarity and the pseudo-label similarity:

Con^i_v = max(m, 1 − λ D_JS(Sim^i_plv || Sim^i_embv))   (5)

where m is a threshold that prevents the consistency from being 0. Following the above steps, we can calculate the embedding similarity Sim_embt and pseudo-label similarity Sim_plt of the text modality as well.
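The consistency computation of Eqs. (3)-(5), together with the consistency-weighted combination of the two branches' pseudo-labels described earlier, can be sketched as follows. The row-wise normalization of the similarity rows into distributions before the JS-divergence is our assumption for illustration, and the λ and m values are illustrative hyper-parameters.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two non-negative vectors,
    normalized into distributions (an illustrative choice)."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def intra_modal_consistency(Z, Y, lam=1.0, m=0.1):
    """Per-sample consistency between embedding similarity Z Z^T and
    pseudo-label similarity Y Y^T, as in Eq. (5)."""
    sim_emb = Z @ Z.T
    sim_pl = Y @ Y.T
    # shift embedding-similarity rows to be non-negative before normalizing
    sim_emb = sim_emb - sim_emb.min(axis=1, keepdims=True)
    return np.array([max(m, 1.0 - lam * js_divergence(sim_pl[i], sim_emb[i]))
                     for i in range(len(Z))])

def synthetic_pseudo_labels(y_img, y_txt, con_img, con_txt):
    """Weight the two branches' pseudo-labels by their consistency scores."""
    w = con_img / (con_img + con_txt)
    return w[:, None] * y_img + (1.0 - w)[:, None] * y_txt

rng = np.random.default_rng(0)
Z = rng.normal(size=(6, 8)); Z /= np.linalg.norm(Z, axis=1, keepdims=True)
Yv = rng.random((6, 3)); Yv /= Yv.sum(axis=1, keepdims=True)  # image PLs
Yt = rng.random((6, 3)); Yt /= Yt.sum(axis=1, keepdims=True)  # text PLs
con_v = intra_modal_consistency(Z, Yv)
con_t = intra_modal_consistency(Z, Yt)
pl = synthetic_pseudo_labels(Yv, Yt, con_v, con_t)
```

Because the weights of the two branches sum to one per sample, the synthetic pseudo-labels remain valid distributions whenever both branch outputs are.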
For the i-th text sample, we likewise use the JS-divergence to measure the consistency Con^i_t:

Con^i_t = max(m, 1 − λ D_JS(Sim^i_plt || Sim^i_embt))   (6)

After calculating the intra-modal consistency, we generate synthetic pseudo-labels that guide the learning process of both modalities. Specifically, using the quality indexes as pseudo-label weights, the synthetic pseudo-label of the i-th sample pair can be expressed as:

pl_i = (Con^i_v / (Con^i_v + Con^i_t)) ŷ^i_v + (Con^i_t / (Con^i_v + Con^i_t)) ŷ^i_t   (7)

We train both branches using the synthetic pseudo-labels. Following UNO (Fini et al. 2021), for samples from the labeled set we zero-pad y^l, i.e., y = [y^l, 0_{C^u}]; for samples from the unlabeled set we zero-pad pl^u, i.e., y = [0_{C^l}, pl^u]. Then we can train the whole network using the standard cross-entropy:

L_img = −(1/B) Σ_{b=1}^{B} Σ_{c=1}^{C} y_b(c) log(p^b_v(c))   (8)

L_text = −(1/B) Σ_{b=1}^{B} Σ_{c=1}^{C} y_b(c) log(p^b_t(c))   (9)

L_cls = L_img + L_text   (10)

where C = C^u + C^l, y_b(c) is the c-th element of the label y_b of the b-th sample in a batch, p^b_v(c) is the c-th element of the b-th image's prediction p^b_v, and p^b_t(c) is the c-th element of the b-th text's prediction p^b_t.

Experiment

Experiment Setup

Datasets

MIMIC-CXR-JPG Dataset (Johnson et al. 2019b): This dataset contains 377,110 chest X-ray images from 65,379 patients. Each image is provided with 14 labels derived from two natural language processing tools applied to the corresponding free-text radiology reports. In our experiments, we only investigate images from the frontal view. Based on the relationship between classes and the number of samples per class, 11 classes were selected. We divided these classes into three groups: one group serves as the labeled classes, and the remaining two groups serve as unlabeled classes. The labeled classes are No Finding, Atelectasis, Lung Opacity and Edema. The first group of unlabeled classes are all lung diseases: Consolidation, Pneumonia and Pneumothorax.
The second group of unlabeled classes are diseases that occur in other anatomical structures: Cardiomegaly, Enlarged Cardiomediastinum, Fracture and Pleural Effusion. We refer to the combination of the labeled classes and the first group of unlabeled classes as SET1, and the combination of the labeled classes and the second group of unlabeled classes as SET2. We adjust the number of samples for each class and obtain eight different dataset splits; the details are shown in Table 1.

Chest ImaGenome Dataset (Wu et al. 2021): The Chest ImaGenome dataset is automatically constructed from the MIMIC-CXR dataset (Johnson et al. 2019a). It uses a rule-based text-analysis pipeline to correlate anatomies with various CXR attributes extracted from the text reports. To reduce noise and prevent label leakage, we filter the attributes and formalize each text report into the form "Anatomy-1: Attribute-1, ..., Anatomy-k: Attribute-j", i.e., the j-th attribute of the k-th anatomy. All our experiments are conducted on the formalized text.

Evaluation Metrics: Adhering to the evaluation protocols employed in existing studies (Fini et al. 2021; Li et al. 2022a), our experiments are conducted under both task-aware and task-agnostic protocols. Under the task-aware protocol, we know whether a paired image and text originates from the labeled set or the unlabeled set; under the task-agnostic protocol, this information is unavailable. We use the average clustering accuracy to evaluate the performance of our method on the unlabeled sets. It is defined as:

Cluster Acc = max_{perm ∈ P} (1/N) Σ_{i=1}^{N} 1{y_i = perm(pl_i)}   (11)

where y_i and pl_i represent the ground-truth label and the pseudo-label of sample (v_i, t_i), and P is the set of all permutations. The optimal permutation can be found with the Hungarian algorithm (Kuhn 2005).

Implementation Details: We use ResNet-50 (He et al. 2016) as the image encoder and BioClinicalBERT (Alsentzer et al. 2019) as the text encoder.
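The clustering accuracy of Eq. (11) searches over all mappings from cluster ids to class ids. For the small numbers of novel classes used here, a brute-force search over permutations suffices; the Hungarian algorithm (e.g., `scipy.optimize.linear_sum_assignment` on the confusion matrix) is the scalable alternative mentioned in the text. The helper name and example labels below are illustrative.

```python
import numpy as np
from itertools import permutations

def cluster_accuracy(y_true, y_pred, n_classes):
    """Eq. (11): best accuracy over all permutations mapping predicted
    cluster ids to ground-truth class ids."""
    y_true = np.asarray(y_true)
    best = 0.0
    for perm in permutations(range(n_classes)):
        mapped = np.array([perm[c] for c in y_pred])
        best = max(best, float(np.mean(mapped == y_true)))
    return best

# clusters 0/1/2 are a pure relabeling of classes 1/2/0 -> accuracy 1.0
acc = cluster_accuracy([1, 2, 0, 1], [0, 1, 2, 0], 3)
```

Brute force costs O(C! · N), which is fine for C^u of 3 or 4 as in SET1/SET2 but motivates the Hungarian algorithm for larger label sets.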
We train our model in two stages. First, we fine-tune the encoders following GLoRIA (Huang et al. 2021) on all training data. Then we conduct novel class discovery on our network for 200 epochs. All experiments are conducted with a fixed batch size of 128.

Split   SET1: Labeled  Unlabeled  Sum     SET2: Labeled  Unlabeled  Sum
I       4000           3000       7000    4000           4000       8000
II      12000          3000       15000   12000          4000       16000
III     12000          6000       18000   12000          8200       20200
IV      21000          6000       27000   21000          8200       29200

Table 1: Train-set sizes of the splits used in our benchmark. 6000 means the numbers of samples for the three unlabeled classes of SET1 are 1000/2500/2500, respectively. 8200 means the numbers of samples for the four unlabeled classes of SET2 are 3000/1000/1200/3000, respectively. 21000 means the numbers of samples for the four labeled classes are 6000/3000/6000/6000, respectively. Where unspecified, the number of samples is the same for each class.

Following the literature (Fini et al. 2021; Li et al. 2022a; Yang et al. 2022), we use multi-head clustering and overclustering to boost the clustering performance. The methods under comparison all use the same setup.

Comparison with State-of-the-Arts

We compare our method with current state-of-the-art methods, including AutoNovel (Han et al. 2021), NCL (Zhong et al. 2021a), UNO (Fini et al. 2021), IIC (Li et al. 2022a) and ComEx (Yang et al. 2022), in addition to K-means (McQueen 1967). We also combine our method with the pseudo-label assignment based on novel class distribution prior estimation from BYOP (Yang et al. 2023) for unbalanced data. We report the experimental results under the task-aware and task-agnostic protocols in Table 2 and Table 3, respectively. In Table 2, we report the average clustering accuracy on the unlabeled test sets using the task-aware protocol. As we can see, our method achieves the best results on all data splits, with a substantial performance improvement over the other methods.
The results demonstrate the significant gain from introducing text into NCD on CXR images. Note that when using the novel class distribution prior estimation of BYOP (Yang et al. 2023) to guide pseudo-label assignment, our method shows even better clustering performance, although the imbalance of our data splits is relatively low. We have also tried multi-modal knowledge distillation for NCD (UNO MMKD in Table 2). Specifically, we first train the text encoder and classification head; then the parameters of the text branch are frozen, and the structure of the image branch is kept the same as UNO. We conduct knowledge distillation at the logits end, but the performance improvement is not obvious, and the advantages of the multi-modal data are not exploited. We also report the classification accuracy on the labeled test sets and the average clustering accuracy on the unlabeled test sets of SET1 using the task-agnostic protocol in Table 3. In this setting, our method is significantly better at classifying the labeled sets than the other methods, improving classification performance by about 8% on average across the four splits. Although the performance advantage of our method for unlabeled clustering is not as pronounced, we still achieve about 5% performance improvement on splits II and IV, and the clustering performance of our method is close to optimal on splits I and III. Results on SET2 are placed in the supplementary material due to space limitations.

Analysis

Ablation Study

Intra-modal Consistency Weighting vs. Averaging: In this paper, we propose a novel pseudo-label quality quantification index and use it to weight the pseudo-labels of the two branches into the final pseudo-label. We want the two modalities to learn in an interactive manner and generate better pseudo-labels. A straightforward way to combine the pseudo-labels of the two modalities is to take an average, but we hypothesize that higher-quality pseudo-labels should receive higher weights.
To verify the effectiveness of the proposed intra-modal consistency weighting, we compare three ways of generating the final synthetic pseudo-labels: intra-modal consistency weighting, two-branch pseudo-label averaging, and two-branch logits averaging. We perform ablation experiments on the four splits of SET1 and report the results in Table 4. It can be observed that although averaging over pseudo-labels and averaging over logits also perform well, our method is superior and outperforms them by a margin. This illustrates the importance of intra-modal consistency weighting for pseudo-label generation.

Qualitative Analysis

To better illustrate the effectiveness of our approach, we conducted a qualitative analysis. Specifically, following the approach used by UNO (Fini et al. 2021) and IIC (Li et al. 2022a), we used t-SNE (Van der Maaten and Hinton 2008) to visualize the concatenated logits from the two classification heads of the image branch. The visualization results are shown in Fig. 3. It can be seen that our method exhibits better clustering on the logits compared to UNO and IIC; although mixing is still significant, relatively distinct clusters have emerged.

Discussion

Why not feature concatenation? We assume that the text is not available at test time, so feature concatenation does not apply in our task setup.

Method                        SET1-I  SET1-II  SET1-III  SET1-IV  SET2-I  SET2-II  SET2-III  SET2-IV
K-means (McQueen 1967)        38.9    37.1     39.2      39.8     30.0    29.8     32.8      33.6
AutoNovel (Han et al. 2021)   38.7    36.8     38.3      39.4     31.3    32.0     37.6      37.0
NCL (Zhong et al. 2021a)      39.6    37.1     39.3      40.8     35.2    34.5     36.8      37.4
UNO (Fini et al. 2021)        43.6    37.5     38.9      46.2     35.8    35.1     36.1      35.6
UNO MMKD                      42.4    37.3     41.5      43.7     34.8    36.6     38.0      36.3
IIC (Li et al. 2022a)         43.8    40.1     44.5      45.6     35.7    35.3     39.2      38.6
ComEx (Yang et al. 2022)      42.7    41.4     40.3      41.1     34.8    35.7     40.8      40.4
Ours                          56.9    53.7     55.7      52.6     50.1    48.8     50.4      47.8
Ours+BYOP (Yang et al. 2023)  56.1    58.8     56.5      52.4     50.2    49.8     51.3      50.0

Table 2: Comparison of state-of-the-art methods on eight splits of SET1 and SET2 using the task-aware protocol. Cluster accuracy is reported on the unlabeled test set. The optimal results from 5 runs are reported. UNO MMKD takes the frozen text branch as teacher and the image branch as student and distills the knowledge at the logits end.

Method                    I: Lab  Unlab  All    II: Lab  Unlab  All    III: Lab  Unlab  All    IV: Lab  Unlab  All
UNO (Fini et al. 2021)    38.0    37.5   37.8   39.1     36.6   38.6   40.6      38.9   40.0    43.3     40.2   42.6
IIC (Li et al. 2022a)     39.5    37.6   38.7   40.8     38.5   40.4   41.0      39.7   40.6    42.8     37.3   41.6
ComEx (Yang et al. 2022)  41.0    38.9   40.1   41.8     38.1   41.1   39.7      36.5   38.6    43.1     39.0   42.2
Ours                      46.3    38.7   43.1   50.9     42.4   49.2   43.6      38.3   41.8    55.9     44.2   53.3

Table 3: Comparison with state-of-the-art methods on four splits of SET1 under the task-agnostic protocol. Both the classification accuracy on the labeled test sets ("Lab") and the clustering accuracy on the unlabeled test sets ("Unlab") are reported.

Method       SET1 I  SET1 II  SET1 III  SET1 IV
PL avg.      54.8    53.6     54.8      51.2
Logits avg.  52.7    52.4     51.5      51.8
Weighting    56.9    53.7     55.7      52.6

Table 4: Ablation study on synthetic pseudo-label generation, performed on four splits of SET1. "PL avg." averages the pseudo-labels from the two branches; "Logits avg." averages the unlabeled logits from the two branches to generate pseudo-labels; "Weighting" weights by intra-modal consistency. Results are reported on the unlabeled test set using the task-aware protocol.

Why not swap predictions like UNO (Fini et al. 2021)? While text can be viewed as a form of augmentation, we believe it is fundamentally different from augmented images.
There is an information difference between the image modality and the text modality, so we want to achieve both intra-modal and inter-modal optimization while preserving the supervised information from both modalities.

Conclusion
In this paper, we propose a method for NCD in CXR images based on paired images and text. During pretraining, we perform multi-modal contrastive learning on the training set to mitigate the bias towards labeled classes. In the discovery phase, we generate pseudo-labels on the image branch and the text branch, respectively, and weight the pseudo-labels by intra-modal consistency. In this way, pseudo-labels that combine information from both modalities are used for training on both branches. Through extensive experiments and analysis, we illustrate the effectiveness of our approach. Our method achieves the best performance on novel class discovery in CXR images, and is simple yet effective.

Figure 3: t-SNE visualization for all classes in SET1. (a) UNO, (b) IIC, (c) Ours, (d) index mapping numbers and colors to classes (No Finding, Atelectasis, Edema, Lung Opacity, Consolidation, Pneumonia, Pneumothorax).

Acknowledgements
This work was supported by the grants from the National Natural Science Foundation of China (62201014), Beijing Advanced Discipline Construction Project (BMU2019GJJXK001) and PKU-OPPO Innovation Fund (BO202103).

References
Alsentzer, E.; Murphy, J. R.; Boag, W.; Weng, W.-H.; Jin, D.; Naumann, T.; and McDermott, M. 2019. Publicly available clinical BERT embeddings. arXiv preprint arXiv:1904.03323.
Berthelot, D.; Carlini, N.; Cubuk, E. D.; Kurakin, A.; Sohn, K.; Zhang, H.; and Raffel, C. 2019a. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. arXiv preprint arXiv:1911.09785.
Berthelot, D.; Carlini, N.; Goodfellow, I.; Papernot, N.; Oliver, A.; and Raffel, C. A. 2019b.
Mixmatch: A holistic approach to semi-supervised learning. Advances in neural information processing systems, 32.
Chen, H.; Tao, R.; Fan, Y.; Wang, Y.; Wang, J.; Schiele, B.; Xie, X.; Raj, B.; and Savvides, M. 2023. Softmatch: Addressing the quantity-quality trade-off in semi-supervised learning. arXiv preprint arXiv:2301.10921.
Chi, H.; Liu, F.; Han, B.; Yang, W.; Lan, L.; Liu, T.; Niu, G.; Zhou, M.; and Sugiyama, M. 2021. Meta discovery: Learning to discover novel classes given very limited data. arXiv preprint arXiv:2102.04002.
Cuturi, M. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in neural information processing systems, 26.
Fini, E.; Sangineto, E.; Lathuilière, S.; Zhong, Z.; Nabi, M.; and Ricci, E. 2021. A unified objective for novel class discovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9284–9292.
Han, K.; Rebuffi, S.-A.; Ehrhardt, S.; Vedaldi, A.; and Zisserman, A. 2020. Automatically Discovering and Learning New Visual Categories with Ranking Statistics. In International Conference on Learning Representations (ICLR).
Han, K.; Rebuffi, S.-A.; Ehrhardt, S.; Vedaldi, A.; and Zisserman, A. 2021. AutoNovel: Automatically Discovering and Learning Novel Visual Categories. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
Han, K.; Vedaldi, A.; and Zisserman, A. 2019. Learning to discover novel visual categories via deep transfer clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8401–8409.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
Hsu, Y.-C.; Lv, Z.; and Kira, Z. 2017. Learning to cluster in order to transfer across domains and tasks. arXiv preprint arXiv:1711.10125.
Hsu, Y.-C.; Lv, Z.; Schlosser, J.; Odom, P.; and Kira, Z. 2019. Multi-class classification without multi-class labels.
arXiv preprint arXiv:1901.00544.
Huang, S.-C.; Shen, L.; Lungren, M. P.; and Yeung, S. 2021. Gloria: A multimodal global-local representation learning framework for label-efficient medical image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3942–3951.
Johnson, A. E.; Pollard, T. J.; Berkowitz, S. J.; Greenbaum, N. R.; Lungren, M. P.; Deng, C.-y.; Mark, R. G.; and Horng, S. 2019a. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Scientific data, 6(1): 317.
Johnson, A. E.; Pollard, T. J.; Greenbaum, N. R.; Lungren, M. P.; Deng, C.-y.; Peng, Y.; Lu, Z.; Mark, R. G.; Berkowitz, S. J.; and Horng, S. 2019b. MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs. arXiv preprint arXiv:1901.07042.
Kuhn, H. W. 2005. The Hungarian method for the assignment problem. Naval Research Logistics (NRL), 52(1): 7–21.
Lee, D.-H.; et al. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, volume 3, 896. Atlanta.
Li, W.; Fan, Z.; Huo, J.; and Gao, Y. 2022a. Modeling Inter-Class and Intra-Class Constraints in Novel Class Discovery. arXiv preprint arXiv:2210.03591.
Li, Z.; Otholt, J.; Dai, B.; Meinel, C.; Yang, H.; et al. 2022b. A closer look at novel class discovery from the labeled set. arXiv preprint arXiv:2209.09120.
Liang, V. W.; Zhang, Y.; Kwon, Y.; Yeung, S.; and Zou, J. Y. 2022. Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. Advances in Neural Information Processing Systems, 35: 17612–17625.
McQueen, J. 1967. Some methods for classification and analysis of multivariate observations. In Proc. Fifth Berkeley Symposium on Mathematical Statistics and Probability, 1967, 281–297.
Sohn, K.; Berthelot, D.; Carlini, N.; Zhang, Z.; Zhang, H.; Raffel, C. A.; Cubuk, E. D.; Kurakin, A.; and Li, C.-L. 2020.
Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in neural information processing systems, 33: 596–608.
Sun, Y.; Shi, Z.; Liang, Y.; and Li, Y. 2023. When and How Does Known Class Help Discover Unknown Ones? Provable Understanding Through Spectral Analysis.
Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(11).
Wu, J. T.; Agu, N. N.; Lourentzou, I.; Sharma, A.; Paguio, J. A.; Yao, J. S.; Dee, E. C.; Mitchell, W.; Kashyap, S.; Giovannini, A.; et al. 2021. Chest ImaGenome dataset for clinical reasoning. arXiv preprint arXiv:2108.00316.
Xie, J.; Girshick, R.; and Farhadi, A. 2016. Unsupervised deep embedding for clustering analysis. In International conference on machine learning, 478–487. PMLR.
Yang, M.; Wang, L.; Deng, C.; and Zhang, H. 2023. Bootstrap Your Own Prior: Towards Distribution-Agnostic Novel Class Discovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3459–3468.
Yang, M.; Zhu, Y.; Yu, J.; Wu, A.; and Deng, C. 2022. Divide and Conquer: Compositional Experts for Generalized Novel Class Discovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14268–14277.
Zhang, H.; Cisse, M.; Dauphin, Y. N.; and Lopez-Paz, D. 2017. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412.
Zhao, B.; and Han, K. 2021. Novel visual category discovery with dual ranking statistics and mutual knowledge distillation. Advances in Neural Information Processing Systems, 34: 22982–22994.
Zhong, Z.; Fini, E.; Roy, S.; Luo, Z.; Ricci, E.; and Sebe, N. 2021a. Neighborhood contrastive learning for novel class discovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10867–10875.
Zhong, Z.; Zhu, L.; Luo, Z.; Li, S.; Yang, Y.; and Sebe, N. 2021b.
Openmix: Reviving known knowledge for discovering novel visual categories in an open world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9462–9470.
AMSP-UOD: When Vortex Convolution and Stochastic Perturbation Meet Underwater Object Detection
Jingchun Zhou1*, Zongxin He2*, Kin-Man Lam3, Yudong Wang4, Weishi Zhang1, Chunle Guo5, Chongyi Li5†
1 School of Information Science and Technology, Dalian Maritime University
2 School of Computer Science and Engineering, Huizhou University
3 Department of Electrical and Electronic Engineering, Hong Kong Polytechnic University
4 School of Electrical and Information Engineering, Tianjin University, China
5 VCIP, CS, Nankai University
zhoujingchun03@gmail.com, hikari0608@outlook.com, enkmlam@polyu.edu.hk, yudongwang@tju.edu.cn, teesiv@dlmu.edu.cn, {guochunle, lichongyi}@nankai.edu.cn

Abstract
In this paper, we present a novel Amplitude-Modulated Stochastic Perturbation and Vortex Convolutional Network, AMSP-UOD, designed for underwater object detection. AMSP-UOD specifically addresses the impact of non-ideal imaging factors on detection accuracy in complex underwater environments. To mitigate the influence of noise on object detection performance, we propose AMSP Vortex Convolution (AMSP-VConv) to disrupt the noise distribution, enhance feature extraction capabilities, effectively reduce parameters, and improve network robustness. We design the Feature Association Decoupling Cross Stage Partial (FAD-CSP) module, which strengthens the association of long and short range features, improving the network performance in complex underwater environments. Additionally, our sophisticated post-processing method, based on Non-Maximum Suppression (NMS) with aspect-ratio similarity thresholds, optimizes detection in dense scenes, such as waterweed and schools of fish, improving object detection accuracy. Extensive experiments on the URPC and RUOD datasets demonstrate that our method outperforms existing state-of-the-art methods in terms of accuracy and noise immunity. AMSP-UOD offers an innovative solution with the potential for real-world applications.
Our code is available at: https://github.com/zhoujingchun03/AMSP-UOD.

Introduction
Recently, underwater object detection (UOD) has gained attention in the fields of marine technology, deep-sea exploration, and environmental protection. Precise detection of biological, geological, and man-made structures in deep-sea environments is vital for human society and environmental conservation (Xu et al. 2023) (Zhuang et al. 2022). However, challenges in seawater, such as transparency, color, temperature, and suspended particles, combined with varying marine environments and target object types, reduce the accuracy of object detection.
*These authors contributed equally. †Corresponding author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Due to light absorption and scattering (Zhou et al. 2023b; Zhang et al. 2022), underwater imaging often suffers from quality degradation relative to high-quality images. This impacts the performance of Convolutional Neural Network (CNN)-based object detectors. The key challenges include 1) the lack of underwater object detection datasets, hindering the training of deep learning models; 2) degradation factors, such as light absorption and scattering, leading to low contrast and color distortion (Zhou et al. 2023a) (Guo et al. 2022); 3) the difficulty of extracting rich details from small and clustered underwater objects; and 4) class imbalance, making it challenging for object detectors to learn features for classes with small sample sizes (Fu et al. 2023). To address these challenges, new detectors capable of accurate localization and classification in complex underwater environments are required. This research aims to advance ocean science and deep-sea exploration technology and holds practical value in environmental protection and resource development.
In this paper, we propose the AMSP-UOD network, crafted to tackle non-ideal imaging factors in underwater environments. Utilizing the optical imaging model I = H(J, B, t) + N (where I denotes the observed image, J the raw scene, B the backscatter, and t the transmission map), we discern that underwater images combine a degradation function H with noise N. To remove noise, we propose the AMSP-VConv. This strategy not only reduces parameters but also bolsters the network's robustness. We further implement the FAD-CSP to improve feature extraction in degraded environments. Our post-processing strategy, which relies on NMS, is designed to optimize the detection of dense clusters of underwater objects. Experimental results on the URPC (Liu et al. 2021) and RUOD (Fu et al. 2023) datasets showcase the effectiveness of our method. Overall, AMSP-UOD presents an innovative solution for UOD with potential real-world applications.
The main contributions of this paper are as follows:
(1) We propose a novel single-stage UOD network. In the backbone, we design the AMSP-VConv to address the impact of noise and other degradations in underwater object detection. In the neck, the FAD-CSP boosts long- and short-distance feature connection, enhancing performance in complex underwater environments. Furthermore, an NMS-based post-processing method is introduced to enhance the detection performance of the network in complex underwater scenarios like dense waterweed clusters and fish schools.
(2) Our AMSP strategy refines the network through parameter adjustments, enhancing detection by distinguishing between ideal and non-ideal imaging factors.
(3) Experimental results on public datasets and UOD competition datasets reveal that our method outperforms state-of-the-art UOD techniques in terms of both detection accuracy and speed.
Ablation studies demonstrate that AMSP-VConv possesses superior noise resistance and interpretability, offering a novel solution for noise processing in detection tasks and computer vision.

Related Work
The UOD task focuses on detecting objects in underwater images. Deep learning has significantly advanced information fusion (Ma et al. 2023), image enhancement (Liu et al. 2023), and object detection (Chen et al. 2022). In many cases, these methods outperform traditional approaches in terms of speed and accuracy (Liu et al. 2016; Ren et al. 2015; Redmon et al. 2016). However, underwater environments introduce image degradations due to factors like light attenuation. Underwater robots also need efficient algorithms due to limited resources. Existing UOD techniques are either anchor-based or anchor-free, with variations in their approach.

Anchor-Based Methods
Single-Stage Methods: These methods predict the object's location and type directly, ensuring faster performance. Examples include SSD (Liu et al. 2016), which leverages feature pyramids for multi-scale perception, RetinaNet (Lin et al. 2017), which uses Focal Loss for sample weight adjustment, and NAS-FPN (Ghiasi et al. 2019), which refines feature pyramid network structures. While efficient, they can struggle with precise object boundary localization in UOD tasks, especially in challenging conditions or with limited data. Data augmentation is often used to enhance generalization.
Multi-Stage Methods: These techniques split detection into two stages: region proposal, followed by object classification and bounding box prediction. Examples include Faster R-CNN (Ren et al. 2015), Cascade R-CNN (Cai and Vasconcelos 2018), DetectoRS (Qiao, Chen, and Yuille 2020), and Dynamic R-CNN (Zhang et al. 2020a). They enhance accuracy using cascaded detectors, novel pyramid networks, and balanced learning (Ren et al. 2015; Cai and Vasconcelos 2018; Qiao, Chen, and Yuille 2020; Zhang et al. 2020a).
However, their computational demands pose challenges for on-the-go applications.

Anchor-Free Methods
Key-Point Based Methods: These techniques use key-points, either predefined or self-learned, for detection, offering finer object boundary detail. Examples are RepPoints (Yang et al. 2019) for learning object-related features, Grid (Tian et al. 2019) for grid-guided detection, and CenterNet (Zhou, Koltun, and Krähenbühl 2020) and ExtremeNet (Zhou, Zhuo, and Krähenbühl 2019), which use multiple key-points. While effective in general object detection, their application in UOD is challenging due to limited underwater datasets (Fu et al. 2023), manual annotations, and computational demands conflicting with UOD's typical scenarios.
Center-Point Based Methods: These methods focus on predicting object center points, ideal for dense and fast detections. Notably, YOLO (Redmon et al. 2016) approaches detection as a single regression task, optimizing dense object detection. Enhancements include per-pixel prediction and feature abstraction (Redmon et al. 2016; Tian et al. 2019; Zhu, He, and Savvides 2019; Kong et al. 2020; Liu et al. 2019). However, their scalability for various object sizes is limited, and they may not excel in tasks needing precise boundary localization, like specific underwater robot operations.

Proposed AMSP-UOD Network
The underwater environment is marked by complexity due to various regular and irregular degradation factors, including marine biological activity, human activity, and current movement (Chou et al. 2021). These factors create unpredictable noise patterns, posing challenges to models attempting to perceive and model underwater degradation scenes. Underwater noise is complex compared to typical noise conditions (Li et al. 2019) and requires a higher parameter count to denoise, but this increases the risk of overfitting. Instead of focusing on modeling noise, we propose a novel UOD network, namely AMSP-UOD (in Figure 1).
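The imaging model I = H(J, B, t) + N from the Introduction can be made concrete for synthesizing degraded inputs. The paper does not fix the form of H; the scattering-style choice H(J, B, t) = J·t + B·(1 − t) below is a common assumption, and `degrade` is an illustrative helper only, not the authors' code.

```python
import numpy as np

def degrade(J, B, t, sigma, rng=None):
    """Synthesize an observed underwater image I = H(J, B, t) + N.

    Assumes the scattering form H(J, B, t) = J * t + B * (1 - t):
    J:     clean scene radiance in [0, 1], shape [H, W, 3]
    B:     per-channel backscatter (ambient light), shape [3]
    t:     transmission map in [0, 1], shape [H, W, 1]
    sigma: standard deviation of the additive Gaussian noise N
    """
    rng = np.random.default_rng(0) if rng is None else rng
    I = J * t + B * (1.0 - t) + rng.normal(0.0, sigma, J.shape)
    return np.clip(I, 0.0, 1.0)
```

With t near 1 and sigma near 0 the observation reduces to the clean scene; with t near 0 it collapses to pure backscatter, the two extremes a detector must cope with.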
Our approach aims to disrupt noise and reduce parameters, focusing on extracting ideal features rather than increasing the burden of noise analysis. Unlike previous methods that struggled with complex scenarios, AMSP-UOD is designed to better adapt to regular underwater scenes.

Anti-Noise Capability of AMSP and VConv
Convolution and its variants (Chollet 2017) (Han et al. 2020) are crucial for feature extraction but often struggle under noise interference or in complex scenarios. The challenge lies in distinguishing between background features and target object features, limiting detection accuracy. To address these issues, we design a novel AMSP-VConv to mitigate noise interference, enhancing the network's adaptability in underwater scenarios. Inspired by the vortex phenomenon in turbulent water flows, which disrupts continuity through rapid rotation, AMSP-VConv introduces 'vortices' in the information flow to break the interference caused by noise. This innovation improves the network's ability to differentiate background and target features, enhancing detection in complex underwater environments.
Figure 1: AMSP-UOD network architecture: AMSP-VConv for underwater noise elimination; FAD-PAN for information analysis, FAD-CSP for semantic feature decoupling; NMS-Similar for merging traditional and Soft-NMS for efficient dense scene detection.
Figure 2: AMSP Vortex Convolution. (a) The AMSP-VConv structure, (b) an expanded diagram of VConv, featuring the uniquely designed Shared Conv with BN, complemented by the SiLU activation function.
In Figure 2, we present the complete structure of AMSP-VConv. Starting with an input tensor Fin of size [b, c, h, w] (b: batch size, c: number of channels, h: height, w: width), it is processed by the combination of convolution, batch normalization, and the SiLU activation function (CBS) structure.
This structure is designed to capture latent associations. Using a kernel size of 3 and a stride of 1, it yields an output tensor X of size [b, c//2, h, w]. The transformation can be expressed as follows:
\[ X = \mathrm{CBS}(F_{in}) = \delta(\mathrm{BatchNormal}(\mathrm{Conv}(F_{in}))) \tag{1} \]
where δ represents the SiLU activation function. As illustrated in Equations (2) and (3), we introduce the Amplitude Modulation and Shuffling Perturbation (AMSP) strategy in the subsequent steps. This strategy infuses random perturbations into the original grouped structure of associated features within X, thereby disrupting the association between noise and regular features. It is crucial to highlight that, while the AMSP strategy introduces these perturbations, it does not annihilate the features. Instead, it preserves a majority of the feature associations and induces a random shuffling among channels. This mechanism effectively severs the connection between noise samples and regular features, especially in the higher-level channels.
\[ T = \mathrm{AM}_t(X) = \begin{bmatrix} c_1 & c_2 & \cdots & c_t \\ c_{t+1} & c_{t+2} & \cdots & c_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ c_{kt+1} & c_{kt+2} & \cdots & c_{kt+t} \end{bmatrix} \tag{2} \]
\[ Y = \mathrm{SP}_t(T) = \begin{bmatrix} c_{a_0 t+1} & c_{a_0 t+2} & \cdots & c_{a_0 t+t} \\ c_{a_1 t+1} & c_{a_1 t+2} & \cdots & c_{a_1 t+t} \\ \vdots & \vdots & \ddots & \vdots \\ c_{a_k t+1} & c_{a_k t+2} & \cdots & c_{a_k t+t} \end{bmatrix} \tag{3} \]
\[ \{a_0, a_1, \ldots, a_k\} = \{0, 1, \ldots, k\} \tag{4} \]
As depicted in Equations (2) and (3), the process involves two primary operations: Amplitude Modulation (AM) and Shuffling Perturbation (SP). AM maps the information to higher dimensions, while SP perturbs these features. Here, we divide the channels into k + 1 groups of t channels each, c_i denotes the i-th channel, and Y is the output of the AMSP, which is aligned with the dimensions of the intermediate variable T.
\[ Z = \mathrm{Concat}(\mathrm{VConv}(Y)) \tag{5} \]
\[ Z' = \delta(\mathrm{BatchNormal}(Z)) \tag{6} \]
The VConv processes the reconstructed result Z to optimize the extracted features.
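The grouping and shuffling of Eqs. (2)–(4) amount to a grouped channel permutation. The NumPy sketch below is an assumption-level illustration of that operation (the actual module operates on PyTorch tensors inside AMSP-VConv):

```python
import numpy as np

def amsp(x, t, rng=None):
    """Amplitude Modulation and Shuffling Perturbation (Eqs. 2-4, sketch).

    x: feature tensor [b, c, h, w]; c must be divisible by the group size t.
    Channels are viewed as c//t groups of t channels (AM), and the group
    order is randomly permuted (SP): features are reordered, never destroyed.
    """
    rng = np.random.default_rng() if rng is None else rng
    b, c, h, w = x.shape
    groups = x.reshape(b, c // t, t, h, w)   # AM: [b, k+1, t, h, w]
    perm = rng.permutation(c // t)           # SP: shuffle the group order
    return groups[:, perm].reshape(b, c, h, w)
```

Because only the order of the channel groups changes, the multiset of feature values is preserved, matching the claim that AMSP perturbs associations without annihilating the features.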
Drawing a parallel with the ideal state of water vortices, vortex convolution comprises multiple spiral lines (group convolutions) with a fixed spacing (shared convolution parameters). These group convolutions capture and extract features according to global and local imaging rules, removing isolated noise.
\[ F_{out} = \mathrm{Concat}(X, Z') \tag{7} \]
Figure 3: FAD-CSP structure. The FAD-CSP module is built on a cross-stage network, comprising an efficient Global Feature-Aware (GFA) module and a local decoupling-focused RepBottleneck. The essence of FAD-CSP lies in creating an efficient decoupling network through the interaction of long and short distance features.
To ensure the integrity of the correct feature semantic information, we employ residual connections to concatenate the original associated features X and Z' to obtain the final output Fout, with shape [b, c, h, w]. This method can better adapt to feature attenuation and noise effects, thereby acquiring complete and correct features of the ideal degradation scenario under the guidance of the gradient optimizer.

Feature Association Decoupling CSP
In order to extract features at different distances for enhancing adaptability to underwater environments, we introduce a feature association and decoupling module based on a cross-stage network (FAD-CSP). This module is designed to incorporate the novel global feature-aware approach to extract long-range global features, while utilizing the optimized RepBottleneck as a sampling module to capture short-range local features.
Global Feature-Aware Representation: In convolutional operations for global feature processing, a deeper network structure is usually required to extract rich feature information. This often increases the likelihood of the network getting trapped in local optima.
To address this issue, we devised an efficient global feature-aware module and seamlessly integrated it into the FAD-CSP network using an attention mechanism. The structure of the proposed global feature-aware module is depicted in Figure 3. For a given input tensor Fin, we process it through a bar-shaped pooling group, which compresses salient features into a one-dimensional space. This method not only captures longer-distance feature correlations, but also incurs much lower computational overhead compared to convolution. This process can be expressed as follows:
\[ v^h_c = \mathrm{AvgPool}^h(F_{in}) + \mathrm{MaxPool}^h(F_{in}) \tag{8} \]
\[ v^w_c = \mathrm{AvgPool}^w(F_{in}) + \mathrm{MaxPool}^w(F_{in}) \tag{9} \]
Figure 4: (a) The RepBottleneck structure with n=3, and (b) the detailed design of Bottleneck, a residual structure composed of pointwise convolution and depthwise convolution.
The variables v^h and v^w are introduced into the subsequent stage of global feature-aware processing. Utilizing the AMSP strategy, they undergo a random alternation. This procedure generates a distribution map highlighting global prominent features as follows:
\[ y_f = \mathrm{CBS}(\mathrm{AMSP}(\mathrm{Cat}_c([v^h_c, v^w_c]))) \tag{10} \]
\[ y^h_{c/r},\ y^w_{c/r} = \mathrm{split}_c(y_f) \tag{11} \]
where c denotes the channel count of this intermediary value and r is the scaling ratio. The CBS is employed, and Cat_c denotes concatenation by channels, to rebuild the feature relationships y_f, extracting accurate long-distance feature associations.
\[ y^h_c = \mathrm{Conv}(y^h_{c/r}), \quad y^w_c = \mathrm{Conv}(y^w_{c/r}) \tag{12} \]
To capture long-range features from the attention map, we incorporate a weighted gradient flow. This ensures the retention of valuable information in the output and upholds the consistency within the original feasible solution domain. Specifically, the adaptive weighting stems from redistributing two distinct linear feature sets sourced from independent conventional mappings and harnessing the capabilities of the Sigmoid function.
This method allows for adaptive weighting of the features, according to the disparities in feature importance within specific regions, ensuring a refined adjustment to feature changes across different areas.
\[ A_f = \mathrm{Sigmoid}(y^h_c \times y^w_c) \tag{13} \]
\[ F_{out} = A_f \otimes F_{in} \tag{14} \]
Ultimately, the global attention A_f, obtained by weighting the product of the strip attention maps y^h_c × y^w_c followed by the Sigmoid function, is multiplied with the input Fin. This generates an expanded solution domain Fout, reinterpreted by the global feature perception decoupling module. It provides the network with a richer and optimized feature representation. Introducing the attention map allows the network to better understand and process features from different regions while retaining key information. This aids the network in achieving global optima, enhancing the performance of the UOD task.
RepBottleneck: This is an efficient residual structure, as shown in Figure 4 (b), which uses a combination of depthwise separable convolutions and residual connections to reduce the number of network parameters, aggregating local features within the receptive field. Our RepBottleneck is an optimization of the Bottleneck list, addressing its deficiency in global feature-awareness. RepBottleneck focuses on local representations at a short distance. The proposed RepBottleneck is depicted in Figure 4(a), which interconnects multiple Bottlenecks and ShortCuts to enhance the degree of association between local features. It is expressed as follows:
\[ I_n = \begin{cases} \mathrm{Bottleneck}(I_0), & \text{if } n = 1 \\ \mathrm{Bottleneck}(\mathrm{Cat}(I_{n-1}, I_{n-2})), & \text{if } n \neq 1 \end{cases} \tag{15} \]
where I_n denotes the output of RepBottleneck and I_0 denotes its input. Eventually, FAD-CSP obtains rich local features, abstract global features, and separated primitive features.
As shown in Figure 3, FAD-CSP uses CBS to associate long- and short-distance features related to the target, decoupling irrelevant degraded features and improving detection accuracy.

Non-Maximum Suppression-Similar
In dense underwater environments, two primary challenges in detection are overlapping objects with similar features and overlapping bounding boxes for the same target, leading to inaccuracies in traditional NMS methods. While Soft-NMS (Bodla et al. 2017) retains more boxes, it increases computational time. To overcome these issues, we propose an NMS method based on aspect-ratio similarity, called NMS-Similar. This method combines traditional NMS's speed with Soft-NMS's precision, using a unique aspect-ratio threshold and an optimized greedy strategy. The suppression mechanism for each object is as follows:
\[ S_i = S_i \, e^{-\mathrm{IoU}(M, b_i)^2 / \sigma} \tag{16} \]
\[ L' = \big(\mathrm{IoU}(b_i, L) \le N_t\big) \ \text{and} \ \big(\mathrm{Sim}(M, L) > N_s\big) \tag{17} \]
\[ \mathrm{Sim}(M, b_i) = \frac{\vec{M} \cdot \vec{b}_i}{\lVert \vec{M} \rVert \, \lVert \vec{b}_i \rVert} \tag{18} \]
where S_i is the confidence of the current detection box, M is the box with the highest confidence, Intersection over Union (IoU) measures the overlap between two boxes, N_t is the preset IoU threshold, Sim calculates the aspect-ratio similarity, \vec{M} denotes the width and height of M, N_s is the preset similarity threshold, and σ is a Gaussian weighting function. L and L' represent the remaining and recalculated detection boxes, respectively. Equation (17) adjusts the suppression counts for non-maximum confidence boxes by introducing an aspect-ratio threshold to exclude similar detection boxes. The threshold strategy exploits the fact that detection boxes for the same object at different scales share similar aspect ratios. During the computation, similar detection boxes are precluded in advance, reducing the suppression time in dense scenes while ensuring detection accuracy.

Experimental Results
We elaborate on our experimental setup and comparative analyses.
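Before turning to the experiments, the suppression rule of Eqs. (16)–(18) can be sketched as a greedy loop. This is an interpretation, not the released implementation: `nms_similar` and its default thresholds are illustrative, and boxes are assumed to be (x1, y1, x2, y2) tuples.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def wh_sim(a, b):
    """Eq. (18): cosine similarity of the (w, h) aspect vectors."""
    va = np.array([a[2] - a[0], a[3] - a[1]], dtype=float)
    vb = np.array([b[2] - b[0], b[3] - b[1]], dtype=float)
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

def nms_similar(boxes, scores, Nt=0.5, Ns=0.99, sigma=0.5, conf=0.05):
    """Sketch of NMS-Similar: Soft-NMS decay (Eq. 16) plus hard removal
    of boxes that overlap the current maximum AND share its aspect ratio."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        m = int(np.argmax(scores))
        M, sM = boxes.pop(m), scores.pop(m)
        if sM < conf:
            break
        keep.append((M, sM))
        nxt_b, nxt_s = [], []
        for b, s in zip(boxes, scores):
            o = iou(M, b)
            if o > Nt and wh_sim(M, b) > Ns:
                continue                              # similar duplicate: drop outright
            nxt_b.append(b)
            nxt_s.append(s * np.exp(-o ** 2 / sigma))  # Eq. (16) soft decay
        boxes, scores = nxt_b, nxt_s
    return keep
```

Boxes that both overlap the current maximum M (IoU > Nt) and share its aspect ratio (Sim > Ns) are removed outright instead of being softly decayed, which is how similar duplicates are "precluded in advance".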
Experiments reveal that our approach significantly enhances the network's accuracy and resistance to noise, especially in challenging underwater conditions.

Implementation Details
Our experiments run on an Intel Xeon E5-2650 v4 @ 2.20GHz CPU and an Nvidia Tesla V100-PCIE-16GB GPU, with the Ubuntu 20.04 LTS operating system and a Python 3.10 environment built on Anaconda; the network is implemented in PyTorch 2.0.1. The hyperparameters are shown in Table 1. In addition, unless otherwise specified, the comparison experiments are performed using the traditional NMS method.

Type        Setting   Type        Setting
Image size  640       Weights     None
Batch-size  16        Seeds       0
Optimizer   SGD       LR          0.01
Epochs      300       Early-stop  True

Table 1: Hyperparameter settings.

Evaluation Metrics and Datasets
We adopt AP and AP50 as the primary metrics for model accuracy evaluation, with precision (P) and recall (R) as supplementary indicators. To showcase the generalizability of our network, we trained it on the URPC (Zhanjiang) (Liu et al. 2021) dataset, from the 2020 National Underwater Robotics Professional Competition, and the extensive RUOD dataset. The URPC dataset contains 5,543 training images across five categories, with 1,200 images from its B-list answers serving as the test set. The RUOD dataset (Fu et al. 2023) covers various underwater scenarios and consists of 10 categories. It includes 9,800 training images and 4,200 test images.

Visual Comparisons
Figure 5 visualizes the object detection results of different detection frameworks on the URPC (Zhanjiang) dataset. Many of these frameworks struggle to accurately detect smaller objects, with some even mistakenly identifying the background as a target. The Faster R-CNN (Ren et al. 2015), RetinaNet (Lin et al. 2017), and PAA methods exhibit false positives by detecting kelp as seagrass. In contrast, the YOLO methods (Redmon et al. 2016) miss some objects, failing to detect certain starfish.
Our method excels in detecting smaller objects without any false positives or missed detections.

Quantitative Comparisons

In Table 3, the performance of various versions of AMSP-UOD on the URPC and RUOD datasets is presented. Notably, while our AMSP-VConv version shows slightly reduced stability and precision compared to the Ours-Standard version in balanced scenarios, it showcases enhanced detection capability in more degenerate conditions (URPC).

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7663

Figure 5: Visualization of object detection results of different object detection methods on URPC (Zhanjiang). (a) YOLOv3 (Redmon and Farhadi 2018), (b) YOLOv5s (Jocher 2020), (c) YOLOv6s (Li et al. 2022), (d) YOLOv7-tiny (Wang, Bochkovskiy, and Liao 2023), (e) Faster R-CNN (Girshick 2015), (f) Cascade R-CNN (Cai and Vasconcelos 2018), (g) RetinaNet (Lin et al. 2017), (h) FCOS (Tian et al. 2019), (i) ATSS (Zhang et al. 2020b), (j) TOOD (Feng et al. 2021), (k) PAA (Kim and Lee 2020), (l) Ours-Standard, (m) Ours-AMSP-VConv, (n) Ours-AMSP-VConv + NMS-Similar.

Table 2: Ablation of AMSP-VConv. Time: inference time (per epoch); DSC: Depthwise Separable Conv; GC: Ghost Conv; SW: Shared Weight; VC: AMSP-VConv; P: Precision; R: Recall.

  Baseline        Time  Memory  P      R      mAP@[0.5:0.95]
  DSC             15s   4.61G   0.824  0.510  0.371
  GC              52s   5.10G   0.796  0.637  0.396
  VC (w/o SW)     14s   5.36G   0.730  0.694  0.386
  VC (w/o AMSP)   14s   4.65G   0.833  0.631  0.397
  AMSP-VConv      14s   4.65G   0.845  0.612  0.398

This observation is also substantiated by subsequent ablation studies. We believe this significant improvement can be attributed to the noise-suppression capability of the VConv design combined with the outstanding feature-perception ability of FAD-CSP. Especially in intricate underwater environments, our method adeptly boosts the recognition accuracy of waterweeds, which are treated as a small-sample target, to a remarkable 99.3%.
Furthermore, the integration of the NMS-Similar strategy imparts a clear enhancement in detection rates for the Vortex version. This strategy efficiently curtails false positives and misses, thus ensuring the integrity and accuracy of object detection. In comparison with the series of YOLO models and other leading detection techniques, our method consistently manifests marked superiority on a foundation of high precision. In conclusion, our method exhibits exemplary efficiency and adaptability in UOD, underscoring its profound potential for real-world underwater applications.

Ablation Studies

To verify the impact of the proposed modules on network performance, we conducted a series of ablation experiments.

Ablation of AMSP-VConv: In Table 2, we find that the combination of VConv with the AMSP strategy provides an optimal balance in terms of precision, recall, and mAP, while maintaining reasonable inference time and memory usage. Compared to Depthwise Separable Convolution (DSC) and Ghost Convolution (GC), AMSP-VConv demonstrates superior performance in complex object detection tasks, particularly in intricate scenarios that need to balance multiple performance metrics. The ablation experiments further reveal the importance of shared parameters and the AMSP strategy for enhancing both accuracy and efficiency. Ultimately, the integration of VConv and the AMSP strategy proves its potential in improving object detection tasks, providing robust support for real-world applications.

Underwater scenarios are susceptible to noise interference, and noise robustness is a crucial metric for evaluating UOD methods. With all operations that influence network metrics held equivalent, Gaussian noise was used to simulate the underwater noise environment, creating multiple noise levels (i.e., the original scenario augmented with Gaussian noise of varying standard deviations). We trained our network on the URPC dataset.
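The noise-level construction above can be sketched as follows. The per-level standard-deviation scaling (`base_sigma`) is an assumption for illustration; the paper does not state the exact deviations used per level:

```python
import numpy as np

def add_gaussian_noise(image, level, base_sigma=5.0, rng=None):
    """Simulate underwater sensor noise at a given level: level 0 returns
    the original image; higher levels add zero-mean Gaussian noise whose
    standard deviation grows as level * base_sigma (assumed scaling)."""
    rng = np.random.default_rng(0) if rng is None else rng
    if level == 0:
        return image.copy()
    noise = rng.normal(0.0, level * base_sigma, size=image.shape)
    # clip back to the valid 8-bit pixel range
    noisy = np.clip(image.astype(float) + noise, 0, 255)
    return noisy.astype(np.uint8)
```

Applying this to a clean test set at levels 0-10 yields the evaluation scenarios used in Figure 6.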
As shown in Figure 6, our network's mAP score remains stable under the influence of noise level 4. In contrast, the mAP@0.5 of YOLOv5s, serving as the baseline, decreased by 16.3%. In high-noise scenarios, our AMSP-VConv demonstrates superior noise robustness, while the accuracy of Ours-Standard, which merely replaces the AMSP-VConv module with standard convolutions, aligns closely with that of the baseline. This indicates that AMSP-VConv in the backbone network provides AMSP-UOD with strong noise robustness, validating the effectiveness of AMSP-VConv and offering an excellent solution for denoising in complex underwater scenarios.

Ablation of NMS-Similar: From Table 4, it is evident that NMS-Similar achieves a commendable balance between accuracy and efficiency. Compared to Soft-NMS, NMS-Similar retains similar detection accuracy while significantly reducing computation time. Especially in challenging underwater detection scenarios, where closely located or overlapping objects are frequent, the performance of NMS-Similar stands out, underscoring its immense value in real-world applications.
Table 3: Comparison with existing methods on the URPC and RUOD datasets. Ho: holothurian AP50; Ec: echinus AP50; St: starfish AP50; Sc: scallop AP50; Wa: waterweeds AP50. AP: AP@[0.5:0.05:0.95]; AP50: AP@0.5. Bold with underline marks the highest; underline only marks the second-highest.

  Method                            URPC AP↑  URPC AP50↑  Ho↑   Ec↑   St↑   Sc↑   Wa↑   RUOD AP↑  RUOD AP50↑
  YOLOv3                            29.7      58.9        63.5  83.1  68.1  46.4  33.2  49.1      80.3
  YOLOv5s                           38.6      66.2        67.3  84.7  76.7  57.2  43.0  53.8      81.4
  YOLOv6s                           36.1      62.8        61.4  85.2  68.1  49.0  50.1  60.1      84.9
  YOLOv7-tiny                       35.9      62.2        57.9  84.9  72.3  50.1  66.3  57.9      84.3
  Faster R-CNN                      31.0      59.0        66.9  85.9  72.1  55.4  14.7  49.1      80.3
  Cascade R-CNN                     31.6      59.1        67.1  86.0  71.3  56.2  14.7  53.8      81.4
  RetinaNet                         26.3      51.1        61.3  81.8  66.2  46.2  0.00  48.0      77.8
  FCOS                              29.2      58.1        61.8  83.5  68.8  53.9  22.3  49.1      80.3
  ATSS                              29.0      55.6        64.0  84.8  71.4  55.8  2.20  53.9      82.2
  TOOD                              30.1      56.7        65.0  86.1  72.7  58.3  1.30  55.3      83.1
  PAA                               34.2      62.3        65.1  85.2  70.9  55.9  34.6  53.5      82.2
  Ours (Standard)                   45.0      73.4        69.1  86.6  75.3  53.1  83.0  62.1      85.9
  Ours (AMSP-VConv)                 36.6      74.8        62.9  87.1  72.9  51.6  99.3  61.4      85.3
  Ours (AMSP-VConv + NMS-Similar)   40.1      78.5        67.3  87.5  77.5  60.6  99.5  65.2      86.1

Table 4: Ablation of NMS-Similar

  Baseline      Time(ms)  mAP@0.5  mAP@[0.5:0.95]  AP_echinus
  NMS           14.20     0.748    0.366           0.477
  Soft-NMS      337.3     0.785    0.400           0.509
  NMS-Similar   46.90     0.785    0.401           0.509

Table 5: Ablation of FAD-CSP

  Baseline  P      R      mAP@0.5  mAP@[0.5:0.95]
  a         0.836  0.610  0.675    0.397
  b         0.734  0.625  0.640    0.370
  c         0.720  0.658  0.679    0.377
  d         0.858  0.612  0.681    0.369
  All       0.844  0.681  0.748    0.366

Ablation of FAD-CSP: In Table 5, we evaluate the contributions of the various components of FAD-CSP. Four configurations are tested: (a) without the GFA module; (b) replacing the pooling groups with individual pooling layers; (c) removing the AMSP strategy from the GFA module in FAD-CSP; (d) replacing Repbottleneck with Bottleneck.
Among the tested configurations, the full FAD-CSP method achieves the best results, with the highest mAP of 0.748 and an improved recall of 0.681. This underscores the importance of each component in enhancing detection performance. In particular, removing the GFA module (a) or the AMSP strategy from GFA (c) leads to a decrease in performance, highlighting their critical roles in the framework. Additionally, using Repbottleneck (as opposed to the standard Bottleneck) further bolsters the detection results, emphasizing its effectiveness in the context of the FAD-CSP method.

Figure 6: Noise robustness ablation for AMSP-VConv. (a) and (b) show mAP@0.5 and mAP@0.5:0.95 under varied noise levels. Numbers 0-10 represent noise levels (Gaussian noise of increasing standard deviation); level 0 represents the original underwater scene. Methods are not pre-trained on noisy images. Blue is AMSP-VConv, red is Standard Conv, green is the YOLOv5s model, and purple is depthwise-separable Conv.

Conclusion

In this work, we proposed AMSP-UOD, a novel network for underwater object detection, addressing non-ideal imaging factors in complex underwater environments. With our innovative AMSP Vortex Convolution, we enhance feature extraction and network robustness, while our FAD-CSP module improves performance in intricate underwater scenarios. Our method optimizes detection in object-dense areas and outperforms existing state-of-the-art methods on the URPC and RUOD datasets. The practical evaluations highlight the potential applicability of AMSP-UOD to real-world underwater tasks, making it a promising contribution to UOD.
Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (No. 62301105), the 2022 National Undergraduate Innovation and Entrepreneurship Training Program Project (No. 202210577003), the National Key Research and Development Program of China (No. 2018AAA0100400), the China Postdoctoral Science Foundation (No. 2021M701780), the High Performance Computing Center of Dalian Maritime University, and the Supercomputing Center of Nankai University. We are also sponsored by the CAAI-Huawei MindSpore Open Fund.

References

Bodla, N.; Singh, B.; Chellappa, R.; and Davis, L. S. 2017. Soft-NMS – Improving object detection with one line of code. In 2017 IEEE International Conference on Computer Vision (ICCV), 5562–5570. IEEE.
Cai, Z.; and Vasconcelos, N. 2018. Cascade R-CNN: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6154–6162.
Chen, L.; Zhou, F.; Wang, S.; Dong, J.; Li, N.; Ma, H.; Wang, X.; and Zhou, H. 2022. SWIPENET: Object detection in noisy underwater scenes. Pattern Recognition, 132: 108926.
Chollet, F. 2017. Xception: Deep learning with depthwise separable convolutions. In 2017 IEEE International Conference on Computer Vision (ICCV), 1251–1258. IEEE.
Chou, E.; Southall, B. L.; Robards, M.; and Rosenbaum, H. C. 2021. International policy, recommendations, actions and mitigation efforts of anthropogenic underwater noise. Ocean & Coastal Management, 202: 105427.
Feng, C.; Zhong, Y.; Gao, Y.; Scott, M. R.; and Huang, W. 2021. TOOD: Task-aligned one-stage object detection. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 3490–3499. IEEE Computer Society.
Fu, C.; Liu, R.; Fan, X.; Chen, P.; Fu, H.; Yuan, W.; Zhu, M.; and Luo, Z. 2023. Rethinking general underwater object detection: Datasets, challenges, and solutions. Neurocomputing, 517: 243–256.
Ghiasi, G.; Lin, T.-Y.; Pang, R.; and Le, Q. V. 2019. NAS-FPN: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7036–7045.
Girshick, R. 2015. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, 1440–1448.
Guo, C.; Wu, R.; Jin, X.; Han, L.; Chai, Z.; Zhang, W.; and Li, C. 2022. Underwater Ranker: Learn which is better and how to be better. In AAAI Conference on Artificial Intelligence (AAAI) – Oral.
Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; and Xu, C. 2020. GhostNet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1580–1589.
Jocher, G. 2020. Ultralytics YOLOv5.
Kim, K.; and Lee, H. S. 2020. Probabilistic anchor assignment with IoU prediction for object detection. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXV, 355–371. Springer.
Kong, T.; Sun, F.; Liu, H.; Jiang, Y.; Li, L.; and Shi, J. 2020. FoveaBox: Beyound anchor-based object detection. IEEE Transactions on Image Processing, 29: 7389–7398.
Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; and Tao, D. 2019. An underwater image enhancement benchmark dataset and beyond. IEEE Transactions on Image Processing, 29: 4376–4389.
Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. 2022. YOLOv6: A single-stage object detection framework for industrial applications. arXiv preprint arXiv:2209.02976.
Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, 2980–2988.
Liu, C.; Li, H.; Wang, S.; Zhu, M.; Wang, D.; Fan, X.; and Wang, Z. 2021. A dataset and benchmark of underwater object detection for robot picking. arXiv e-prints, arXiv:2106.05681.
Liu, J.; Wu, G.; Luan, J.; Jiang, Z.; Liu, R.; and Fan, X. 2023. HoLoCo: Holistic and local contrastive learning network for multi-exposure image fusion. Information Fusion, 95: 237–249.
Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; and Berg, A. C. 2016. SSD: Single shot multibox detector. In European Conference on Computer Vision, 21–37. Springer.
Liu, W.; Liao, S.; Ren, W.; Hu, W.; and Yu, Y. 2019. High-level semantic feature detection: A new perspective for pedestrian detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5187–5196.
Ma, L.; Jin, D.; An, N.; Liu, J.; Fan, X.; Luo, Z.; and Liu, R. 2023. Bilevel fast scene adaptation for low-light image enhancement. International Journal of Computer Vision, 1–19.
Qiao, S.; Chen, L.-C.; and Yuille, A. 2020. DetectoRS: Detecting objects with recursive feature pyramid and switchable atrous convolution. In European Conference on Computer Vision, 145–161. Springer.
Redmon, J.; Divvala, S.; Girshick, R.; and Farhadi, A. 2016. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 779–788.
Redmon, J.; and Farhadi, A. 2018. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, 91–99.
Tian, Z.; Shen, C.; Chen, H.; and He, T. 2019. FCOS: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9627–9636.
Wang, C.-Y.; Bochkovskiy, A.; and Liao, H.-Y. M. 2023. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7464–7475.
Xu, S.; Zhang, M.; Song, W.; Mei, H.; He, Q.; and Liotta, A. 2023. A systematic review and analysis of deep learning-based underwater object detection. Neurocomputing, 527: 204–232.
Yang, Z.; Liu, S.; Hu, H.; Wang, L.; and Lin, S. 2019. RepPoints: Point set representation for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9657–9666.
Zhang, H.; Chang, H.; Ma, B.; Wang, N.; and Chen, X. 2020a. Dynamic R-CNN: Towards high quality object detection via dynamic training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6729–6738.
Zhang, S.; Chi, C.; Yao, Y.; Lei, Z.; and Li, S. Z. 2020b. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9759–9768.
Zhang, W.; Zhuang, P.; Sun, H.-H.; Li, G.; Kwong, S.; and Li, C. 2022. Underwater image enhancement via minimal color loss and locally adaptive contrast enhancement. IEEE Transactions on Image Processing, 31: 3997–4010.
Zhou, J.; Li, B.; Zhang, D.; Yuan, J.; Zhang, W.; Cai, Z.; and Shi, J. 2023a. UGIF-Net: An efficient fully guided information flow network for underwater image enhancement. IEEE Transactions on Geoscience and Remote Sensing, 61: 1–17.
Zhou, J.; Liu, Q.; Jiang, Q.; Ren, W.; Lam, K.-M.; and Zhang, W. 2023b. Underwater camera: Improving visual perception via adaptive dark pixel prior and color correction. International Journal of Computer Vision, 1–19.
Zhou, X.; Koltun, V.; and Krähenbühl, P. 2020. Tracking objects as points. In European Conference on Computer Vision, 474–490. Springer.
Zhou, X.; Zhuo, J.; and Krähenbühl, P. 2019. Bottom-up object detection by grouping extreme and center points. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 850–859.
Zhu, C.; He, Y.; and Savvides, M. 2019. Feature selective anchor-free module for single-shot object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 840–849.
Zhuang, P.; Wu, J.; Porikli, F.; and Li, C. 2022. Underwater image enhancement with hyper-laplacian reflectance priors. IEEE Transactions on Image Processing, 31: 5442–5455.
SOGDet: Semantic-Occupancy Guided Multi-View 3D Object Detection

Qiu Zhou1*, Jinming Cao2†*, Hanchao Leng3, Yifang Yin4, Yu Kun3, Roger Zimmermann2
1Independent Researcher  2National University of Singapore  3Xiaomi Car  4Institute for Infocomm Research (I2R), A*STAR, Singapore
{zhouqiulv, jinming.ccao, hanchao.leng}@gmail.com, yin yifang@i2r.a-star.edu.sg, yukun@xiaomi.com, rogerz@comp.nus.edu.sg

Abstract

In the field of autonomous driving, accurate and comprehensive perception of the 3D environment is crucial. Bird's Eye View (BEV) based methods have emerged as a promising solution for 3D object detection using multi-view images as input. However, existing 3D object detection methods often ignore the physical context in the environment, such as sidewalks and vegetation, resulting in sub-optimal performance. In this paper, we propose a novel approach called SOGDet (Semantic-Occupancy Guided Multi-view 3D Object Detection) that leverages a 3D semantic-occupancy branch to improve the accuracy of 3D object detection. In particular, the physical context modeled by semantic occupancy helps the detector perceive scenes in a more holistic view. Our SOGDet is flexible to use and can be seamlessly integrated with most existing BEV-based methods. To evaluate its effectiveness, we apply this approach to several state-of-the-art baselines and conduct extensive experiments on the exclusive nuScenes dataset. Our results show that SOGDet consistently enhances the performance of three baseline methods in terms of nuScenes Detection Score (NDS) and mean Average Precision (mAP). This indicates that the combination of 3D object detection and 3D semantic occupancy leads to a more comprehensive perception of the 3D environment, thereby helping build more robust autonomous driving systems. The codes are available at: https://github.com/zhouqiu/SOGDet.
Introduction

Autonomous driving has become a burgeoning field for both research and industry, with a notable focus on achieving accurate and comprehensive perception of the 3D environment. Recently, Bird's Eye View (BEV) based methods (Huang et al. 2021; Li et al. 2022b,a) have attracted extensive attention in 3D object detection due to their effectiveness in reducing computational costs and footprints. The common paradigm is to take multi-view images as inputs to detect objects, wherein the noticeable work BEVDet (Huang et al. 2021) serves as a strong baseline. BEVDet first extracts image features from multi-view images using a typical backbone network such as ResNet (He et al. 2016).

*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

[Figure panels: (a) 3D Object Detection, (b) 3D Semantic Occupancy Prediction, (c) Hybrid Feature; legend categories: barrier, bicycle, bus, car, vehicle, motorcycle, pedestrian, cone, trailer, truck; drive surface, flat, walk, terrain, manmade, vegetation]

Figure 1: Illustration of 3D object detection and semantic occupancy prediction tasks. On the rightmost legend, the top 10 categories in the blue box are shared for both tasks, and the bottom 6 categories in the green box are exclusively used by semantic occupancy prediction. (a) 3D object detection usually focuses on objects on roads, such as bicycles and cars. In contrast, 3D semantic occupancy prediction (b) concerns more about physical contexts (e.g., sidewalk and vegetation) in the environment. By combining these two (c), we can obtain a more comprehensive perception of the traffic conditions, such as pedestrians and bicycles mainly on the sidewalk and cars and buses co-appearing on the drive surface.

The features are thereafter mapped to the BEV
space with a View Transformer (Philion and Fidler 2020), followed by a convolutional network and a target detection head. Inspired by BEVDet, subsequent studies have integrated additional features into this framework, such as depth supervision (Li et al. 2022a) and temporal modules (Huang and Huang 2022). Despite the significant improvement in localizing and classifying specific objects, i.e., cars and pedestrians, most existing methods (Huang et al. 2021; Huang and Huang 2022; Li et al. 2022b,a) neglect the physical context in the environment. These contexts, such as roads, pavements and vegetation, though out of interest for detection, still offer important cues for perceiving 3D scenes. For example, as shown in Figure 1, cars mostly appear on the drivable surface rather than the sidewalk. To harness such important features for object detection, we notice a recently emerging task – 3D semantic-occupancy prediction (Huang et al. 2023; Li et al. 2023; Wei et al. 2023; Wang et al. 2023), which voxelizes the given image and then performs semantic segmentation of the resulting voxels. This task not only predicts the occupancy status but also identifies the objects within each occupied voxel, thereby enabling the comprehension of physical contexts. As shown in Figure 1, object detection and semantic occupancy prediction focus on dynamic objects and environmental contexts, respectively. Combining the two into the hybrid features in Figure 1(c) provides a more comprehensive description of the scene, such as the poses of cars driving on the drivable surface and the presence of pedestrians on sidewalks or crossings. Motivated by this important observation, we propose a novel approach called SOGDet, which stands for Semantic-Occupancy Guided Multi-view 3D Object Detection. To the best of our knowledge, our method is the first of its kind to employ a 3D semantic-occupancy branch (OC) to enhance 3D object detection (OD).
Specifically, we leverage a BEV representation of the scene to predict not only the pose and type of 3D objects (OD branch) but also the semantic class of the physical context (OC branch). SOGDet is a plug-and-play approach that can be seamlessly integrated with existing BEV-based methods (Huang et al. 2021; Huang and Huang 2022; Li et al. 2022a) for 3D object detection tasks. Moreover, to better facilitate the OD task, we extensively explore two labeling approaches for the OC branch: one predicts the binary occupancy label only, while the other involves the semantics of each class. Based on these two approaches, we train two variants of SOGDet, namely SOGDet-BO and SOGDet-SE. Both variants significantly outperform the baseline method, demonstrating the effectiveness of our proposed method. We conduct extensive experiments on the exclusive nuScenes (Caesar et al. 2020) dataset to evaluate the effectiveness of our proposed method. In particular, we apply SOGDet to several state-of-the-art backbone networks (He et al. 2016; Liu et al. 2021; Cao et al. 2021) and compare it to various commonly used baseline methods (Huang and Huang 2022; Li et al. 2022a). Our experimental results demonstrate that SOGDet consistently improves the performance of all tested backbone networks and baseline methods on the 3D OD task in terms of nuScenes Detection Score (NDS) and mean Average Precision (mAP). On the flip side, our OC approach surprisingly achieves performance comparable to state-of-the-art methods (Huang et al. 2023). This finding is a promising by-product beyond our expectation, as we intentionally kept the OC network simple and devoted little design effort to it. Together, these results highlight the effectiveness of combining 3D OD and OC to achieve comprehensive 3D environment understanding and further enable the development of robust autonomous driving systems.
Related Work

3D Object Detection (OD) constitutes an indispensable component of autonomous driving (Arnold et al. 2019; Chen et al. 2017). Prior monocular methods (Ding et al. 2020; Cai et al. 2020; Kumar, Brazil, and Liu 2021; Reading et al. 2021) predict 3D bounding boxes using single-view images. For example, D4LCN (Ding et al. 2020) uses an estimated depth map to enhance image representation. Cai et al. (2020) used an object-height prior to invert a 2D structured polygon into a 3D cuboid. However, due to the limitation of scarce data and single-view input, such models demonstrate difficulties in tackling more complex tasks (Huang et al. 2021). To overcome this problem, recent studies (Huang et al. 2021; Huang and Huang 2022; Li et al. 2022a) have been devoted to the development of large-scale benchmarks (Caesar et al. 2020; Sun et al. 2020) with multiple camera views. For example, inspired by the success of FCOS (Tian et al. 2019) in 2D detection, FCOS3D (Wang et al. 2021) treats the 3D OD problem as a 2D one. Based on FCOS3D, PGD (Wang et al. 2022a) proposes using a geometric relation graph to facilitate the targets' depth prediction. Benefiting from the DETR (Carion et al. 2020) method, some approaches have also explored the validity of Transformers, such as DETR3D (Wang et al. 2022b) and GraphDETR3D (Chen et al. 2022). Unlike the aforementioned methods, BEVDet (Huang et al. 2021) leverages a Lift-Splat-Shoot (LSS) based (Philion and Fidler 2020) detector to perform 3D OD in multi-view. The framework is explicitly designed to encode features in the BEV space, making it scalable for multi-task learning, multi-sensor fusion and temporal fusion (Huang and Huang 2022). The framework is extensively studied by following work, such as BEVDepth (Li et al. 2022a), which enhances depth prediction by introducing a camera-aware depth network, and BEVFormer (Li et al. 2022b), which extends BEVDet in the spatiotemporal dimension.
Our proposed method also builds upon the BEVDet framework. Specifically, we introduce a semantic occupancy branch to guide the prediction of object detectors, a paradigm that has not been studied by existing efforts.

3D Semantic Occupancy Prediction (OC) has emerged as a popular task in the past two years (Cao and de Charette 2022; Huang et al. 2023; Li et al. 2023; Miao et al. 2023; Wei et al. 2023; Wang et al. 2023). It involves assigning an occupancy probability to each voxel in 3D space. The task offers useful 3D representations for multi-shot scene reconstruction, as it ensures the consistency of multi-shot geometry and helps obscured parts to be recovered (Shi et al. 2023). The existing methods are relatively sparse in the literature. MonoScene (Cao and de Charette 2022) is the pioneering work that uses monocular images to infer dense 3D voxelized semantic scenes. However, simply fusing multi-camera results with cross-camera post-processing often leads to sub-optimal results. VoxFormer (Li et al. 2023) devises a two-stage framework to output full 3D volumetric semantics from 2D images, where the first stage uses a sparse collection of depth-estimated visible and occupied voxels, followed by a densification stage that generates dense 3D voxels from the sparse ones. TPVFormer (Huang et al. 2023) performs end-to-end training by using sparse LiDAR points as supervision, resulting in more accurate occupancy predictions.

Multi-Task Learning has become a common practice for employing perception tasks in the BEV domain. Noteworthy contributions such as BEVFormer (Li et al. 2022b) and BEVerse (Zhang et al. 2022) exemplify this approach by integrating OD and map segmentation to enhance overall perception capabilities. LidarMultiNet (Ye et al. 2023) further extends the paradigm by utilizing OD as an auxiliary task, elevating semantic segmentation performance within the LiDAR context. The adoption of a multi-task framework is gaining prominence due to its ability to exploit the complementary advantages of diverse tasks, surpassing the capabilities of single-task approaches. This trend is increasingly recognized and favored within the industry.

Figure 2: The overall network architecture. Our approach includes an image backbone (yellow) to encode multi-view input images into the vision feature, a view transformer (orange) to transform the vision feature into the BEV feature, and a task stage comprising OD (blue) and OC (green) branches that respectively predict the OD and OC outputs at the same time.

Method

Overall Architecture and Notations

The overall architecture of our proposed method is illustrated in Figure 2, which is composed of three main components: an image backbone, a view transformer, and a task stage that predicts both OC and OD simultaneously. Specifically, the multi-view input images are first encoded by the image backbone, and then aggregated and transformed into the Bird's-Eye-View (BEV) feature by the view transformer. With inherent camera parameters, the view transformer conducts depth-aware multi-view fusion and 4D temporal fusion simultaneously. Thereafter, the task stage generates both OC and OD features, which interact through a modality-fusion module. We finally predict the OD and OC outputs using their respective features. To ensure clarity and consistency throughout our presentation, we first define the following notations, following the order of data flow within our pipeline. $I$ represents an image group with the same height and width from $N$ cameras at the same timestamp. $F_{img} \in \mathbb{R}^{N \times C \times H \times W}$ represents the feature map produced by the image backbone, where $H$, $W$ and $C$ denote the height, width and channels of the feature map, respectively. $F_d \in \mathbb{R}^{N \times D \times H \times W}$ represents the depth estimation of the image group $I$.
$F_{bev} \in \mathbb{R}^{C_{bev} \times X \times Y}$ represents BEV features extracted by the view transformer, where $X \times Y$ and $C_{bev}$ denote the dimensions and the channels of the BEV feature following (Huang and Huang 2022), respectively. $F_{od}$ and $F_{oc}$ represent task-specific intermediate features of the OD and OC branches in the task stage. For the camera parameters, we combine the offset vector and rotation matrix to represent the translation $TR \in \mathbb{R}^{4 \times 4}$ from a source coordinate system to a target coordinate system. For example, $TR^{lid}_{cam}$ means a translation from the camera coordinate system to the lidar coordinate system, and $TR_{in}$ represents the intrinsic parameters of all cameras. For the output, the OD branch has two outputs: bounding boxes $B \in \mathbb{R}^{M \times (3+3+2+2+1)}$ and heatmap $H$, where $M$ is the total number of bounding boxes and the second dimension of $B$ represents location, scale, orientation, velocity and attribute, respectively. $Occ \in \mathbb{R}^{O \times X \times Y \times Z}$ represents the OC branch output, meaning that for the different grids of the voxel grid $V \in \mathbb{R}^{X \times Y \times Z}$, there are $O$ semantic labels in total. We generate the occupancy voxel grid from the point cloud $P \in \mathbb{R}^{K \times 3}$ of $K$ points.

Image Backbone

The image backbone encodes the multi-view input images $I$ into the feature map $F_{img}$. Following previous work (Huang et al. 2021; Huang and Huang 2022), we sequentially concatenate ResNet (He et al. 2016) and FPN (Lin et al. 2017a) as our image backbone to extract the image feature. Moreover, we empirically found that using ShapeConv (Cao et al. 2021) instead of traditional convolutional layers in the image backbone leads to improved accuracy on the OD task without increasing model complexity during inference. In view of this, all ResNet-50 and -100 models in our method and baseline are replaced with ShapeConv for a fair comparison.

View Transformer

The view transformer converts the image feature $F_{img}$ to the BEV feature $F_{bev}$.
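As background for the view transformer, the core "lift" step of LSS can be illustrated in a simplified form: per-pixel image features are combined with a per-pixel depth distribution via an outer product, producing a depth-aware frustum feature that is later splatted onto the BEV plane. The shapes and the softmax over depth bins are assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

def lift(f_img, f_d):
    """Simplified LSS 'lift': combine image features (C, H, W) with
    depth logits (D, H, W) into a frustum feature of shape (C, D, H, W)."""
    # softmax over the depth dimension gives a per-pixel depth distribution
    p = np.exp(f_d - f_d.max(axis=0, keepdims=True))
    p /= p.sum(axis=0, keepdims=True)
    # outer product: each depth bin receives the image feature scaled
    # by that bin's estimated probability
    return f_img[:, None, :, :] * p[None, :, :, :]

# toy check: uniform depth logits spread the feature evenly over D bins
f_img = np.ones((2, 4, 4))   # C=2, H=W=4
f_d = np.zeros((3, 4, 4))    # D=3 depth bins
frustum = lift(f_img, f_d)
```

Because the depth distribution sums to one per pixel, summing the frustum feature over the depth axis recovers the original image feature.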
We implement this module as the combination of BEVDepth (Li et al. 2022a) and BEVDet4D (Huang and Huang 2022) for better performance, namely BEVDet4D-depth, which jointly conducts depth-aware multi-view fusion and 4D temporal fusion based on BEVDepth and BEVDet4D, respectively.

Depth-Aware Multi-View Fusion. Following BEVDepth (Li et al. 2022a), the depth feature $F_d$ is estimated by a depth network from the image feature $F_{img}$ and the camera intrinsics $TR_{in}$:

$$F_d = \mathrm{DepthNet}(F_{img}, TR_{in}). \tag{1}$$

Here, $\mathrm{DepthNet}(\ast, \ast)$ refers to the sub-network introduced in (Li et al. 2022a), which is composed of a series of convolutional layers and MLPs. Then Lift-Splat-Shoot (LSS) (Philion and Fidler 2020) is applied to compute the BEV feature $F_{bev}$:

$$F_{bev} = \mathrm{LSS}(F_{img}, F_d, TR^{lid}_{cam}), \tag{2}$$

where $\mathrm{LSS}(\ast, \ast, \ast)$ is a depth-aware transformation following (Li et al. 2022a) that first lifts the image feature $F_{img}$ and its depth feature $F_d$ into the 3D lidar coordinate system via $TR^{lid}_{cam}$, and then splats the 3D feature onto the 2D BEV plane to obtain $F_{bev}$.

4D Temporal Fusion. Let $F^{curr}_{bev}$ and $F^{adj}_{bev}$ denote the BEV features at the current timestamp and an adjacent timestamp, respectively. We then apply a temporal fusion step following (Huang and Huang 2022) to aggregate $F^{curr}_{bev}$ and $F^{adj}_{bev}$:

$$F_{bev} = \mathrm{Concat}[F^{curr}_{bev}, F^{adj}_{bev}], \tag{3}$$

where $\mathrm{Concat}[\ast, \ast]$ denotes concatenation along the channel dimension.

Task Stage

The task stage consists of two branches that take the BEV feature $F_{bev}$ as input and produce the bounding boxes $B$ and heatmap $H$ for the OD branch and the occupancy output $Occ$ for the OC branch, respectively. On the one hand, the OD branch is our primary task branch, which performs 10-class object detection on car, truck, etc. On the other hand, the OC branch facilitates object detection by generating a 3D geometrical voxel grid around the ego vehicle. To refine the BEV feature $F_{bev}$ in both branches, we first apply a 3-layer ResNet (He et al.
2016) to extract the intermediate features $F_{od}$ and $F_{oc}$ at three resolutions, namely 1/2, 1/4 and 1/8 of the original height and width. A pyramid network (Lin et al. 2017a) is then employed to upsample the features back to the original size. For the OD branch, we use CenterPoint (Yin, Zhou, and Krahenbuhl 2021) to produce the final predicted heatmap $H$ and bounding boxes $B$ from $F_{od}$. For the OC branch, a simple 3D-Conv head (Fang Ming 2023) is used to generate the occupancy voxel grid $Occ$ from $F_{oc}$.

Modality-Fusion Module. The modality-fusion module is essential in our method to enable interaction between the above two branches. We define $G_{C \to D}$ to adapt features from OC to OD, and $G_{D \to C}$ for the reverse direction. We employ a weighted average parameterized by $\lambda$ to fuse features from the two modalities, empirically setting $\lambda = 0.9$:

$$F_{od} = (1 - \lambda) \cdot G_{C \to D}(F_{oc}) + \lambda \cdot F_{od},$$
$$F_{oc} = (1 - \lambda) \cdot G_{D \to C}(F_{od}) + \lambda \cdot F_{oc}. \tag{4}$$

Taking OC to OD as an example, Equation 4 shows that a fraction $1 - \lambda$ of the feature $F_{od}$ in the OD branch is replaced by the adapted feature $G_{C \to D}(F_{oc})$ from the OC branch, where $G_{C \to D}$ serves as a filter that reduces the modality gap between OD and OC. The operation takes effect each time the BEV feature is upsampled within its own branch in the pyramid network (Lin et al. 2017a) mentioned above. We will demonstrate that this strategy effectively recovers information ignored by each original branch and thus fills the modality gap.

(Figure 3: Illustration of the two types of labels: (a) occupancy coarse labeling and (b) semantic fine labeling.)

Occupancy Label Generation

We leverage two types of supervision signals for the OC branch. One is the binary occupancy label $BO$, whose supervision is binary, with 0 and 1 representing empty and occupied voxels, respectively. The other is the semantic label $SE$, containing 16 semantic labels such as barrier, bicycle, etc. Figure 3 illustrates the two types of labels.
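As a concrete illustration, the weighted fusion of Equation 4 above can be sketched in a few lines; the adapter callables standing in for $G_{C \to D}$ and $G_{D \to C}$ are hypothetical placeholders, since the paper does not specify their exact form:

```python
import numpy as np

def modality_fusion(f_od, f_oc, g_c2d, g_d2c, lam=0.9):
    """Weighted-average fusion between OD and OC BEV features (Eq. 4).

    f_od, f_oc   : (C, X, Y) task-specific BEV features.
    g_c2d, g_d2c : callables adapting one modality to the other
                   (stand-ins for the learned filters G_{C->D}, G_{D->C}).
    Both outputs are computed from the original inputs, as in Eq. 4.
    """
    fused_od = (1.0 - lam) * g_c2d(f_oc) + lam * f_od
    fused_oc = (1.0 - lam) * g_d2c(f_od) + lam * f_oc
    return fused_od, fused_oc

# Toy usage: identity adapters on random 8-channel 16x16 BEV features.
rng = np.random.default_rng(0)
f_od = rng.standard_normal((8, 16, 16))
f_oc = rng.standard_normal((8, 16, 16))
identity = lambda x: x
out_od, out_oc = modality_fusion(f_od, f_oc, identity, identity, lam=0.9)
```

With $\lambda = 0.9$, each branch keeps 90% of its own feature and injects 10% of the other branch's adapted feature, which matches the paper's intent of a gentle cross-modality correction.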
To generate the binary occupancy labels, we consider only the geometric features of each voxel; the procedure is given in Algorithm 1. This approach is cost-friendly and requires no extra manual annotations. For the semantic labels, we observe that directly using the sparse semantic occupancy points as ground truth leads to unstable training. Therefore, we follow TPVFormer (Huang et al. 2023) to optimize the supervision voxel generation, where voxels without semantic labels are masked and ignored.

Algorithm 1: Binary occupancy label generation
  Data: point cloud P; dimension bounds Xmin, Xmax, Ymin, Ymax, Zmin, Zmax; resolutions RX, RY, RZ
  Result: voxel grid V
  /* Transform point positions into grid indices */
  for p in P:
      (pX, pY, pZ) <- p
      for axis in {X, Y, Z}:
          if axis_min <= p_axis <= axis_max:
              p_axis <- (p_axis - axis_min) / R_axis
          else:
              P <- P - {p}   /* delete out-of-bound point */
              break
  /* Compute the size of the output voxel grid */
  X <- (Xmax - Xmin) / RX;  Y <- (Ymax - Ymin) / RY;  Z <- (Zmax - Zmin) / RZ
  build V in R^{X x Y x Z}
  /* Fill voxels */
  for v in V:
      v <- 1 if index(v) in P else 0

Training Objectives

Losses of OD Branch. We adopt the CenterPoint head (Yin, Zhou, and Krahenbuhl 2021) to produce the final OD bounding box prediction, on which a Gaussian focal loss (Lin et al. 2017b) and an L1 loss are jointly computed. In the following, we elaborate these two loss functions in turn. The Gaussian focal loss emphasizes the overall difference between predicted and actual values across the entire plane. $H$ denotes the heatmap output by the OD branch, which is a probability matrix recording the likelihood of each pixel belonging to any of the 10 classes. We then embed the real annotations into a 2D image of the same size as $H$, forming the ground-truth heatmap $\hat{H}$, namely a one-hot matrix.
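Algorithm 1 above amounts to a standard point-cloud voxelization. A minimal NumPy sketch (with our own function and variable names, not the paper's) could look like:

```python
import numpy as np

def binary_occupancy_labels(points, bounds, resolution):
    """Voxelize a point cloud into a binary occupancy grid (Algorithm 1).

    points     : (K, 3) array of lidar points.
    bounds     : ((xmin, xmax), (ymin, ymax), (zmin, zmax)).
    resolution : (rx, ry, rz) voxel sizes along each axis.
    """
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    res = np.array(resolution, dtype=float)

    # Drop out-of-bound points (the "delete out of bound" step).
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    pts = points[inside]

    # Transform point positions into integer grid indices.
    idx = np.floor((pts - lo) / res).astype(int)

    # Compute the grid size, build it, and mark occupied cells.
    shape = np.ceil((hi - lo) / res).astype(int)
    idx = np.minimum(idx, shape - 1)  # points exactly on the upper bound
    grid = np.zeros(shape, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Toy usage: two in-bound points and one out-of-bound point.
pts = np.array([[0.5, 0.5, 0.5], [3.9, 3.9, 3.9], [10.0, 0.0, 0.0]])
grid = binary_occupancy_labels(pts, ((0, 4), (0, 4), (0, 4)), (1, 1, 1))
```

The scatter assignment marks a voxel occupied if any point falls inside it, which mirrors the "fill voxels" loop of Algorithm 1 without an explicit per-voxel membership test.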
The Gaussian focal loss is then computed as

$$L_G = -\lfloor \hat{H} \rfloor \log(H)(1 - H)^{\alpha} - (1 - \hat{H})^{\gamma} \log(1 - H) H^{\alpha}, \tag{5}$$

where $\lfloor \ast \rfloor$ denotes the floor operation, and $\alpha = 2.0$ and $\gamma = 4.0$ are intensity parameters following (Lin et al. 2017b). The L1 loss is employed to optimize the bounding box statistics, i.e., location, scale, orientation, velocity and attribute, from a micro perspective. To this end, we measure the L1 distance between the predicted bounding boxes $B$ and their ground truth $\hat{B}$ as

$$L_1 = \frac{1}{M} \sum_{m}^{M} |B_m - \hat{B}_m|. \tag{6}$$

The total loss of the OD branch is then

$$L_{OD} = L_G + \mu_{od} L_1, \tag{7}$$

where $\mu_{od} = 0.25$ is the weight coefficient of the OD branch.

Losses of OC Branch. Following (Huang et al. 2023), the OC branch combines the class-weighted cross-entropy loss $L_{ce}$ with the Lovász-softmax loss (Berman, Triki, and Blaschko 2018) $L_{lova}$:

$$L_{OC} = L_{lova} + \mu_{oc} L_{ce}, \tag{8}$$

where the weight coefficient $\mu_{oc}$ is 1 for SOGDet-SE and 6 for SOGDet-BO. Within $L_{ce}$, we use equal weights for all classes in SOGDet-SE and a 1:2 weighting for empty versus occupied voxels in SOGDet-BO.

Overall Objective. Combining the above loss functions, our final objective is

$$L = L_{OD} + \omega L_{OC}, \tag{9}$$

where $\omega$ is the balancing factor between the OC and OD branches. We empirically set $\omega = 10$ to maximize the effectiveness of our multi-task learning framework.
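For illustration, the Gaussian focal loss of Equation 5 can be transcribed directly into NumPy; this is our own sketch (summed over the heatmap, with a small epsilon for numerical safety), not the authors' implementation:

```python
import numpy as np

def gaussian_focal_loss(h_pred, h_gt, alpha=2.0, gamma=4.0, eps=1e-6):
    """Gaussian focal loss of Eq. 5, summed over the heatmap.

    h_pred : predicted heatmap, probabilities in (0, 1).
    h_gt   : ground-truth heatmap with Gaussian-splatted peaks in [0, 1];
             floor(h_gt) is 1 exactly at object centers.
    """
    pos = np.floor(h_gt)  # 1 at centers, 0 elsewhere (the floor in Eq. 5)
    pos_term = -pos * np.log(h_pred + eps) * (1.0 - h_pred) ** alpha
    neg_term = -(1.0 - h_gt) ** gamma * np.log(1.0 - h_pred + eps) * h_pred ** alpha
    return float((pos_term + neg_term).sum())

# Toy usage: a 3x3 single-class heatmap with one center pixel.
h_gt = np.array([[0.0, 0.1, 0.0],
                 [0.1, 1.0, 0.1],
                 [0.0, 0.1, 0.0]])
h_pred = np.full((3, 3), 0.1)
h_pred[1, 1] = 0.9  # confident at the true center, low elsewhere
loss = gaussian_focal_loss(h_pred, h_gt)
```

The $(1 - \hat{H})^{\gamma}$ factor down-weights pixels near Gaussian peaks, so the negative term punishes false positives far from object centers more strongly than those adjacent to them.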
Method              Venue    NDS(%)↑  mAP(%)↑
PETR-Tiny           ECCV22   43.1     36.1
BEVDet-Tiny         arXiv22  39.2     31.2
DETR3D-R50          CoRL22   37.4     30.3
Ego3RT-R50          ECCV22   40.9     35.5
BEVDet-R50          arXiv22  37.9     29.8
BEVDet4D-R50        arXiv22  45.7     32.2
BEVDepth-R50        AAAI23   47.5     35.1
AeDet-R50           CVPR23   50.1     38.7
SOGDet-BO-R50       —        50.2     38.2
SOGDet-SE-R50       —        50.6     38.8
BEVerse-Small       arXiv22  49.5     35.2
PETR-R101           ECCV22   42.1     35.7
UVTR-R101           NIPS22   48.3     37.9
PolarDETR-T-R101    arXiv22  48.8     38.3
BEVFormer-R101      ECCV22   51.7     41.6
BEVDepth-R101       AAAI23   53.5     41.2
PolarFormer-R101    AAAI23   52.8     43.2
AeDet-R101          CVPR23   56.1     44.9
SOGDet-BO-R101      —        55.4     43.9
SOGDet-SE-R101      —        56.6     45.8
Table 1: Performance comparison on the nuScenes validation set. As indicated in (Liu et al. 2021), the complexities of Swin-Tiny and -Small are similar to those of ResNet-50 and -101, respectively.

Experiments

Experimental Setup

Dataset and Metrics. We conducted extensive experiments on the nuScenes (Caesar et al. 2020) dataset, which is currently the only benchmark covering both 3D object detection and occupancy prediction. Following standard practice (Huang et al. 2021; Feng et al. 2022), we used the official splits of this dataset: 700 and 150 scenes for training and validation, respectively, and the remaining 150 for testing. For the OD task, we report the nuScenes Detection Score (NDS), mean Average Precision (mAP), mean Average Translation Error (mATE), mean Average Scale Error (mASE), mean Average Orientation Error (mAOE), mean Average Velocity Error (mAVE), and mean Average Attribute Error (mAAE). Among them, NDS and mAP are the most representative. For the OC task, we designed two types of occupancy labeling. For the binary occupancy labeling, as we are, to the best of our knowledge, the first to employ such labeling in the literature, we only performed qualitative experiments. For the semantic labeling, we maintained an experimental protocol consistent with the state-of-the-art method TPVFormer (Huang et al. 2023).
Accordingly, we report the mean Intersection over Union (mIoU) over all semantic categories.

Method                       Venue    NDS(%)↑  mAP(%)↑  mATE↓  mASE↓  mAOE↓  mAVE↓  mAAE↓
FCOS3D (Wang et al. 2021)    ICCV21   42.8     35.8     0.690  0.249  0.452  1.434  0.124
DD3D (Park et al. 2021)      ICCV21   47.7     41.8     0.572  0.249  0.368  1.014  0.124
PGD (Wang et al. 2022a)      CoRL22   44.8     38.6     0.626  0.245  0.451  1.509  0.127
BEVDet (Huang et al. 2021)   arXiv22  48.2     42.2     0.529  0.236  0.395  0.979  0.152
BEVFormer (Li et al. 2022b)  ECCV22   53.5     44.5     0.631  0.257  0.405  0.435  0.143
DETR3D (Wang et al. 2022b)   CoRL22   47.9     41.2     0.641  0.255  0.394  0.845  0.133
Ego3RT (Lu et al. 2022)      ECCV22   47.3     42.5     0.549  0.264  0.433  1.014  0.145
PETR (Liu et al. 2022)       ECCV22   50.4     44.1     0.593  0.249  0.383  0.808  0.132
CMT-C (Yan et al. 2023)      ICCV23   48.1     42.9     0.616  0.248  0.415  0.904  0.147
PETRv2 (Liu et al. 2023)     ICCV23   55.3     45.6     0.601  0.249  0.391  0.382  0.123
X3KD (Klingner et al. 2023)  CVPR23   56.1     45.6     0.506  0.253  0.414  0.366  0.131
SOGDet-BO                    —        57.8     47.1     0.482  0.248  0.390  0.329  0.125
SOGDet-SE                    —        58.1     47.4     0.471  0.246  0.389  0.330  0.128
Table 2: Performance comparison on the nuScenes test set.

Method     Venue   mIoU(%)↑  barr.  bicy.  bus   car   veh.  mot.  ped.  trai.  cone  truc.  driv.  flat  walk  terr.  man.  veg.
TPVFormer  CVPR23  59.3      64.9   27.0   83.0  82.8  38.3  27.4  44.9  24.0   55.4  73.6   91.7   60.7  59.8  61.1   78.2  76.5
SOGDet-SE  —       58.6      57.8   30.7   74.9  74.7  43.7  42.0  44.5  32.7   62.6  63.9   85.9   54.3  54.6  58.9   76.9  80.2
Table 3: Comparison with the state-of-the-art OC method on the nuScenes val set (category-wise IoU, %).

Implementation Details. To demonstrate the effectiveness and generalization capabilities of SOGDet, we used several popular architectures (Li et al. 2022a; Huang and Huang 2022). To ensure that any improvements were solely due to our SOGDet, we kept most experimental settings, such as the backbone and batch size, untouched, and added only the OC branch.
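As a side note, the mIoU reported above for the OC task is simply the per-class intersection-over-union averaged over the semantic categories; a minimal sketch under our own naming is:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean IoU over semantic classes, as reported for the OC task.

    pred, gt : integer label arrays of identical shape.
    Classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy usage on a 2x2 label grid with two classes.
gt = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
miou = mean_iou(pred, gt, num_classes=2)
```

Note that the actual evaluation protocol (voxel masking for unlabeled cells, following TPVFormer) adds a mask step before this computation.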
Unless otherwise noted, our baseline model is BEVDet4D-depth, a fusion of the two recent multi-view 3D object detectors BEVDepth (Li et al. 2022a) and BEVDet4D (Huang and Huang 2022). We followed the experimental protocol of AeDet (Feng et al. 2022): training on eight 80G A100 GPUs with a mini-batch size of 8 (a total batch size of 64) for 24 epochs with CBGS (Zhu et al. 2019), using AdamW as the optimizer with a learning rate of 2e-4.

Comparison with State-of-the-Art

We evaluated the performance of our SOGDet against other state-of-the-art multi-view 3D object detectors on the nuScenes validation and test sets. Table 1 reports the results on the validation set using the Swin-Tiny, Swin-Small, ResNet-50 and ResNet-101 backbones. As shown in the table, our method achieves highly favorable performance, with NDS scores of 50.2% and 55.4% for SOGDet-BO and 50.6% and 56.6% for SOGDet-SE on ResNet-50 and -101, respectively. These results surpass current state-of-the-art multi-view 3D object detectors by a large margin, including BEVDepth (Li et al. 2022a) (3.1% NDS improvement on both ResNet-50 and -101) and AeDet (Feng et al. 2022) (0.5% NDS improvement on both ResNet-50 and -101). In Table 2, we present the results obtained by SOGDet with the ResNet-101 backbone on the nuScenes test set, where we report the performance of state-of-the-art methods using the same backbone network for a fair comparison. We follow the same training strategy as existing approaches (Li et al. 2022a; Feng et al. 2022), retraining the networks on both the training and validation sets without any test-time augmentation. SOGDet achieves improved performance on the multi-view 3D OD task with 58.1% NDS and 47.4% mAP, further verifying the effectiveness of our proposed approach.

Ablation Study

Comparison with the State-of-the-Art OC Method.
To further evaluate the effectiveness of our approach, we compared our method with TPVFormer (Huang et al. 2023) in terms of semantic categories and present the results in Table 3. The backbones of both methods have comparable complexity. The primary goal of our work is to enhance 3D OD by integrating 3D OC. Despite its simplicity, the results in Table 3 demonstrate that our SOGDet is comparable to TPVFormer, a state-of-the-art method specifically designed for the OC task. Moreover, our method even outperforms this baseline on certain categories such as bicycles and vegetation, indicating that the combination of the two branches also benefits the OC branch, serving as another byproduct.

Different Baseline Architecture. Our proposed SOGDet is a flexible method that can be seamlessly integrated into most BEV-based multi-view object detection architectures. To evaluate the generalization capabilities of our method, we tested its effectiveness on several representative baseline architectures, namely BEVDet (Huang et al. 2021), BEVDet4D (Huang and Huang 2022), BEVDepth (Li et al. 2022a), and BEVDet4D-depth, using the nuScenes validation set. The results in Table 4 show that SOGDet consistently surpasses these baselines under various settings, which demonstrates the ability of our method to generalize to different model architectures.

(Figure 4: Visualization for the OD and OC branches of SOGDet. The input consists of six multi-view images. For both the output and the GT (red box) columns, from top to bottom, we sequentially show the predictions of SOGDet-SE for OD, SOGDet-SE for OC, and SOGDet-BO for OC. The hybrid feature is blended from the OD and OC branch predictions of SOGDet-SE.)
BN.   Architecture      Method     mAP(%)  NDS(%)
Tiny  BEVDet            Baseline   31.2    39.2
                        SOGDet-SE  32.9    41.5
Tiny  BEVDet4D          Baseline   33.8    47.6
                        SOGDet-SE  34.6    48.7
R50   BEVDepth          Baseline   35.1    47.5
                        SOGDet-SE  37.2    48.3
R50   BEVDet4D-depth    Baseline   37.0    49.0
                        SOGDet-SE  38.8    50.6
Table 4: Performance comparison with different baselines.

Complexity Analysis. Efficiency is highly significant in resource-constrained environments. In this regard, we measure floating-point operations (FLOPs) and parameter count (Params) and show the results in Figure 5. Compared with the state-of-the-art method AeDet (Feng et al. 2022), our SOGDet is more efficient, especially on the more important FLOPs metric (252G vs. 473G). Further, SOGDet outperforms AeDet by 0.5% in terms of NDS. This indicates that our method achieves a better trade-off between efficiency and model performance.

Visualization

Figure 4 illustrates qualitative results of our approach on the nuScenes (Caesar et al. 2020) dataset using ResNet-50 as the backbone for both the OD and OC branches. Pertaining to the object detection task, we focus only on occupied voxels; therefore, locations marked as "empty" are not shown. The hybrid features reveal strong correlations between the physical structures and the locations of the detected objects, such as vehicles, bicycles, and pedestrians. For example, vehicles are typically detected on the drivable surface, while bicycles and pedestrians are often detected on sidewalks. These findings are consistent with the observations and motivations of our paper and demonstrate that the integration of the two branches can lead to a better perception and understanding of the real world.

(Figure 5: Parameter count (Param., M) versus floating-point operations (FLOPs, G) for Baseline-R50, AeDet-R50, and SOGDet-R50.)
Conclusion and Future Work

The Bird's-Eye-View (BEV) based paradigm has shown great promise in achieving accurate 3D object detection from multi-view images. However, most existing BEV-based methods unexpectedly ignore the physical contexts in the environment, which are critical to the perception of 3D scenes. In this paper, we propose the SOGDet approach to incorporate such context using 3D semantic occupancy. In particular, SOGDet predicts not only the pose and type of each 3D object, but also the semantic classes of the physical contexts, enabling finer-grained detection. Extensive experimental results on the nuScenes dataset demonstrate that SOGDet consistently improves the performance of several popular backbone networks and baseline methods. In future work, we plan to explore the application of SOGDet with additional auxiliary data inputs, such as lidar and radar, to further aid 3D object detection. Additionally, we believe that integrating 3D semantic occupancy prediction into autonomous driving tasks beyond 3D object detection, such as path planning and decision-making, may constitute a promising avenue for future research.

Acknowledgements

This work is supported by the Advanced Research and Technology Innovation Centre (ARTIC), the National University of Singapore under Grant (project number: A-8000969-0000).

References

Arnold, E.; Al-Jarrah, O. Y.; Dianati, M.; Fallah, S.; Oxtoby, D.; and Mouzakitis, A. 2019. A survey on 3d object detection methods for autonomous driving applications. IEEE Transactions on Intelligent Transportation Systems, 20(10): 3782–3795.
Berman, M.; Triki, A. R.; and Blaschko, M. B. 2018. The Lovász-softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4413–4421.
Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2020. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11621–11631.
Cai, Y.; Li, B.; Jiao, Z.; Li, H.; Zeng, X.; and Wang, X. 2020. Monocular 3d object detection with decoupled structured polygon estimation and height-guided depth estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 10478–10485.
Cao, A.-Q.; and de Charette, R. 2022. Monoscene: Monocular 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3991–4001.
Cao, J.; Leng, H.; Lischinski, D.; Cohen-Or, D.; Tu, C.; and Li, Y. 2021. Shapeconv: Shape-aware convolutional layer for indoor rgb-d semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, 7088–7097.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, 213–229. Springer.
Chen, X.; Ma, H.; Wan, J.; Li, B.; and Xia, T. 2017. Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 1907–1915.
Chen, Z.; Li, Z.; Zhang, S.; Fang, L.; Jiang, Q.; and Zhao, F. 2022. Graph-DETR3D: rethinking overlapping regions for multi-view 3D object detection. In Proceedings of the 30th ACM International Conference on Multimedia, 5999–6008.
Ding, M.; Huo, Y.; Yi, H.; Wang, Z.; Shi, J.; Lu, Z.; and Luo, P. 2020. Learning depth-guided convolutions for monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition workshops, 1000–1001.
Fang Ming, Z. L. 2023. Occupancy Dataset for nuScenes.
https://github.com/FANG-MING/occupancy-for-nuscenes.
Feng, C.; Jie, Z.; Zhong, Y.; Chu, X.; and Ma, L. 2022. AeDet: Azimuth-invariant Multi-view 3D Object Detection. arXiv preprint arXiv:2211.12501.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
Huang, J.; and Huang, G. 2022. Bevdet4d: Exploit temporal cues in multi-camera 3d object detection. arXiv preprint arXiv:2203.17054.
Huang, J.; Huang, G.; Zhu, Z.; Ye, Y.; and Du, D. 2021. Bevdet: High-performance multi-camera 3d object detection in bird-eye-view. arXiv preprint arXiv:2112.11790.
Huang, Y.; Zheng, W.; Zhang, Y.; Zhou, J.; and Lu, J. 2023. Tri-Perspective View for Vision-Based 3D Semantic Occupancy Prediction. arXiv preprint arXiv:2302.07817.
Klingner, M.; Borse, S.; Kumar, V. R.; Rezaei, B.; Narayanan, V.; Yogamani, S.; and Porikli, F. 2023. X3KD: Knowledge Distillation Across Modalities, Tasks and Stages for Multi-Camera 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13343–13353.
Kumar, A.; Brazil, G.; and Liu, X. 2021. Groomed-nms: Grouped mathematically differentiable nms for monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8973–8983.
Li, Y.; Ge, Z.; Yu, G.; Yang, J.; Wang, Z.; Shi, Y.; Sun, J.; and Li, Z. 2022a. Bevdepth: Acquisition of reliable depth for multi-view 3d object detection. arXiv preprint arXiv:2206.10092.
Li, Y.; Yu, Z.; Choy, C.; Xiao, C.; Alvarez, J. M.; Fidler, S.; Feng, C.; and Anandkumar, A. 2023. Voxformer: Sparse voxel transformer for camera-based 3d semantic scene completion. arXiv preprint arXiv:2302.12251.
Li, Z.; Wang, W.; Li, H.; Xie, E.; Sima, C.; Lu, T.; Qiao, Y.; and Dai, J. 2022b. Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers.
In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IX, 1–18. Springer.
Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; and Belongie, S. 2017a. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2117–2125.
Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017b. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, 2980–2988.
Liu, Y.; Wang, T.; Zhang, X.; and Sun, J. 2022. Petr: Position embedding transformation for multi-view 3d object detection. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVII, 531–548. Springer.
Liu, Y.; Yan, J.; Jia, F.; Li, S.; Gao, A.; Wang, T.; Zhang, X.; and Sun, J. 2023. Petrv2: A unified framework for 3d perception from multi-camera images. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, 10012–10022.
Lu, J.; Zhou, Z.; Zhu, X.; Xu, H.; and Zhang, L. 2022. Learning ego 3d representation as ray tracing. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVI, 129–144. Springer.
Miao, R.; Liu, W.; Chen, M.; Gong, Z.; Xu, W.; Hu, C.; and Zhou, S. 2023. Occdepth: A depth-aware method for 3d semantic scene completion. arXiv preprint arXiv:2302.13540.
Park, D.; Ambrus, R.; Guizilini, V.; Li, J.; and Gaidon, A. 2021. Is pseudo-lidar needed for monocular 3d object detection? In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3142–3152.
Philion, J.; and Fidler, S. 2020. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16, 194–210. Springer.
Reading, C.; Harakeh, A.; Chae, J.; and Waslander, S. L. 2021. Categorical depth distribution network for monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8555–8564.
Shi, Y.; Jiang, K.; Li, J.; Wen, J.; Qian, Z.; Yang, M.; Wang, K.; and Yang, D. 2023. Grid-Centric Traffic Scenario Perception for Autonomous Driving: A Comprehensive Review. arXiv preprint arXiv:2303.01212.
Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B.; et al. 2020. Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2446–2454.
Tian, Z.; Shen, C.; Chen, H.; and He, T. 2019. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 9627–9636.
Wang, T.; Xinge, Z.; Pang, J.; and Lin, D. 2022a. Probabilistic and geometric depth: Detecting objects in perspective. In Conference on Robot Learning, 1475–1485. PMLR.
Wang, T.; Zhu, X.; Pang, J.; and Lin, D. 2021. Fcos3d: Fully convolutional one-stage monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 913–922.
Wang, X.; Zhu, Z.; Xu, W.; Zhang, Y.; Wei, Y.; Chi, X.; Ye, Y.; Du, D.; Lu, J.; and Wang, X. 2023. OpenOccupancy: A Large Scale Benchmark for Surrounding Semantic Occupancy Perception. arXiv preprint arXiv:2303.03991.
Wang, Y.; Guizilini, V. C.; Zhang, T.; Wang, Y.; Zhao, H.; and Solomon, J. 2022b. Detr3d: 3d object detection from multi-view images via 3d-to-2d queries. In Conference on Robot Learning, 180–191. PMLR.
Wei, Y.; Zhao, L.; Zheng, W.; Zhu, Z.; Zhou, J.; and Lu, J. 2023. SurroundOcc: Multi-Camera 3D Occupancy Prediction for Autonomous Driving. arXiv preprint arXiv:2303.09551.
Yan, J.; Liu, Y.; Sun, J.; Jia, F.; Li, S.; Wang, T.; and Zhang, X. 2023. Cross modal transformer via coordinates encoding for 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
Ye, D.; Zhou, Z.; Chen, W.; Xie, Y.; Wang, Y.; Wang, P.; and Foroosh, H. 2023. Lidarmultinet: Towards a unified multi-task network for lidar perception. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 3231–3240.
Yin, T.; Zhou, X.; and Krahenbuhl, P. 2021. Center-based 3d object detection and tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11784–11793.
Zhang, Y.; Zhu, Z.; Zheng, W.; Huang, J.; Huang, G.; Zhou, J.; and Lu, J. 2022. Beverse: Unified perception and prediction in birds-eye-view for vision-centric autonomous driving. arXiv preprint arXiv:2205.09743.
Zhu, B.; Jiang, Z.; Zhou, X.; Li, Z.; and Yu, G. 2019. Class-balanced grouping and sampling for point cloud 3d object detection. arXiv preprint arXiv:1908.09492.
Test-Time Adaptation via Style and Structure Guidance for Histological Image Registration

Shenglong Zhou1, Zhiwei Xiong1,2*, Feng Wu1,2
1 University of Science and Technology of China
2 Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
slzhou96@mail.ustc.edu.cn, zwxiong@ustc.edu.cn, fengwu@ustc.edu.cn

Abstract

Image registration plays a crucial role in histological image analysis, encompassing tasks like multi-modality fusion and disease grading. Traditional registration methods optimize objective functions for each image pair, yielding reliable accuracy but demanding heavy inference burdens. Recently, learning-based registration methods utilize networks to learn the optimization process during training and apply a one-step forward process during testing. While these methods offer promising registration performance with reduced inference time, they remain sensitive to appearance variances and local structure changes commonly encountered in histological image registration scenarios. In this paper, for the first time, we propose a novel test-time adaptation method for histological image registration, aiming to improve the generalization ability of learning-based methods. Specifically, we design two operations, style guidance and shape guidance, for the test-time adaptation process. The former leverages style representations encoded by feature statistics to address the issue of appearance variances, while the latter incorporates shape representations encoded by HOG features to improve registration accuracy in regions with structural changes. Furthermore, we consider the continuity of the model during the test-time adaptation process. Different from previous methods, which are initialized by a given trained model, we introduce a smoothing strategy to leverage historical models for better generalization.
We conduct experiments with several representative learning-based backbones on the public histological dataset, demonstrating the superior registration performance of our test-time adaptation method.

(*Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.)

Introduction

Image registration is an important task in computer vision, particularly within the realm of medical image analysis (Chakravarty et al. 2006; Li et al. 2022; Chen et al. 2023). The goal of image registration is to establish a transformation that aligns a pair of images (i.e., a fixed image and a moving image), thereby enabling a wide range of clinical applications. In the field of histological image analysis, the various stains used during histology sample preparation offer valuable information, with each stain revealing distinct tissue properties. Their fusion can benefit tasks such as grading, classification, and 3D reconstruction. However, different preparation processes and the use of consecutive tissue slices introduce complex and inevitable deformations. Therefore, non-rigid registration becomes essential to facilitate further processing.

Traditional methods solve the registration task by formulating it as an optimization problem for each image pair. Though traditional methods provide reliable registration accuracy, one obvious limitation is that the optimization can be computationally expensive. Recently, learning-based methods (Dalca et al. 2018; Mok and Chung 2020; Hu et al. 2022a; Zhou et al. 2023; Liu et al. 2023) utilize networks to learn the optimization process from the training image pairs, thus treating registration at test time as a mapping from an image pair to a deformation field. Along with the development of network structures, from direct designs such as U-Net (Dalca et al. 2018; Zhou et al. 2019) to progressive designs such as Pyramid (Hu et al.
2020; Mok and Chung 2020) and Cascade (Zhao et al. 2019a; Hu et al. 2022b), learning-based methods achieve promising registration performance with a reduced inference burden.

In the field of histological image registration, learning-based methods have also attracted research attention. As mentioned in (Borovec et al. 2020), TUB proposes a supervised convolutional neural network that relies on manually labeled key points. DeepHistReg (Wodzinski and Müller 2021) proposes an unsupervised method by designing a pyramid-based non-rigid registration network, which does not demand any manual annotations. It is worth mentioning that, unlike biomedical datasets such as MRI or CT, the registration of histological images usually suffers from the following challenges, as shown in Fig. 1. First, there are appearance variances between histological images due to multiple stains. These appearance variances exist not only between the fixed image and the moving image, but also between training images and test images, even with pre-processing such as gray translation or color normalization. Such appearance variances can significantly affect the robustness and generalization of learning-based registration methods. Second, there are local structure changes such as repetitive textures and missing sections in histological images. Learning-based methods therefore struggle to learn these patterns and may perform poorly when encountering changed structures in test images. How to design robust and generalizable learning-based registration methods that solve the above challenges is an important problem.

(Figure 1: Challenges of histological image registration. Above: appearance variances due to multiple staining; below: local structure changes.)

SFG (Ge et al.
2022) proposes to introduce dense SIFT features and automatic key points for histological image registration, aiming to handle the challenges of structural features. But this method still focuses on the training time and ignores the model's potential during the test time. Different from previous methods, we explore test-time adaptation for histological image registration, aiming to improve generalization and robustness. Test-time adaptation is a useful approach to improving a model's generalization ability, and it has been explored in several computer vision tasks including classification (Sun et al. 2020; Wang et al. 2020) and segmentation (Hu et al. 2021). In terms of medical image registration, test-time adaptation further tunes the given trained model on each test image pair, which can be regarded as seeking a middle ground (Zhu et al. 2021) between traditional optimization-based methods and pure learning-based methods. SSMSR (Zhu et al. 2021) introduces test-time adaptation for MRI/echocardiogram registration with a straightforward multi-scale design. But this method does not consider the challenges of histological image registration, and thus cannot handle appearance variances and local structure changes well. Meanwhile, SSMSR ignores the continuity of models during the test-time adaptation process, thus restricting further improvement of registration performance. In this paper, for the first time, we propose a novel test-time adaptation method for histological image registration, named SGTTA. We design two operations, style guidance and structure guidance, to address the challenges in histological images. Style guidance aims to handle appearance variances, and the core idea is to transfer the style representation from training images to test images, which can help narrow the style gap between them.
Specifically, we extract the features from the encoder branch of the trained model, then calculate the statistics (i.e., mean and standard deviation values) of the features by instance normalization (IN), and regard them as the style representation. When conducting the test-time adaptation for image pairs, style guidance combines the style representation from training images with the test images' features by adaptive instance normalization (AdaIN) (Huang and Belongie 2017). It is worth mentioning that feature statistics from training images do not leak the complete information of images (raw images cannot be recovered from the statistics), which can protect the privacy of biomedical datasets. Structure guidance aims to handle local structure changes, and the core idea is to introduce structural constraints when conducting the test-time adaptation. We utilize HOG (Dalal and Triggs 2005) descriptors as the structure representation for each test image pair. Then structure guidance constrains the similarity of HOG descriptors between the fixed image and the warped moving image. Furthermore, we consider the continuity of models during the test-time adaptation process. Different from previous methods initialized by a given trained model for each test image pair, we introduce a smoothing strategy to leverage historical models. The smoothing strategy combines the model from the last test image pair and the given trained model. The model from the last test image pair contains the parameters learned in the test domain, and the given trained model provides a strong registration ability, which can decrease error accumulation and catastrophic forgetting. Therefore, their combination can obtain better generalization performance. To evaluate the effectiveness of SGTTA, we conduct comprehensive experiments on the public histological dataset with representative learning-based backbones including U-Net, Pyramid, and Cascade.
Both quantitative and qualitative results demonstrate the superior performance of our SGTTA. We summarize the main contributions as follows:
• We explore test-time adaptation for histological image registration for the first time, aiming to improve the generalization of learning-based methods.
• We propose a novel test-time adaptation method, designing style guidance and structure guidance to handle appearance variances and local structure changes.
• We introduce a smoothing strategy to leverage historical models, considering the continuity of models during the test-time adaptation process.
• We conduct comprehensive experiments on the public histological dataset with representative backbones (U-Net, Pyramid, and Cascade), demonstrating better registration performance of our SGTTA.

Related Work

Biomedical Image Registration

Traditional methods solve the registration task by formulating it as an optimization problem for each image pair. Numerous traditional methods have been developed for nonrigid image registration, including B-spline deformation-based methods (Song et al. 2013), an elastic deformation-based model (Du Bois d'Aische et al. 2005), the large deformation diffeomorphic metric image matching algorithm (Ceritoglu et al. 2010), and a greedy diffeomorphic algorithm (Venet et al. 2021). ANHIR (Borovec et al. 2020) also describes many traditional methods for histological image registration. Recently, deep neural networks have been applied to biomedical image registration (Hu et al. 2022a). VoxelMorph (Balakrishnan et al.
2018) adopts U-Net to generate the deformation field directly, which saves considerable inference time compared with traditional methods. DualPRNet (Hu et al. 2019) proposes a dual-stream pyramid structure to generate the deformation field in a coarse-to-fine manner, and LapIRN (Mok and Chung 2020) adopts an image Laplacian pyramid to generate and refine deformation fields. The recursive cascaded network (Zhao et al. 2019a) takes U-shape networks as its sub-networks, and analyzes the effect at different cascading stages. For histological image registration, Pyramid structures (Ge et al. 2022; Wodzinski and Müller 2021) and Cascade structures (Borovec et al. 2020) have also been adopted. But these methods focus on the training time, and we explore test-time adaptation for histological image registration for the first time.

Figure 2: Overview of our SGTTA, which consists of style guidance, structure guidance, and model continuity.

Test-time Adaptation

Test-time adaptation methods can update a model with the distributional information provided by a single or a batch of test data. TTT (Sun et al. 2020) adapts the feature extractor at test time by leveraging an auxiliary self-supervised task of rotation prediction. TTT++ (Liu et al. 2021) improves TTT by further aligning the first- and second-order statistics of the training and test data. Tent (Wang et al. 2020) proposes to adapt the affine parameters in batch normalization layers at test time by minimizing the entropy of model predictions. T3A (Iwasawa and Matsuo 2021) adjusts the classifier of a trained source model by computing a pseudo-prototype representation of different classes using unlabeled test data.
In the field of image registration, SSMSR (Zhu et al. 2021) introduces test-time adaptation for MRI/echocardiogram registration with a multi-scale design. Differently, we propose a novel test-time adaptation method for solving the challenges in histological image registration.

Methodology

Preliminaries and Notations

Given a pair of histological images, the fixed image $I_f$ and the moving image $I_m$, nonrigid registration aims to obtain the deformation field $\phi$. The warped moving image $I_m(\phi)$ is aligned to $I_f$ by the deformation field $\phi$. According to (Balakrishnan et al. 2019), the image registration problem can be formulated as the minimization of the difference between $I_m(\phi)$ and $I_f$, subject to a smoothness constraint on the deformation field:

$$\hat{\phi} = \arg\min_{\phi} \mathcal{L}_S(I_f, I_m(\phi)) + \lambda \mathcal{L}_R(\phi), \quad (1)$$

where $\mathcal{L}_S$ is a reconstruction loss measuring the dissimilarity between two images, $\mathcal{L}_R$ constrains the smoothness of the deformation field, and $\lambda$ is a regularization parameter balancing the trade-off between the reconstruction and smoothness losses. The smoothness term is defined as

$$\mathcal{L}_R(\phi) = \sum_{p \in \Omega} \sum_{i=1}^{n} \left\| \nabla \phi_i(p) \right\|^2, \quad (2)$$

where $p$ is the coordinate, $n$ is the number of pixels, $\nabla$ is the spatial gradient operator, and $\Omega$ is the neighbouring region.

Learning-Based Registration

Following common learning-based registration methods, we model the deformation field $\phi$ through a network $N$ with learnable parameters $\theta$, which receives $I_f$ and $I_m$ as input and generates $\phi$ as output. The whole process can be formulated as $\phi = N(I_f, I_m; \theta)$. Therefore, the determination of the deformation field is treated as a learning problem, seeking to identify the optimal parameters $\theta$ that minimize the loss function presented in Eq. (1). Note that many metrics can be used to measure the dissimilarity $\mathcal{L}_S$ in Eq. (1); we choose the negative normalized local cross-correlation as the reconstruction loss in this paper.
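To make Eqs. (1)-(2) concrete, here is a minimal NumPy sketch of the registration objective. It is an illustration, not the paper's implementation: the dissimilarity below is a global NCC rather than the local windowed cross-correlation the paper uses, and all function names are ours.

```python
import numpy as np

def ncc_dissimilarity(fixed, warped, eps=1e-8):
    """Negative normalized cross-correlation between two images.
    Global NCC here; the paper uses a local (windowed) variant."""
    f = fixed - fixed.mean()
    w = warped - warped.mean()
    return -float((f * w).sum() / (np.sqrt((f ** 2).sum() * (w ** 2).sum()) + eps))

def smoothness_loss(phi):
    """L_R(phi), Eq. (2): sum of squared spatial gradients of a
    deformation field phi with shape (2, H, W)."""
    total = 0.0
    for comp in phi:  # one pass per displacement component
        gy, gx = np.gradient(comp)
        total += float((gy ** 2 + gx ** 2).sum())
    return total

def registration_objective(fixed, warped, phi, lam=30.0):
    """Eq. (1): dissimilarity plus weighted smoothness.
    lam = 30 matches the paper's implementation details."""
    return ncc_dissimilarity(fixed, warped) + lam * smoothness_loss(phi)
```

With identical images and a zero deformation field, the objective sits near its minimum of -1, which is a quick sanity check on the sign conventions.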
We choose several advanced registration backbones as the network $N$, including U-Net, Pyramid, and Cascade. For the U-Net backbone, we follow the design in VoxelMorph (Dalca et al. 2018). U-Net has an encoder part and a decoder part, where the encoder part generates four features $\{F^u_s\}_{s=1}^{4}$ according to the spatial scale. For the Pyramid backbone, we roughly follow the design in DeepHistReg (Wodzinski and Müller 2021) and RDN (Hu et al. 2022b). Pyramid also has an encoder part and a decoder part, while the decoder part generates the deformation fields in a coarse-to-fine manner. For consistency, we design the encoder in Pyramid to generate four features $\{F^p_s\}_{s=1}^{4}$ according to the spatial scale. For the Cascade backbone, we roughly follow the design in RDN (Hu et al. 2022b) and RCN (Zhao et al. 2019a), which generates the deformation field in a recursive manner. We denote the features in the encoder of Cascade as $\{F^c_s\}_{s=1}^{4}$, and omit the different subnetworks for convenience.

Test-Time Adaptation for Registration

Given the training dataset including image pairs $\{I^{tr}_f, I^{tr}_m\}$, we first learn the model's parameters by minimizing the loss function in Eq. (1) and obtain the given trained model $\theta^{tr}$. In the common paradigm, we can directly apply $\theta^{tr}$ on the test dataset with a one-step forward process. The benefit of this paradigm is efficient inference, but at the cost of a performance drop. The main reason for this is the inherent characteristics of each image pair, posing challenges for learned models to generalize effectively to new test images. Particularly, when dealing with histological images with appearance variances and structural changes, a notable performance gap between the training and test datasets becomes obvious. We introduce test-time adaptation to improve the generalization of learning-based registration.
Under this paradigm, given the test image pair $\{I^{te}_f, I^{te}_m\}$, the network parameters $\theta^{te}$ are initialized by $\theta^{tr}$ and further optimized as

$$\theta^{te} = \arg\min_{\theta} \mathbb{E}\left[ \mathcal{L}_S(I^{te}_f, I^{te}_m(\phi); \theta) + \lambda \mathcal{L}_R(\phi; \theta) \right]. \quad (3)$$

Test-time adaptation not only alleviates the drawbacks of traditional registration methods, including the high cost of optimization, long running time, and poor performance due to local optimality (Kingma and Ba 2015), but also improves the performance of pure learning-based methods by further adapting to test image pairs.

Style Guidance for Test-Time Adaptation

Appearance variances usually exist in histological image registration, so we propose style guidance to solve this challenge during the test-time adaptation process. The core idea is to transfer the style representation from training images to test images, which can help narrow the style gap between images. How to define the style representation is the first question. In the field of style transfer, it is well known that convolutional feature statistics can represent the style information of an image, such as the channel-wise mean and variance (Gatys, Ecker, and Bethge 2016). Following (Ulyanov, Vedaldi, and Lempitsky 2017), image style can be removed by instance normalization (IN). For an image $I$, the feature of $I$ can be defined as $F \in \mathbb{R}^{C \times H \times W}$, where $H$ and $W$ are the spatial dimensions and $C$ is the number of channels. Therefore, IN can be formulated as

$$\mathrm{IN}(F) = \gamma \frac{F - \mu}{\sigma} + \beta, \quad (4)$$

where $\gamma, \beta$ are learnable affine transformation parameters, and $\mu, \sigma \in \mathbb{R}^C$ are the channel-wise mean and standard deviation of the feature map, calculated as

$$\mu = \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} F_{chw}, \quad (5)$$

$$\sigma = \sqrt{\frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} \left(F_{chw} - \mu\right)^2 + \epsilon}, \quad (6)$$

where $\epsilon$ is a constant for numerical stability.

Figure 3: The illustration of style guidance.
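The IN statistics of Eqs. (5)-(6), together with the AdaIN-based mixing described in the next subsection (Eqs. (7)-(9)), can be sketched in NumPy as follows. This is a simplified, framework-free illustration under our own naming; α = 0.2 follows the paper's implementation details.

```python
import numpy as np

def channel_stats(feat, eps=1e-5):
    """Channel-wise mean and (eps-stabilized) std of a (C, H, W)
    feature map, as in Eqs. (5)-(6)."""
    flat = feat.reshape(feat.shape[0], -1)
    return flat.mean(axis=1), np.sqrt(flat.var(axis=1) + eps)

def adain(feat, mu_new, sigma_new, eps=1e-5):
    """AdaIN(F, (mu~, sigma~)) = sigma~ * (F - mu) / sigma + mu~  (Eq. (7))."""
    mu, sigma = channel_stats(feat, eps)
    norm = (feat - mu[:, None, None]) / sigma[:, None, None]
    return sigma_new[:, None, None] * norm + mu_new[:, None, None]

def style_guidance(feat_test, mu_tr, sigma_tr, alpha=0.2, eps=1e-5):
    """Blend training-set style statistics into a test feature
    (Eqs. (8)-(9)), then re-stylize it with AdaIN. alpha = 0.2
    follows the paper's implementation details."""
    mu_te, sigma_te = channel_stats(feat_test, eps)
    mu_new = alpha * mu_tr + (1 - alpha) * mu_te
    sigma_new = alpha * sigma_tr + (1 - alpha) * sigma_te
    return adain(feat_test, mu_new, sigma_new, eps)
```

With α = 0 the test feature is returned unchanged, and with α = 1 the training statistics are imposed exactly, which brackets the behavior of the blend.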
Inspired by the above style transfer designs, we choose feature statistics (mean and standard deviation) as the style representation. Moreover, AdaIN was proposed to convert one image style into another by replacing the affine parameters with specific style statistics $(\tilde{\mu}, \tilde{\sigma})$. AdaIN is defined as

$$\mathrm{AdaIN}(F, (\tilde{\mu}, \tilde{\sigma})) = \tilde{\sigma} \frac{F - \mu}{\sigma} + \tilde{\mu}. \quad (7)$$

For simplicity, we choose AdaIN to transfer the style representation from training images to test images. Specifically, given the trained model $\theta^{tr}$, we use it to extract the features of the encoder part on the training dataset. In fact, there are different types of encoded features based on the different backbones, $F^{tr(u)}_s / F^{tr(p)}_s / F^{tr(c)}_s$, and we denote them as $F^{tr}_s$ for convenience. Then we calculate the feature statistics (mean and standard deviation) of the features following IN, and average them as the style representation $(\mu^{tr}, \sigma^{tr})$. During the test-time adaptation process, given one test image pair, we extract the encoded features $F^{te}_s$ and calculate the style representation $(\mu^{te}, \sigma^{te})$ in a similar way. Style guidance combines the test style representation with the one from the training dataset, aiming to transfer the style representations from training images. The combination is shown as

$$\mu_{new} = \alpha \mu^{tr} + (1 - \alpha) \mu^{te}, \quad (8)$$

$$\sigma_{new} = \alpha \sigma^{tr} + (1 - \alpha) \sigma^{te}, \quad (9)$$

where $\alpha$ is a hyperparameter controlling the style transfer. Finally, we utilize AdaIN to obtain the new encoded features $\tilde{F}^{te}_s = \mathrm{AdaIN}(F^{te}_s, (\mu_{new}, \sigma_{new}))$, as shown in Figure 3. After that, the new encoded features are passed into the next encoder block or the decoder part.

Structure Guidance for Test-Time Adaptation

Local structure changes usually exist in histological image registration, so we propose structure guidance to solve this challenge during the test-time adaptation process. As discussed in (Ge et al.
2022), structural representations are robust to staining variations, and retaining the anatomical structures is helpful for diagnosis in histological images (Miranda et al. 2012). The core idea of structure guidance is to introduce structural constraints when conducting the test-time adaptation. The first question is to determine the structure representations. In fact, there are many structure-based descriptors such as SIFT (Lowe 1999), shape context (Belongie, Malik, and Puzicha 2002), and HOG (Dalal and Triggs 2005). Considering the efficiency and lightweight nature required during the test-time adaptation process, we choose HOG as the structure representation. Specifically, given the test image pair $\{I^{te}_f, I^{te}_m\}$, we can obtain the deformation field $\phi$ by the given model. According to the previous statements, we use Eq. (3) to find the optimal parameters. Here, we add another constraint based on the structure representations. We extract the structure representations of the fixed image $I^{te}_f$ as $H^{te}_f$, along with the representations of the warped moving image $I^{te}_m(\phi)$ as $H^{te}_m(\phi)$. Furthermore, considering the complicated local structures, we provide multi-scale structure representations for better performance. To obtain the multi-scale representations, we downsample the images into several scales by bilinear interpolation and extract HOG descriptors at each scale. We regard the multi-scale structure representations of the fixed image $\{H^{te}_{f(s)}\}_{s=1}^{4}$ and the warped moving image $\{H^{te}_{m(s)}(\phi)\}_{s=1}^{4}$ as the structure guidance for the constraint during the test-time adaptation process. The constraint can be formulated as

$$\mathcal{L}_{sg} = -\frac{\left[\left(H^{te}_{f(s)} - \overline{H^{te}_{f(s)}}\right) \cdot \left(H^{te}_{m(s)}(\phi) - \overline{H^{te}_{m(s)}(\phi)}\right)\right]^2}{\left\|H^{te}_{f(s)} - \overline{H^{te}_{f(s)}}\right\|^2 \cdot \left\|H^{te}_{m(s)}(\phi) - \overline{H^{te}_{m(s)}(\phi)}\right\|^2}, \quad (10)$$

where $\overline{(\,\cdot\,)}$ denotes the local mean operation.

Model Continuity during Test-time Adaptation

We further consider the continuity of the model during the test-time adaptation process.
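The structure-guidance constraint of Eq. (10) above can be sketched in NumPy as follows. Everything here is simplified: the descriptor is a global gradient-orientation histogram standing in for a true HOG descriptor, the means are global rather than local, and downsampling uses naive striding instead of bilinear interpolation; all names are ours.

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Much-simplified stand-in for a HOG descriptor: a global histogram
    of gradient orientations weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist

def structure_loss(img_f, img_w, strides=(1, 2, 4, 8), eps=1e-8):
    """Eq. (10)-style loss: negative squared normalized correlation between
    descriptors of the fixed and warped images, averaged over scales."""
    losses = []
    for s in strides:
        hf = orientation_histogram(img_f[::s, ::s])
        hw = orientation_histogram(img_w[::s, ::s])
        a, b = hf - hf.mean(), hw - hw.mean()
        corr2 = (a @ b) ** 2 / ((a @ a) * (b @ b) + eps)
        losses.append(-corr2)
    return float(np.mean(losses))
```

Identical images give a loss near the minimum of -1 at every scale, and the loss is bounded in [-1, 0] by the Cauchy-Schwarz inequality.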
Given the test image pair $\{I^{te}_f, I^{te}_m\}$ at time point $t$ (it is reasonable to assume that pairs of images appear sequentially over time), we define three models to present our smoothing strategy. First, we define $\theta'_t$ as the historical model; then we define $\theta^{te*}_t$ as the initialization model; finally, we define $\theta^{te}_t$ as the model obtained after the test-time adaptation process. To account for the model's continuity, we propose a smoothing strategy to leverage historical models as

$$\theta'_{t+1} = w\,\theta'_t + (1 - w)\,\theta^{te}_t, \quad (11)$$

where $w$ is a smoothing factor. For the next time point $t + 1$, the previous method just applies the given trained model $\theta^{tr}$ as the initialized model, $\theta^{te*}_{t+1} = \theta^{tr}$. Differently, we initialize the model with the combination of the given trained model and the continuous model as $\theta^{te*}_{t+1} = k\,\theta^{tr} + (1 - k)\,\theta'_{t+1}$. Then, starting from $\theta^{te*}_{t+1}$, the model $\theta^{te}_{t+1}$ can be obtained after the test-time adaptation process.

Experiments

Datasets and Metrics. We conduct experiments on the public histological dataset ANHIR for comparison. SGTTA focuses on the test-time stage and is complementary to learning-based methods in the training stage, so it is feasible for us to choose this dataset for evaluation. For a fair comparison and efficient experiments, we use the images containing public landmarks in the raw ANHIR dataset as our dataset, and we split them into 115 pairs for training and 115 pairs for testing. The public landmarks represent obvious structures in the images. Based on the landmarks, we use the relative target registration error (rTRE) of landmarks for each pair of images as the evaluation metric. We calculate the median, average, and maximum of all rTRE values in an image pair. At the case level, there is aggregation by the median or the average. The metrics are median-median rTRE (MMrTRE), average-median rTRE (AMrTRE), median-average rTRE (MArTRE), average-average rTRE (AArTRE), median-maximum rTRE (MMxrTRE), and average-maximum rTRE (AMxrTRE).
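As an illustration of the metric definitions just listed, the sketch below computes per-landmark rTRE (normalizing the target registration error by the image diagonal, per the common ANHIR convention) and the six case-level aggregates; the helper names are ours.

```python
import numpy as np

def rtre(landmarks_warped, landmarks_target, image_size):
    """Relative TRE per landmark: Euclidean error divided by the image diagonal."""
    diag = np.hypot(*image_size)
    errors = np.linalg.norm(landmarks_warped - landmarks_target, axis=1)
    return errors / diag

def aggregate_rtre(per_case_rtre):
    """Per case, take the median / average / max of the rTRE values; then
    aggregate across cases by the median or the average."""
    med = np.array([np.median(r) for r in per_case_rtre])
    avg = np.array([np.mean(r) for r in per_case_rtre])
    mx = np.array([np.max(r) for r in per_case_rtre])
    return {
        "MMrTRE": float(np.median(med)), "AMrTRE": float(np.mean(med)),
        "MArTRE": float(np.median(avg)), "AArTRE": float(np.mean(avg)),
        "MMxrTRE": float(np.median(mx)), "AMxrTRE": float(np.mean(mx)),
    }
```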
Robustness is evaluated by the relative number of successfully registered landmarks.

Baseline Methods. First, we implement six traditional methods as our main comparison methods. Following the guidance from ANHIR, we implement bUnwarpJ (Arganda-Carreras et al. 2006), RVSS (Arganda-Carreras et al. 2006), NiftyReg (Rueckert et al. 1999), Elastix (Klein et al. 2009), ANTs (Avants et al. 2008), and DROP (Glocker et al. 2011). Second, considering that SGTTA is complementary to learning-based methods, we implement three advanced registration backbones (U-Net, Pyramid, and Cascade) as comparison methods. Specifically, as mentioned before, we denote the U-Net backbone as HistRegU. For the Pyramid backbone, we roughly follow the design in DeepHistReg (Wodzinski and Müller 2021) and RDN (Hu et al. 2022b) and denote it as HistRegP. For the Cascade backbone, we roughly follow the design in VTN (Zhao et al. 2019b) and RCN (Zhao et al. 2019a) and denote it as HistRegC.

Implementation. All the learning-based methods are implemented in PyTorch on 4 NVIDIA TITAN XP cards. We apply the same pre-processing step and follow (Wodzinski and Müller 2021) to conduct the rotation prediction and affine registration, so we focus on the nonrigid registration problem. For a fair comparison, we do not use any pretrained models and apply the same training schedule for all learning-based methods. Specifically, during the training stage, we set the batch size to 1 and the number of epochs to 100. We set the regularization parameter $\lambda$ to 30 following (Wodzinski and Müller 2021). For the other hyperparameters, we set $\alpha$ to 0.2, $w$ to 0.99, and $k$ to 0.8 empirically.
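The model-continuity update of Eq. (11) and the initialization mix, with w = 0.99 and k = 0.8 as set above, reduce to parameter-wise weighted averaging. A minimal sketch, with a plain dict of floats standing in for network weights:

```python
def smooth_history(theta_hist, theta_adapted, w=0.99):
    """Eq. (11): theta'_{t+1} = w * theta'_t + (1 - w) * theta_t^{te}."""
    return {k: w * theta_hist[k] + (1 - w) * theta_adapted[k] for k in theta_hist}

def init_next_model(theta_train, theta_hist, k=0.8):
    """Initialization for the next pair: theta^{te*} = k * theta^{tr} + (1 - k) * theta'."""
    return {name: k * theta_train[name] + (1 - k) * theta_hist[name]
            for name in theta_train}
```

Setting k = 1 recovers the previous paradigm that restarts from the given trained model for every test pair.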
Method | Average rTRE (Avg / Med) | Median rTRE (Avg / Med) | Max rTRE (Avg / Med) | Robustness (Avg / Med) | Time [min] (Avg)
bUnwarpJ | 0.0472 / 0.0193 | 0.0463 / 0.0192 | 0.1035 / 0.0524 | 0.7866 / 0.9525 | 10.57
RVSS | 0.0269 / 0.0107 | 0.0278 / 0.0087 | 0.0648 / 0.0394 | 0.8455 / 1.0000 | 5.25
NiftyReg | 0.0433 / 0.0243 | 0.0434 / 0.0237 | 0.1097 / 0.0502 | 0.7624 / 0.8974 | 0.14
Elastix | 0.0411 / 0.0144 | 0.0374 / 0.0094 | 0.0898 / 0.0418 | 0.8698 / 0.9866 | 3.50
ANTs | 0.0397 / 0.0132 | 0.0391 / 0.0096 | 0.0872 / 0.0422 | 0.7998 / 0.9882 | 48.24
DROP | 0.0325 / 0.0092 | 0.0327 / 0.0057 | 0.0804 / 0.0387 | 0.8971 / 1.0000 | 3.99
HistRegU | 0.0254 / 0.0084 | 0.0257 / 0.0062 | 0.0784 / 0.0327 | 0.9534 / 0.9948 | 0.01
HistRegU + SGTTA | 0.0196 / 0.0051 | 0.0189 / 0.0037 | 0.0509 / 0.0237 | 0.9820 / 1.0000 | 0.12
HistRegC | 0.0223 / 0.0072 | 0.0214 / 0.0041 | 0.0611 / 0.0254 | 0.9647 / 1.0000 | 0.03
HistRegC + SGTTA | 0.0174 / 0.0041 | 0.0155 / 0.0026 | 0.0484 / 0.0203 | 0.9812 / 1.0000 | 0.67
HistRegP | 0.0207 / 0.0067 | 0.0223 / 0.0039 | 0.0545 / 0.0258 | 0.9748 / 1.0000 | 0.02
HistRegP + SGTTA | 0.0161 / 0.0032 | 0.0149 / 0.0022 | 0.0467 / 0.0217 | 0.9823 / 1.0000 | 0.39

Table 1: Comparison results with the traditional methods, the U-Net learning-based method (HistRegU), the Pyramid learning-based method (HistRegP), the Cascade learning-based method (HistRegC), and our SGTTA.

Results

Comparison with Baseline Methods

SGTTA is complementary to learning-based methods, so we apply our method to all three learning-based methods. We do not modify any training details about them, and tune the model on the test datasets using our method. According to the results in Table 1, SGTTA consistently boosts the performance of learning-based methods. Specifically, comparing HistRegU+SGTTA with HistRegU, all the metrics are improved, indicating the better registration performance of SGTTA. In terms of robustness, SGTTA improves both average and median robustness, which verifies that SGTTA improves the generalization ability of the learning-based method.
Compared with HistRegU, HistRegP and HistRegC achieve better registration performance due to their inherent decomposition (Hu et al. 2022b). Even though HistRegP and HistRegC have a higher starting point, SGTTA still improves them with an obvious gain. Specifically, SGTTA improves HistRegC from 0.0214 to 0.0155 and HistRegP from 0.0223 to 0.0149 in terms of AMrTRE. Meanwhile, AMxrTRE and MMxrTRE are improved obviously for both HistRegC and HistRegP through SGTTA. Though HistRegC and HistRegP have achieved a 1.0 median robustness, SGTTA still improves the average robustness from 0.9647 to 0.9812 and from 0.9748 to 0.9823, respectively, which is promising for clinical scenarios. Notably, the learning-based methods equipped with SGTTA show a large advantage in registration accuracy compared with all traditional methods. Due to the further optimization of the network parameters, the inference time of SGTTA is relatively longer. But in fact, SGTTA is faster than most traditional methods, while its performance is obviously better. Also, compared with pure learning-based methods, the increased inference time is acceptable considering the improvement in registration performance. Moreover, we conduct a statistical significance test, demonstrating the significant improvements brought by SGTTA: HistRegU+SGTTA outperforms HistRegU with p-values below 5e-4 for AArTRE and 5e-3 for MArTRE.

Style | Structure | Continuity | AArTRE | MArTRE
 | | | 0.0254 | 0.0084
✓ | | | 0.0231 | 0.0071
✓ | ✓ | | 0.0209 | 0.0059
✓ | ✓ | ✓ | 0.0196 | 0.0051

Table 2: Ablation studies of SGTTA about network components. Style is style guidance, Structure is structure guidance, and Continuity is model continuity.

Visualization Comparison

We take different images as examples to show the visualization quality of SGTTA in Fig. 4. In the image comparisons, we depict the distances between landmarks, and we observe that SGTTA results in better landmark alignment in most regions.
This finding indicates that SGTTA can further enhance the accuracy of image structure registration, thereby demonstrating its effectiveness in improving the generalization capability of learning-based methods.

Ablation Studies

In this section, we delve into the effect of our designs in SGTTA. We take HistRegU as an example, where the impacts are consistent with the other two backbones, and we use AArTRE and MArTRE as the metrics.

Effect of Components. We conduct ablation experiments to verify the effectiveness of our designs in SGTTA, including style guidance, structure guidance, and model continuity. The detailed results are shown in Table 2. All three designs improve registration accuracy. Specifically, style guidance improves AArTRE from 0.0254 to 0.0231, indicating that transferring the style representations between the training dataset and the test dataset is helpful for generalization. Structure guidance further boosts the test-time adaptation performance, which is consistent with the claims in (Ge et al. 2022). Finally, model continuity based on a smoothing strategy improves registration accuracy. Though model continuity does not give as significant an improvement as the other two, it is easy to implement and does not add much extra burden, so it is meaningful for the test-time adaptation process of registration.

Figure 4: Visualization results of our SGTTA with pure learning-based methods on the ANHIR dataset.

1st | 2nd | 3rd | 4th | AArTRE | MArTRE
✓ | | | | 0.0207 | 0.0062
✓ | ✓ | | | 0.0203 | 0.0057
✓ | ✓ | ✓ | | 0.0198 | 0.0053
✓ | ✓ | ✓ | ✓ | 0.0196 | 0.0051

Table 3: Ablation studies of SGTTA about encoded features' positions in style guidance.

Effect of Feature Position in Style Guidance.
In style guidance, we combine the style representations from the training images with features from the test images by AdaIN, and we can choose the position of the combined features. As mentioned before, each backbone has four encoded features according to the spatial scale, and we involve all four features in the style guidance by default. Here, we analyze the effect of the encoded features' positions, and the results are shown in Table 3. The results show that even with only the first encoded feature, the performance is improved, indicating the effectiveness of the style guidance. As more encoded features are introduced, the registration performance improves gradually. However, the later the feature's position, the smaller its impact on the registration performance. We think the reason is that early features usually capture low-level information such as style, while later features usually encode high-level information such as semantic content. So early features have more impact on the style guidance, thus influencing the performance.

1 | 1/2 | 1/4 | 1/8 | AArTRE | MArTRE
✓ | | | | 0.0219 | 0.0071
✓ | ✓ | | | 0.0206 | 0.0059
✓ | ✓ | ✓ | | 0.0199 | 0.0055
✓ | ✓ | ✓ | ✓ | 0.0196 | 0.0051

Table 4: Ablation studies of SGTTA about HOG descriptors' scales in structure guidance.

Effect of HOG Scale in Structure Guidance. In structure guidance, we utilize the HOG descriptors as the structure representations to constrain the fixed image and the warped moving image. In the default setting, we use multiple HOG descriptors with 4 spatial scales, i.e., {1, 1/2, 1/4, 1/8}. Here, we analyze the impact of different scales and show the results in Table 4. From the results, we can determine that multi-scale HOG is better than single-scale HOG, and the reason is that rough and fine structure representations are both important in the registration problem, as mentioned in (Ge et al. 2022).
Conclusion

In this paper, for the first time, we propose a novel test-time adaptation method for histological image registration, named SGTTA. We design two operations, style guidance and structure guidance, to solve the challenges of appearance variances and local structure changes in histological images. Furthermore, we consider the continuity of the model and propose a smoothing strategy to leverage historical models. We conduct experiments on the public histological dataset with representative backbones, such as U-Net, Pyramid, and Cascade, demonstrating the superior performance of SGTTA.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 62021001.

References

Arganda-Carreras, I.; Sorzano, C. O.; Marabini, R.; Carazo, J. M.; Ortiz-de Solorzano, C.; and Kybic, J. 2006. Consistent and elastic registration of histological sections using vector-spline regularization. In Computer Vision Approaches to Medical Image Analysis: Second International ECCV Workshop, CVAMIA 2006, Graz, Austria, May 12, 2006, Revised Papers 2, 85–95. Springer.
Avants, B. B.; Epstein, C. L.; Grossman, M.; and Gee, J. C. 2008. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis, 12(1): 26–41.
Balakrishnan, G.; Zhao, A.; Sabuncu, M. R.; Guttag, J.; and Dalca, A. V. 2018. An unsupervised learning model for deformable medical image registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9252–9260.
Balakrishnan, G.; Zhao, A.; Sabuncu, M. R.; Guttag, J.; and Dalca, A. V. 2019. VoxelMorph: a learning framework for deformable medical image registration. IEEE Transactions on Medical Imaging, 38(8): 1788–1800.
Belongie, S.; Malik, J.; and Puzicha, J. 2002. Shape matching and object recognition using shape contexts.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4): 509–522.
Borovec, J.; Kybic, J.; Arganda-Carreras, I.; Sorokin, D. V.; Bueno, G.; Khvostikov, A. V.; Bakas, S.; Eric, I.; Chang, C.; Heldmann, S.; et al. 2020. ANHIR: automatic non-rigid histological image registration challenge. IEEE Transactions on Medical Imaging, 39(10): 3042–3052.
Ceritoglu, C.; Wang, L.; Selemon, L. D.; Csernansky, J. G.; Miller, M. I.; and Ratnanather, J. T. 2010. Large deformation diffeomorphic metric mapping registration of reconstructed 3D histological section images and in vivo MR images. Frontiers in Human Neuroscience, 4: 895.
Chakravarty, M. M.; Bertrand, G.; Hodge, C. P.; Sadikot, A. F.; and Collins, D. L. 2006. The creation of a brain atlas for image guided neurosurgery using serial histological data. NeuroImage, 30(2): 359–376.
Chen, Y.; Huang, W.; Zhou, S.; Chen, Q.; and Xiong, Z. 2023. Self-supervised neuron segmentation with multi-agent reinforcement learning. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 609–617.
Dalal, N.; and Triggs, B. 2005. Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, 886–893. IEEE.
Dalca, A. V.; Balakrishnan, G.; Guttag, J.; and Sabuncu, M. R. 2018. Unsupervised learning for fast probabilistic diffeomorphic registration. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 729–738. Springer.
Du Bois d'Aische, A.; De Craene, M.; Geets, X.; Gregoire, V.; Macq, B.; and Warfield, S. K. 2005. Efficient multimodal dense field non-rigid registration: alignment of histological and section images. Medical Image Analysis, 9(6): 538–546.
Gatys, L. A.; Ecker, A. S.; and Bethge, M. 2016. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2414–2423.
Ge, L.; Wei, X.; Hao, Y.; Luo, J.; and Xu, Y. 2022. Unsupervised histological image registration using structural feature guided convolutional neural network. IEEE Transactions on Medical Imaging, 41(9): 2414–2431. Glocker, B.; Sotiras, A.; Komodakis, N.; and Paragios, N. 2011. Deformable medical image registration: setting the state of the art with discrete methods. Annual review of biomedical engineering, 13: 219–244. Hu, B.; Zhou, S.; Xiong, Z.; and Wu, F. 2020. Self-recursive Contextual Network for Unsupervised 3D Medical Image Registration. In International Workshop on Machine Learning in Medical Imaging, 60–69. Springer. Hu, B.; Zhou, S.; Xiong, Z.; and Wu, F. 2022a. CrossResolution Distillation for Efficient 3D Medical Image Registration. IEEE Transactions on Circuits and Systems for Video Technology. Hu, B.; Zhou, S.; Xiong, Z.; and Wu, F. 2022b. Recursive Decomposition Network for Deformable Image Registration. IEEE Journal of Biomedical and Health Informatics, 26(10): 5130–5141. Hu, M.; Song, T.; Gu, Y.; Luo, X.; Chen, J.; Chen, Y.; Zhang, Y.; and Zhang, S. 2021. Fully test-time adaptation for image segmentation. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part III 24, 251–260. Springer. Hu, X.; Kang, M.; Huang, W.; Scott, M. R.; Wiest, R.; and Reyes, M. 2019. Dual-stream pyramid registration network. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 382–390. Springer. Huang, X.; and Belongie, S. 2017. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE international conference on computer vision, 1501–1510. Iwasawa, Y.; and Matsuo, Y. 2021. Test-time classifier adjustment module for model-agnostic domain generalization. Advances in Neural Information Processing Systems, 34: 2427–2440. KingmaandJ, D. 2015. L. 
Ba,“ADAM: Amethodforstochasticoptimization,”. In Proc. 3rd Int. Conf. Learn. Representations, 1–15. Klein, S.; Staring, M.; Murphy, K.; Viergever, M. A.; and Pluim, J. P. 2009. Elastix: a toolbox for intensity-based medical image registration. IEEE transactions on medical imaging, 29(1): 196–205. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7684 Li, M.; Zhou, S.; Chen, C.; Zhang, Y.; Liu, D.; and Xiong, Z. 2022. Retinal Vessel Segmentation with Pixel-Wise Adaptive Filters. In 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), 1–5. IEEE. Liu, X.; Zhang, Y.; Zhou, S.; Xiong, Z.; and Sun, X. 2023. Electron Microscopy Image Registration Using Correlation Volume. In 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), 1–5. IEEE. Liu, Y.; Kothari, P.; Van Delft, B.; Bellot-Gurlet, B.; Mordan, T.; and Alahi, A. 2021. Ttt++: When does selfsupervised test-time training fail or thrive? Advances in Neural Information Processing Systems, 34: 21808–21820. Lowe, D. G. 1999. Object recognition from local scaleinvariant features. In Proceedings of the seventh IEEE international conference on computer vision, volume 2, 1150– 1157. Ieee. Miranda, G. H. B.; Barrera, J.; Soares, E. G.; and Felipe, J. C. 2012. Structural analysis of histological images to aid diagnosis of cervical cancer. In 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images, 316–323. IEEE. Mok, T. C.; and Chung, A. C. 2020. Large Deformation Diffeomorphic Image Registration with Laplacian Pyramid Networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 211–221. Springer. Rueckert, D.; Sonoda, L. I.; Hayes, C.; Hill, D. L.; Leach, M. O.; and Hawkes, D. J. 1999. Nonrigid registration using free-form deformations: application to breast MR images. IEEE transactions on medical imaging, 18(8): 712–721. Song, Y.; Treanor, D.; Bulpitt, A. J.; and Magee, D. R. 2013. 
3D reconstruction of multiple stained histology images. Journal of pathology informatics, 4(2): 7. Sun, Y.; Wang, X.; Liu, Z.; Miller, J.; Efros, A.; and Hardt, M. 2020. Test-time training with self-supervision for generalization under distribution shifts. In International conference on machine learning, 9229–9248. PMLR. Ulyanov, D.; Vedaldi, A.; and Lempitsky, V. 2017. Improved texture networks: Maximizing quality and diversity in feedforward stylization and texture synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6924–6932. Venet, L.; Pati, S.; Feldman, M. D.; Nasrallah, M. P.; Yushkevich, P.; and Bakas, S. 2021. Accurate and robust alignment of differently stained histologic images based on Greedy diffeomorphic registration. Applied Sciences, 11(4): 1892. Wang, D.; Shelhamer, E.; Liu, S.; Olshausen, B.; and Darrell, T. 2020. Tent: Fully test-time adaptation by entropy minimization. arXiv preprint arXiv:2006.10726. Wodzinski, M.; and M¨uller, H. 2021. DeepHistReg: Unsupervised deep learning registration framework for differently stained histology samples. Computer methods and programs in biomedicine, 198: 105799. Zhao, S.; Dong, Y.; Chang, E. I.; Xu, Y.; et al. 2019a. Recursive cascaded networks for unsupervised medical image registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10600–10610. Zhao, S.; Lau, T.; Luo, J.; Eric, I.; Chang, C.; and Xu, Y. 2019b. Unsupervised 3D end-to-end medical image registration with volume tweening network. IEEE journal of biomedical and health informatics, 24(5): 1394–1404. Zhou, S.; Hu, B.; Xiong, Z.; and Wu, F. 2023. Self-Distilled Hierarchical Network for Unsupervised Deformable Image Registration. IEEE Transactions on Medical Imaging. Zhou, S.; Xiong, Z.; Chen, C.; Chen, X.; Liu, D.; Zhang, Y.; Zha, Z.-J.; and Wu, F. 2019. Fast and accurate electron microscopy image registration with 3D convolution. 
In International Conference on Medical Image Computing and Computer-Assisted Intervention, 478–486. Springer. Zhu, W.; Huang, Y.; Xu, D.; Qian, Z.; Fan, W.; and Xie, X. 2021. Test-time training for deformable multi-scale image registration. In 2021 IEEE International Conference on Robotics and Automation (ICRA), 13618–13625. IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7685
Reducing Spatial Fitting Error in Distillation of Denoising Diffusion Models

Shengzhe Zhou1, Zejian Li1, Shengyuan Zhang2, Lefan Hou2, Changyuan Yang3, Guang Yang3, Zhiyuan Yang3, Lingyun Sun2
1 School of Software Technology, Zhejiang University
2 College of Computer Science and Technology, Zhejiang University
3 Alibaba Group
{zhoujj7248, zejianlee, zhangshengyuan, houlefan, sunly}@zju.edu.cn, {changyuan.yangcy, adam.yzy}@alibaba-inc.com, qingyun@taobao.com

Abstract

Denoising diffusion models have exhibited remarkable capabilities in image generation. However, generating high-quality samples requires a large number of iterations. Knowledge distillation for diffusion models is an effective way to address this limitation with a shortened sampling process, but it causes degraded generative quality. Based on our analysis with bias-variance decomposition and experimental observations, we attribute the degradation to the spatial fitting error occurring in the training of both the teacher and student models. Accordingly, we propose the Spatial Fitting-Error Reduction Distillation model (SFERD). SFERD utilizes attention guidance from the teacher model and a designed semantic gradient predictor to reduce the student's fitting error. Empirically, our proposed model facilitates high-quality sample generation in a few function evaluations. We achieve an FID of 5.31 on CIFAR-10 and 9.39 on ImageNet 64×64 with only one step, outperforming existing diffusion methods. Our study provides a new perspective on diffusion distillation by highlighting the intrinsic denoising ability of models.

Introduction

Diffusion-based (DPMs) (Ho, Jain, and Abbeel 2020; Sohl-Dickstein et al. 2015; Song and Ermon 2019) and score-based (Song et al. 2021b) generative models have recently achieved outstanding performance in synthesizing high-quality images. They have already shown comparable or even superior results to GANs (Goodfellow et al.
2020) in multiple fields such as 3D generation (Luo and Hu 2021; Zhou, Du, and Wu 2021), text-to-image generation (Rombach et al. 2022; Ruiz et al. 2023), image restoration (Lugmayr et al. 2022; Wang et al. 2022), controllable image editing (Kawar et al. 2022; Couairon et al. 2022) and graph generation (Huang et al. 2023). A major problem with diffusion models is their slow sampling speed, as they often require hundreds of iterations to achieve satisfactory generative quality. To address this issue, there are two mainstream lines of improvement: fast sampling schemes without extra training (Song, Meng, and Ermon 2020; Lu et al. 2022a; Liu et al. 2022; Kong and Ping 2021) and trained acceleration schemes (Salimans and Ho 2022; Dockhorn, Vahdat, and Kreis 2022; Zhang, Zhao, and Lin 2022). The former aims to reduce errors resulting from large-step sampling by utilizing better numerical methods for integration, whereas the latter enhances performance through additional model training and fine-tuning. With the advent of large-scale pre-training models, the latter approach has become more notable.

Figure 1: The process of the Spatial Fitting-Error Reduction Distillation model. In the first step, x̂_s is predicted by the teacher model T_η from x_t with attention guidance. The target function R further calculates the target value x_0^target based on x̂_s in the second step. The student model S_θ tries to regress the target value with the semantic gradient predictor in the final step.

Among trained schemes, diffusion models based on knowledge distillation have demonstrated considerable potential for fast sampling. An illustrative example is the Progressive Distillation model (PD) (Salimans and Ho 2022), where the student model learns the two-step inference output of the teacher model in a single step without compressing the model size.
However, these distillation-based diffusion models face degraded generation quality given a limited number of distillation sampling steps. In this paper, we investigate scalable enhancements for the diffusion distillation model to improve the quality of the images generated within a few steps or even a single step. We begin by reframing the process of diffusion distillation models and designing a multi-step training framework for distillation. We then conduct an analysis with bias-variance decomposition and preliminary experiments to identify the fitting errors of both the teacher and student models. Our results show that fitting errors have a broad impact on model performance within our framework. In particular, we observe a positive correlation between the self-attention maps of the diffusion model and the spatial fitting error in the predicted noise. Based on our observations, we propose the Spatial Fitting-Error Reduction Distillation model (SFERD). SFERD utilizes internal and external representations to reduce the fitting errors of the teacher and the student models, respectively. First, we design attention guidance to improve the denoised prediction using intrinsic information in the self-attention maps of the teacher model. We define the spatial regions having high self-attention scores as "Risky Regions". According to the observed correlation, these regions exhibit the high fitting error of the teacher model, which is further inherited by the student model. Therefore, by reducing the error in Risky Regions, we expect to enhance the quality of the denoised prediction in the student model. This method does not require additional supervised information or extra auxiliary classifiers, and it can be combined with various diffusion models as well.
Inspired by classifier-based gradient guidance (Dhariwal and Nichol 2021), we also introduce a semantic gradient predictor and reformulate the training loss for the student model. The design reduces the student model's fitting error by providing additional information for image reconstruction from a learned latent space. Empirically, SFERD efficiently reduces the fitting error of the student model, leading to superior performance compared to other distillation models on CIFAR-10 (Krizhevsky 2009) and ImageNet 64×64 (Deng et al. 2009). Notably, it achieves single-step FID scores of 9.39 and 5.31 on ImageNet 64×64 and CIFAR-10, respectively. Furthermore, fine-tuning the pre-trained diffusion model itself with our proposed method also results in improved performance. Project link: https://github.com/Sainzerjj/SFERD.

Preliminary

We briefly review the background of diffusion distillation. Detailed content is deferred to Appendix F. Diffusion models typically include a forward process and a backward denoising process. Ho et al. (Ho, Jain, and Abbeel 2020) define the forward diffusion process as q(x_t | x_{t−1}) = N(x_t; √(1−β_t) x_{t−1}, β_t I), where {β_t}_{t=1}^T is a variance schedule used to control the noise intensity. Since the process satisfies the Markov condition, the forward process can be expressed as q(x_{1:T} | x_0) = ∏_{t=1}^T q(x_t | x_{t−1}), and

q(x_t | x_0) = N(x_t; √ᾱ_t x_0, (1 − ᾱ_t) I)    (1)

where α_t = 1 − β_t and ᾱ_t = ∏_{i=1}^t α_i. The sampling procedure becomes p_θ(x_{t−1} | x_t) = N(x_{t−1}; μ_θ(x_t, t), Σ_θ(x_t, t)). The training objective minimizes a variational upper bound of the negative log-likelihood and allows for various prediction parameterizations, such as ε-prediction (Song et al. 2021a), v-prediction (Ho et al. 2022) and x_0-prediction (Salimans and Ho 2022).
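As a concrete illustration, the forward process of Eq. (1) can be sketched for a single scalar "pixel". This is a minimal sketch, assuming a linear beta schedule (a common choice, not necessarily the paper's exact setting):

```python
import math
import random

# Illustrative linear beta schedule (an assumption, not the paper's setting).
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_t = 1 - beta_t and alpha_bar_t = prod_{i<=t} alpha_i, per Eq. (1).
alpha_bars = []
prod = 1.0
for beta in betas:
    prod *= 1.0 - beta
    alpha_bars.append(prod)

def q_sample(x0, t, eps):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) I), Eq. (1),
    for a single scalar value x0 and standard normal noise eps."""
    abar = alpha_bars[t]
    return math.sqrt(abar) * x0 + math.sqrt(1.0 - abar) * eps

random.seed(0)
x0 = 0.5
x_mid = q_sample(x0, t=T // 2, eps=random.gauss(0.0, 1.0))
# As t grows, alpha_bar_t shrinks and x_t is dominated by noise.
```

Real implementations apply the same closed form to whole image tensors; only the reparameterized one-line sample matters here.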
To ensure a uniform representation, we define the training loss in distillation as:

E_{t∼U[0,1], x_0∼p(x), x_t∼q(x_t|x_0)} [ ω(λ_t) ‖ x̂_0(x_t, t; θ) − x_0 ‖²₂ ]    (2)

where x̂_0 denotes the denoised prediction, λ_t = log[ᾱ_t / (1 − ᾱ_t)] represents the signal-to-noise ratio (Kingma et al. 2021), and different choices of the weighting function ω(·) correspond to different prediction parameterizations. DDIM (Song, Meng, and Ermon 2020) breaks the limitation of Markov chains and allows for more stable and fast reverse sampling; the deterministic process becomes:

x_{t−1} = √ᾱ_{t−1} · ( x_t − √(1−ᾱ_t) ε_θ(x_t, t) ) / √ᾱ_t + √(1−ᾱ_{t−1}) ε_θ(x_t, t)    (3)

DDIM has been shown to be a first-order discrete numerical solution (Lu et al. 2022a) of the ordinary differential equation (ODE):

x_t = √(ᾱ_t / ᾱ_s) x_s + A + B, where
A = −(1/2) √ᾱ_t ε_θ(x_s, s) ∫_s^t (dλ_δ/dδ) √((1−ᾱ_δ)/ᾱ_δ) dδ, B = O((λ_t − λ_s)²)    (4)

Here λ_δ = log[ᾱ_δ / (1 − ᾱ_δ)]. If ε_θ(x_s, s) is assumed to be constant from s to t, the first term (A) of Eq. (4) is equivalent to Eq. (3) (Lu et al. 2022a). This assumption brings the high-order approximation error formalized in the second term (B), which leads to the discretization error in the sampling process.

Generalizing Diffusion Distillation Model

The Process of Diffusion Distillation Model

The distillation of diffusion models is the process that trains the student model S_θ to approximate the corresponding target distribution in a single step, bypassing the costly multi-step sampling process. Based on this comprehension, we divide the training process of the diffusion distillation model into three steps (Figure 1), formulated in Eqs. (5) and (6):

x̂_s = T_η^{(t−s)}(x_t, s, t) := S_θ^{(1)}(x_t, s, t)    (5)

x̃_0^target = R(x̂_{t_i}, s, t_i), t_i ∈ [s, t−1]    (6)

Here T_η and S_θ denote the teacher model and the student model, respectively, whose superscripts represent the corresponding numbers of sampling steps. Note that our framework can be generalized to different diffusion models.
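In scalar form, the deterministic DDIM update of Eq. (3) is a two-line computation. The sketch below is an illustration (not the authors' code): it first recovers the implied denoised prediction from the noise estimate, then re-noises it to the previous level:

```python
import math

def ddim_step(x_t, eps_pred, abar_t, abar_prev):
    """One deterministic DDIM update (Eq. (3)) for scalar values.
    eps_pred stands in for eps_theta(x_t, t); abar_t / abar_prev are
    alpha_bar at the current and previous timesteps."""
    # Denoised prediction x0_hat implied by the noise estimate.
    x0_pred = (x_t - math.sqrt(1.0 - abar_t) * eps_pred) / math.sqrt(abar_t)
    # Move to the previous noise level along the same deterministic path.
    return math.sqrt(abar_prev) * x0_pred + math.sqrt(1.0 - abar_prev) * eps_pred

# Sanity check: with the *true* noise, stepping to abar_prev = 1 recovers x0.
x0, eps, abar_t = 0.7, -0.3, 0.5
x_t = math.sqrt(abar_t) * x0 + math.sqrt(1.0 - abar_t) * eps
```

The discretization error of Eq. (4) appears exactly because, over a large step, the real ε_θ is not constant the way `eps_pred` is held constant here.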
Figure 2: (a) The training MSE loss of conditional and unconditional diffusion models based on ε-prediction. (b) The ℓ2 distances of the predicted real samples x̂_0^t between the baseline teacher model and the student models at each time step (1 → 0). Both unconditional and conditional student models are compared. (c) The ℓ2 distances from x̂_0^t given by different student models at each time step (1 → 0) to the final generated sample x̌_0^1000 given by the baseline teacher model with the same initial noise. Examples in these figures are based on models trained on CIFAR-10 (Krizhevsky 2009).

The sampling in Eq. (5) may use different numerical solvers g for the teacher or the student; we introduce the choice of g in the implementation details. In the first step, we sample x̂_{t_i} with the teacher model T_η along the same ODE generation path as x_t. This ensures the consistency of the target distribution (Song et al. 2023), which is crucial for effective distillation (Hinton, Vinyals, and Dean 2015). In the second step, a target value x̃_0^target is obtained for the student's learning with the target function R as in Eq. (6). The target value is typically the denoised prediction of x̂_s. R is defined differently under various distillation principles: it can be the previous teacher from the first step (Salimans and Ho 2022; Meng et al. 2022; Luhman and Luhman 2021), another newly proposed teacher (Dockhorn, Vahdat, and Kreis 2022) or a competent student (Song et al. 2023). T_η is a pre-trained diffusion generative model whose network parameters η are often fixed during the distillation process. Finally, in the third step, the student model S_θ is trained to fit the target value x̃_0^target as in Eq. (7) with an appropriate choice of ω(λ_t).
L_S = E_{t, x_0, x_t} [ ω(λ_t) ‖ x̂_0^student(x_t, t) − x̃_0^target ‖²₂ ]    (7)

This three-step division of the distillation process provides new insight into identifying the errors that may occur during the process and how to mitigate them. We argue that the common errors arising in the training of diffusion distillation models mainly originate from the sampling error of the teacher model in the first step and the fitting error of the student model in the third step.

Sampling error in the teacher model. The sampling error of the teacher model can be divided into the fitting error in training and the discretization error in sampling. Without loss of generality, we consider an ε-prediction teacher model, whose fitting error is reflected by the mismatch between the predicted noise ε_η and the real noise ε. This error is caused by the model's limited capacity or by divergence during training. The discretization error, on the other hand, is largely introduced during sampling when the step size is large and the high-order variation of ε_η is omitted, as in Eq. (4).

Fitting error in the student model. This error is generally caused by the student model's inability to regress the denoised prediction x̃_0^target when the loss in Eq. (7) fails to converge to zero. To further investigate the main cause, we examine the denoised predictions given by student models with different total steps and visualize the errors in Figure 2c. The model with 1024 total steps is the teacher model, while the others are students obtained through the Progressive Distillation method. The curves depict the mean square error between the intermediate denoised samples and the final sample generated by the teacher. Empirical evidence shows that the error increases as the number of sampling steps decreases. The gap widens substantially when the student models sample only 4 or 8 steps. Furthermore, this error typically occurs during the middle or final stages of sampling.
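A minimal sketch of the loss in Eq. (7), operating on flattened samples as plain lists (real implementations use batched tensors; the per-sample weight stands in for ω(λ_t)):

```python
def distill_loss(x0_student, x0_target, weights):
    """Weighted squared error of Eq. (7), averaged over the batch.
    x0_student, x0_target: lists of equal-length flattened samples;
    weights: one omega(lambda_t) value per sample."""
    total = 0.0
    for pred, target, w in zip(x0_student, x0_target, weights):
        # Squared l2 distance between the student's denoised prediction
        # and the target value produced by the target function R.
        total += w * sum((p - g) ** 2 for p, g in zip(pred, target))
    return total / len(x0_student)

# Two toy 2-pixel samples with unit weights.
loss = distill_loss([[1.0, 2.0], [0.0, 0.0]],
                    [[1.0, 0.0], [0.0, 1.0]],
                    [1.0, 1.0])
```

The loss is zero exactly when the student reproduces the target, which is the convergence condition whose failure defines the student's fitting error above.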
Previous works have mainly focused on efficiently reducing the discretization error of the teacher model (Lu et al. 2022a,b). In this study, we focus on reducing the fitting errors of both the teacher and student models to pave a new way for the distillation of diffusion models.

The Exploration of Reducing Fitting Error

Reducing the fitting error of the student. Figure 2a shows that the training loss of the conditional diffusion model is lower than that of the unconditional diffusion model. We also find that the conditional student model gives a denoised prediction x̂_0 closer to the teacher's prediction, with better quality, than the unconditional student during the sampling process (Figure 2b). These results suggest that semantic information (such as labels) reduces the fitting error of the diffusion model. We therefore embed semantic information in a learned latent space for error reduction.

Reducing the fitting error of the teacher. In this part, we present our findings that the fitting error is dominated by the prediction variation, and that the variation is spatially correlated with the self-attention map. First, we perform a bias-variance decomposition of the ε-prediction loss, which measures the fitting error (Eq. (8)). The first term (A) represents the prediction variance across the sampling procedure, while the second term (B) represents the bias. With the training data, we estimate the fitting error and prediction variance values. As visualized in Figure 3a, the fitting error is mainly determined by the variance over the entire global range, and the bias plays a minor role. Moreover, the error is also positively correlated with the variance (see Appendix C.1 for more details). Therefore, the fitting error can be largely reduced by restricting the prediction variance. This insight conceptually coincides with the consistency assumption of Consistency Models (Song et al. 2023).
E_{x_0,t,ε} ‖ ε_η(x_t, t) − ε ‖ = E_{x_0,t,ε} ‖ ε_η(x_t, t) − E_t ε_η(x_t, t) ‖ (A) + E_{x_0,ε} ‖ E_t ε_η(x_t, t) − ε ‖ (B)    (8)

Figure 3: (a) The correlation trend between ‖ε_η(x_t, t) − ε‖ (fitting error) and ‖ε_η(x_t, t) − E_t ε_η(x_t, t)‖ (variance) at t = 399, using an ε-prediction pre-trained diffusion model on ImageNet with different x_0's and ε's. (b) Visualization of attention maps and predicted noise variance on ImageNet diffusion. The left image in each group is the original image; the first row shows the annotated attention maps, the second the noise variance at different t. (c) The Pearson correlation between the attention maps at different resolutions (8, 16, 32) and the predicted noise variance during sampling (1 → 0), using an ε-prediction pre-trained diffusion model on ImageNet.

By analyzing the spatial configuration of the prediction variance, we find that high variance tends to occur in regions with high self-attention scores. Inspired by (Baranchuk et al. 2021; Tumanyan et al. 2022; Kwon, Jeong, and Uh 2022), we utilize the attention modules from the decoder part of the U-Net (Ronneberger, Fischer, and Brox 2015) in the diffusion model to extract self-attention maps A_t^l, which have been experimentally demonstrated to contain richer representations. Specifically, we perform global average pooling and upsampling on A_t^l to match the resolution of x̂_t. As illustrated in Figure 3b, the predicted noise variance and the regions with high self-attention scores exhibit similar spatial distributions throughout the sampling process. By investigating the attention maps, we suggest that main subjects are mostly generated during the middle stages of sampling, while visual details are added during the final stages. This finding is supported by the experimental observations of (Baranchuk et al. 2021; Wu et al. 2022).
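The two terms of Eq. (8) can be estimated empirically at a single spatial location. In the sketch below (an illustration assuming scalar per-pixel predictions), the variance term A measures how much the noise predictions fluctuate across timesteps around their temporal mean, and the bias term B measures how far that mean is from the true noise:

```python
def decompose_fit_error(eps_preds, eps_true):
    """Empirical estimate of the decomposition in Eq. (8) for one pixel.
    eps_preds: noise predictions eps_eta(x_t, t) collected across timesteps t;
    eps_true:  the ground-truth noise eps. Returns (variance_term, bias_term)."""
    mean_pred = sum(eps_preds) / len(eps_preds)  # E_t eps_eta(x_t, t)
    # Term A: average deviation of the predictions from their temporal mean.
    variance_term = sum(abs(p - mean_pred) for p in eps_preds) / len(eps_preds)
    # Term B: deviation of the temporal mean from the true noise.
    bias_term = abs(mean_pred - eps_true)
    return variance_term, bias_term

# A prediction sequence that fluctuates around the true noise:
var_a, bias_b = decompose_fit_error([1.0, 3.0, 2.0], eps_true=2.0)
# var_a reflects the fluctuation; bias_b is zero since the mean equals eps.
```

With predictions like these, the error is driven entirely by the variance term, matching the observation that restricting prediction variance is what reduces the fitting error.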
Consequently, a well-trained method should prioritize enhancing the features at different granularities that require emphasis at various timesteps. We conduct a Pearson correlation analysis between the attention maps and the spatial prediction variance during the sampling process (Figure 3c). As time t goes from 1 to 0, the correlation between the variance and all attention maps at different resolutions gradually increases. Notably, the attention score maps at resolution 32×32 display the strongest positive correlation. This finding implies the potential use of attention maps in error reduction.

Method

Based on the observations in the previous section, we propose Spatial Fitting-Error Reduction Distillation of denoising diffusion models (SFERD). SFERD uses attention guidance and an extrinsic semantic gradient predictor to reduce the fitting errors of the teacher and the student models, respectively. To better illustrate the improvements, we use the DDPM process, the DDIM sampler g and an ε-prediction pre-trained teacher model T_η by default in this section. Moreover, our methods can easily be extended to other samplers and prediction parameterizations, including v-prediction or x_0-prediction (Salimans and Ho 2022; Ho et al. 2022).

Teacher Model with Attention Guidance

Our approach focuses on identifying and optimizing high attention score regions ("Risky Regions"), which are strongly correlated with the teacher's fitting error in generated images. The process, visualized in Figure 4, involves three key operations: Gaussian blurring, attention injection and attention guidance sampling.

Gaussian blurring. The real image x_0 is first passed through the forward process to obtain x_t (Step 1 in Figure 4). Then, given x_t and with reparameterization, the denoised prediction x̂_0^t is predicted by the teacher model T_η at time t (Step 2 in Figure 4). Next, we utilize Gaussian blur to introduce interference for the construction of unbalanced information.
Specifically, we deliberately destroy the Risky Regions of x̂_0^t with Gaussian blur. This allows us to extract and optimize them later. Finally, we employ inverse DDIM to generate x̃_t = √ᾱ_t B(x̂_0^t) + √(1−ᾱ_t) ε_η^t (Step 3 in Figure 4), where B(·) denotes the Gaussian blur operation. The main reason for using Gaussian blur is twofold. Conceptually, Gaussian blur reduces trivial details in images while preserving the original manifold of the generated image. Empirically, further experiments demonstrate that, for most of the sampling process, the blurred denoised prediction is closer to the final generated sample than the original denoised prediction (Appendix E).

Figure 4: The illustration of attention guidance in the teacher model.

Attention injection. This operation further highlights the Risky Regions, since x̃_t after Gaussian blurring is globally blurred instead of blurred only in those regions. Let f_t^l denote the hidden feature fed to the attention blocks in layer l at time t. After N heads of self-attention blocks, the output self-attention map A_t^l can be expressed as:

A_t^l = softmax( Q_t^{l,a} (K_t^{l,a})^T / √d ), where Q_t^{l,a} = f_t^l W_Q^{l,a}, K_t^{l,a} = f_t^l W_K^{l,a}    (9)

Here a ∈ [0, N−1] denotes an attention head, and d is the output dimension of the queries Q_t^{l,a} and keys K_t^{l,a}. After that, we upsample A_t^l to the image size and extract the regions with high attention scores with Eq. (10):

ψ = I(A_t^l > k), x̃_t^attn = (1 − ψ) ⊙ x_t + ψ ⊙ x̃_t    (10)

Here ψ denotes a Boolean matrix in which a pixel is 1 if its attention value is over a given threshold k and 0 otherwise, and ⊙ denotes the Hadamard product. x̃_t^attn is identical to x_t in regions with low self-attention scores but becomes blurred in regions with high scores (Step 4 in Figure 4).

Attention guidance sampling. In this operation, the denoised prediction x̃_0^attn is calculated from the teacher model with x̃_t^attn (Steps 5 and 6 in Figure 4).
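The thresholding and blending of Eq. (10) amount to a per-pixel select between the original and the blurred image. A minimal sketch over nested lists (an illustration; real code would use tensors, and the blurred input would be produced as in Step 3):

```python
def risky_region_mask(attn_map, k):
    """psi = I(A_t^l > k): 1.0 where the (upsampled) attention score
    exceeds the threshold k, 0.0 elsewhere (Eq. (10))."""
    return [[1.0 if a > k else 0.0 for a in row] for row in attn_map]

def attention_blend(x_t, x_blur, psi):
    """x_t^attn = (1 - psi) * x_t + psi * x_blur: keep x_t outside the
    Risky Regions and substitute the blurred image inside them
    (the two Hadamard products of Eq. (10))."""
    return [[(1.0 - m) * a + m * b for a, b, m in zip(row_x, row_b, row_m)]
            for row_x, row_b, row_m in zip(x_t, x_blur, psi)]

attn = [[0.9, 0.1],
        [0.2, 0.8]]
psi = risky_region_mask(attn, k=0.5)
x_attn = attention_blend([[2.0, 2.0], [2.0, 2.0]],
                         [[0.0, 0.0], [0.0, 0.0]], psi)
```

Only the two high-attention pixels are replaced by their blurred counterparts; the rest of x_t passes through unchanged, which is what makes the subsequent denoised predictions differ exactly in the Risky Regions.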
Together with x̂_0^t, the improved denoised prediction x̃_0^teacher can be obtained; both calculations are in Eq. (11). Subsequently, the DDIM sampler conditioned on x_t and x̃_0^teacher is applied to get x̂_{t−1} (Steps 7 and 8 in Figure 4). The above three operations are repeated until x̂_{t_i} is obtained. Finally, we get x̃_0^target by Eq. (6).

x̃_0^attn = ( x̃_t^attn − √(1−ᾱ_t) ε_η(x̃_t^attn, t) ) / √ᾱ_t, x̃_0^teacher = x̂_0^t + w × (x̂_0^t − x̃_0^attn)    (11)

Here w denotes the attention guidance strength. (x̂_0^t − x̃_0^attn) contains the semantic information differences for guidance, highlighted by the high attention scores, helping the teacher improve the quality of the denoised prediction. The approach relies entirely on the diffusion model's intrinsic representations and provides a new perspective on guidance. It is unsupervised and still supports the extension of incorporating external conditions for better performance.

Figure 5: The training process of distillation with the semantic gradient predictor. Note that the original student model is trained.

Student Model with Semantic Gradient Predictor

Gradients carrying semantic information, as computed by a classifier, have been validated for their ability to compensate for the information bias and loss that arise during sampling (Dhariwal and Nichol 2021). To reduce the fitting error of the trained distillation student model, we introduce a learned semantic encoder into the student model, which provides a latent vector containing more intact reconstruction information. In detail, we integrate a semantic encoder z_sem = E_φ(x_0) and a predictor G_τ(x_t, z_sem, t) for the student to learn representations from the real image x_0. This helps in fitting x̃_0^target.
Model                                    NFE    CIFAR-10 32×32        ImageNet 64×64
                                                FID (↓)   IS (↑)      FID (↓)   IS (↑)
SFERD-PD (ours)                          1      7.54      8.61        14.85     36.55
PD (Salimans and Ho 2022)                1      14.85     7.96        19.23     22.73
SFERD-CD (ours)                          1      5.31      9.24        9.39      47.19
CD (Song et al. 2023)                    1      8.21      8.41        13.87     29.98
DFNO* (Zheng et al. 2023)                1      4.12      /           8.35      /
ViTGAN (Lee et al. 2021)                 1      6.66      9.30        /         /
DiffAugment-BigGAN (Zhao et al. 2020)    1      5.61      9.16        /         /
TransGAN (Jiang, Chang, and Wang 2021)   1      9.26      9.02        /         /
SFERD-PD (ours)                          2      6.37      8.92        7.53      45.72
PD (Salimans and Ho 2022)                2      7.64      8.85        9.71      25.37
SFERD-CD (ours)                          2      4.19      9.45        6.08      54.05
CD (Song et al. 2023)                    2      6.26      9.17        8.24      32.60
SFERD-PD (ours)                          4      3.44      9.32        5.93      55.19
PD (Salimans and Ho 2022)                4      4.28      9.25        7.22      30.72
SFERD-CD (ours)                          4      2.68      9.79        4.41      57.98
CD (Song et al. 2023)                    4      3.39      9.71        5.81      35.41
SFERD-EDM (ours)                         35     2.12      9.88        /         /
EDM (Karras et al. 2022)                 35     2.41      9.83        /         /
SFERD-EDM (ours)                         79     /         /           2.43      74.73
EDM (Karras et al. 2022)                 79     /         /           2.99      39.09
SFERD-DDIM (ours)                        1024   2.27      9.80        2.82      69.17
DDIM (Song, Meng, and Ermon 2020)        1024   2.58      9.76        3.34      37.55

Table 1: Sample quality on CIFAR-10 and ImageNet 64×64. SFERD-* denotes the implementation of the corresponding model within the SFERD framework. For example, SFERD-PD and SFERD-CD denote the models that follow the ideas of Progressive Distillation (PD) and Consistency Distillation (CD), respectively, implemented within SFERD. Both the attention guidance method and the semantic encoding-based gradient predictor are used.

Previous work (Dhariwal and Nichol 2021) introduces an extra label y in the conditional diffusion model. When we replace the class label condition y with the learned latent variable z_sem:

p_{θ,φ}(x_{t−1} | x_t, z_sem) ≈ N(x_{t−1}; μ_θ(x_t, t) + Σ_θ(x_t, t) · ∇_{x_t} log p_τ(z_sem | x_t), Σ_θ(x_t, t))    (12)

Based on Eq. (12), the training objective of the distillation model can be reformulated as Eq. (14).
L(θ, φ, τ) = E_{x_0,t,ε}[ ω(λ_t) ‖ Σ_θ(x_t, t) · ∇_{x_t} log p_τ(z_sem | x_t) − ( μ̃_t(x_t, x̃_0^target) − μ_θ(x_t, t) ) ‖² ]    (13)

= E_{x_0,t,ε}[ ω(λ_t) ‖ ((1−ᾱ_t) Σ_θ(x_t, t)) / (√ᾱ_{t−1} β_t) · ∇_{x_t} log p_τ(z_sem | x_t) − ( x̃_0^target − x̂_0^student(x_t, t) ) ‖² ]    (14)

where Σ_θ = σ_t² I = ((1−ᾱ_{t−1}) / (1−ᾱ_t)) β_t I. Eq. (14) uses the gradient ∇_{x_t} log p_τ(z_sem | x_t) carrying semantic information to compensate for the fitting error with respect to x̃_0^target. We design a predictor G_τ(x_t, z_sem, t) to approximate ∇_{x_t} log p_τ(z_sem | x_t). Figure 5 shows the training network and data flow. By default, G_τ is not trained directly to fit ∇_{x_t} log p_τ(z_sem | x_t) but jointly with the distillation model. The trained student model is frozen during the first half of the training epochs and jointly optimized with G_τ and E_φ using a low learning rate in the latter half. We find that in this case the optimized G_τ produces gradients close to ∇_{x_t} log p_τ(z_sem | x_t). We validate the crucial role of incorporating z_sem in ensuring the effectiveness of the improved method through ablation experiments. The student model can be combined with G_τ to perform few-step sampling using a formula similar to classifier guidance. In addition, we also train another DDIM sampler p_ω(z_sem^{t−1} | z_sem^t) to sample z_sem in the latent space, following the approach of (Preechakul et al. 2022). Both E_φ and G_τ are independent of the original distillation model and can be integrated into the trained distillation process, reducing training time.

Related Work

The goal of our method is to accelerate sampling while maintaining high image quality, in line with many existing works. For instance, Watson et al. (2021, 2022) exploit traditional dynamic programming methods to accelerate sampling, and Zhang et al. (2022) propose the use of an exponential integrator (DEIS) to speed up sampling. The use of knowledge
The use of knowledge distillation in diffusion models can be traced back to the DDIM-based one-step denoising model (DS) implemented in (Luhman and Luhman 2021), whose main drawback is that the full sampling trajectory of the original teacher model must be run for every training example. The Classifier-Guided Distillation (CFD) proposed in (Sun et al. 2022) uses a classifier to distill the sharpened feature distribution of the teacher into the student, with a KL-divergence training loss that lets the student focus on salient image features. Higher-Order Denoising Diffusion Solvers (GENIE) (Dockhorn, Vahdat, and Kreis 2022), on the other hand, accelerate sampling by adding to the network backbone a prediction module that receives information distilled from higher-order solvers. Progressive Distillation (PD) (Salimans and Ho 2022) and Guided Distillation (GD) (Meng et al. 2022) implement unconditional and conditional distillation of diffusion models, respectively, halving the number of sampling steps at each iteration; GD additionally extends training to the latent space and supports stochastic sampling. Consistency Distillation (CD) (Song et al. 2023) exploits the self-consistency of the ODE generation process by minimizing the difference between two noisy data points on the same ODE path to achieve few-step distillation. Our two improvement methods can be applied to the majority of the diffusion distillation models above.

Experiments

Experimental Setting
We mainly examine SFERD on two standard image generation benchmarks: CIFAR-10 and class-conditional ImageNet 64×64. We measure performance using Fréchet Inception Distance (FID) (Heusel et al. 2017) and Inception Score (IS) (Salimans et al. 2016). All results are computed from 50,000 generated samples and averaged over 3 random seeds.
All students are uniformly initialized from their corresponding teachers.

Few-Step Image Generation
Our distillation framework allows different configurations of the diffusion process (e.g., Variance Preserving or Variance Exploding (Song et al. 2021b)), the numerical solver, the total number of diffusion timesteps, and so on. For fair comparison, we implement SFERD following the ideas of Progressive Distillation (PD) (Salimans and Ho 2022) and Consistency Distillation (CD) (Song et al. 2023). Specifically, PD can be interpreted as setting the target function R in SFERD to T_η, while in CD, R is set to the student model S_{θ−} updated by an exponential moving average (EMA). In our experiments, PD is applied to the teacher network from unconditional ADM (Dhariwal and Nichol 2021) using the DDPM noise schedule and a DDIM (Euler) sampler, and CD is applied to the teacher from EDM (Karras et al. 2022) with a 2nd-order Heun sampler. We unify the metric function to the ℓ2 distance. We pretrain all teachers using the configurations specified in the original papers. The initial teacher model of PD uses 1024 sampling steps, whereas for CD it uses 18 or 40 steps. By default, the student of each distillation stage serves as the teacher for the next stage in SFERD-PD. We compare SFERD with PD and CD on CIFAR-10 and ImageNet 64×64. Results in Table 1 demonstrate that the performance of all models improves as the number of sampling steps increases. Notably, SFERD-PD and SFERD-CD achieve better performance than their baseline models, and SFERD-CD displays superior performance across all datasets and sampling steps. Our improved methods can be applied not only to mainstream diffusion distillation models but also to enhance the performance of pre-trained models directly through fine-tuning.
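For CD, the target network S_{θ−} mentioned above is maintained as an exponential moving average of the student's parameters. A minimal sketch of that update rule (illustrative names, not the authors' code):

```python
import numpy as np

def ema_update(target_params, student_params, decay=0.999):
    """EMA update maintaining a CD-style target network S_theta^-:
    theta_target <- decay * theta_target + (1 - decay) * theta_student."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(target_params, student_params)]

# After enough updates the target closely tracks a fixed student.
target, student = [np.zeros(3)], [np.ones(3)]
for _ in range(2000):
    target = ema_update(target, student, decay=0.99)
print(target[0])  # close to [1. 1. 1.]
```

With a decay close to 1, the target lags behind the student, which stabilizes the distillation objective because the regression target changes slowly between optimizer steps.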
Specifically, we apply our methods to pre-trained unconditional ADM and EDM directly, aligning the distillation steps of the student with those of the teacher, which is easily achieved by setting s = t−1. The results in Table 1 show that both ADM and EDM are improved through the integration of attention guidance and the semantic gradient predictor.

Ablation Studies
We conduct ablation experiments on the critical hyperparameters in the training of SFERD. All ablations are performed on conditional ImageNet 64×64 using SFERD-PD (without the semantic gradient predictor in the student) with 4 sampling steps unless otherwise stated.
Attention threshold. To determine the attention threshold, we evaluate scales of 0.8, 0.9, 1.0, 1.1 and 1.2. The best metrics are obtained when ψ is 1.0; a threshold that is too high or too low deteriorates performance.
Attention guidance strength. We evaluate the effect of the attention guidance strength over values from 0 to 0.5. The best FID is achieved at w = 0.3.
Gaussian blur strength. We evaluate the effect of the Gaussian blur strength σ on performance. We test strengths of 1, 3, and 5, and obtain the best FID at σ = 3.
Denoising ability. Specifically, we randomly select 500 real images x_0 from ImageNet 64×64 and perform forward diffusion on them to obtain x_t. We then compute the ℓ2 distances between x_0 and the predictions \hat{x}_0^t generated by a pre-trained 4-sampling-step CD, CD with attention guidance, and SFERD-CD, averaged over the 500 random images. The results indicate that both enhancements of SFERD clearly reduce the ℓ2 distance of the denoised prediction at each timestep. Moreover, the FIDs of CD, CD with attention guidance, and SFERD-CD are, in order, 5.81, 5.27, and 4.41.

Conclusion
In conclusion, we propose the Spatial Fitting-Error Reduction Distillation model (SFERD) for Denoising Diffusion Models.
SFERD effectively enhances performance using intrinsic and extrinsic representations, generating high-quality samples in only a few steps. The core idea behind SFERD is to reduce the fitting error between the student and the teacher in distillation. This is achieved through the proposed attention guidance for the teacher and an external semantic gradient predictor for the student, each of which can be used independently.

Acknowledgements
This paper is funded by the National Key R&D Program of China (2022YFB3303301), the National Natural Science Foundation of China (Grant No. 62006208), and the Youth Program of Humanities and Social Sciences of the Ministry of Education (No. 23YJCZH338). This paper is also supported by the Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies. The corresponding author of the paper is Zejian Li. We are grateful to Jionghao Bai and Jingran Luo for supplementing the appendix. We also thank Chenye Meng and Haoran Xu for their helpful comments and discussion, and Jiahui Zhang, Qi Liu, Ying Zhang, and Yibo Zhao for proofreading the draft.

References
Baranchuk, D.; Voynov, A.; Rubachev, I.; Khrulkov, V.; and Babenko, A. 2021. Label-Efficient Semantic Segmentation with Diffusion Models. In International Conference on Learning Representations.
Couairon, G.; Verbeek, J.; Schwenk, H.; and Cord, M. 2022. DiffEdit: Diffusion-based semantic image editing with mask guidance. arXiv preprint arXiv:2210.11427.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, 248–255.
Dhariwal, P.; and Nichol, A. 2021. Diffusion Models Beat GANs on Image Synthesis. Advances in Neural Information Processing Systems, 34: 8780–8794.
Dockhorn, T.; Vahdat, A.; and Kreis, K. 2022. GENIE: Higher-order denoising diffusion solvers.
Advances in Neural Information Processing Systems, 35: 30150–30166.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative Adversarial Networks. Communications of the ACM, 63(11): 139–144.
Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. GANs Trained by a Two-Time-Scale Update Rule Converge to a Local Nash Equilibrium. Advances in Neural Information Processing Systems, 30: 6626–6637.
Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Ho, J.; Chan, W.; Saharia, C.; Whang, J.; Gao, R.; Gritsenko, A.; Kingma, D. P.; Poole, B.; Norouzi, M.; Fleet, D. J.; et al. 2022. Imagen Video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion Probabilistic Models. Advances in Neural Information Processing Systems, 33: 6840–6851.
Huang, H.; Sun, L.; Du, B.; and Lv, W. 2023. Conditional Diffusion Based on Discrete Graph Structures for Molecular Graph Generation. In AAAI Conference on Artificial Intelligence.
Jiang, Y.; Chang, S.; and Wang, Z. 2021. TransGAN: Two pure transformers can make one strong GAN, and that can scale up. Advances in Neural Information Processing Systems, 34: 14745–14758.
Karras, T.; Aittala, M.; Aila, T.; and Laine, S. 2022. Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems.
Kawar, B.; Zada, S.; Lang, O.; Tov, O.; Chang, H.; Dekel, T.; Mosseri, I.; and Irani, M. 2022. Imagic: Text-based real image editing with diffusion models. arXiv preprint arXiv:2210.09276.
Kingma, D.; Salimans, T.; Poole, B.; and Ho, J. 2021. Variational diffusion models. Advances in Neural Information Processing Systems, 34: 21696–21707.
Kong, Z.; and Ping, W. 2021. On Fast Sampling of Diffusion Probabilistic Models.
In ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models.
Krizhevsky, A. 2009. Learning Multiple Layers of Features from Tiny Images.
Kwon, M.; Jeong, J.; and Uh, Y. 2022. Diffusion models already have a semantic latent space. arXiv preprint arXiv:2210.10960.
Lee, K.; Chang, H.; Jiang, L.; Zhang, H.; Tu, Z.; and Liu, C. 2021. ViTGAN: Training GANs with Vision Transformers. In International Conference on Learning Representations.
Liu, L.; Ren, Y.; Lin, Z.; and Zhao, Z. 2022. Pseudo Numerical Methods for Diffusion Models on Manifolds. In International Conference on Learning Representations.
Lu, C.; Zhou, Y.; Bao, F.; Chen, J.; Li, C.; and Zhu, J. 2022a. DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps. In Advances in Neural Information Processing Systems.
Lu, C.; Zhou, Y.; Bao, F.; Chen, J.; Li, C.; and Zhu, J. 2022b. DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models. arXiv preprint arXiv:2211.01095.
Lugmayr, A.; Danelljan, M.; Romero, A.; Yu, F.; Timofte, R.; and Gool, L. V. 2022. RePaint: Inpainting using Denoising Diffusion Probabilistic Models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11451–11461.
Luhman, E.; and Luhman, T. 2021. Knowledge Distillation in Iterative Generative Models for Improved Sampling Speed. arXiv preprint arXiv:2101.02388.
Luo, S.; and Hu, W. 2021. Diffusion probabilistic models for 3D point cloud generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2837–2845.
Meng, C.; Gao, R.; Kingma, D. P.; Ermon, S.; Ho, J.; and Salimans, T. 2022. On Distillation of Guided Diffusion Models. In NeurIPS 2022 Workshop on Score-Based Methods.
Preechakul, K.; Chatthee, N.; Wizadwongsa, S.; and Suwajanakorn, S. 2022. Diffusion Autoencoders: Toward a Meaningful and Decodable Representation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10619–10629.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-Resolution Image Synthesis with Latent Diffusion Models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684–10695.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Part III 18, volume 9351, 234–241. Springer.
Ruiz, N.; Li, Y.; Jampani, V.; Pritch, Y.; Rubinstein, M.; and Aberman, K. 2023. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22500–22510.
Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; and Chen, X. 2016. Improved techniques for training GANs. Advances in Neural Information Processing Systems, 29: 2226–2234.
Salimans, T.; and Ho, J. 2022. Progressive Distillation for Fast Sampling of Diffusion Models. In International Conference on Learning Representations.
Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep Unsupervised Learning Using Nonequilibrium Thermodynamics. In International Conference on Machine Learning, 2256–2265. PMLR.
Song, J.; Meng, C.; and Ermon, S. 2020. Denoising Diffusion Implicit Models. In International Conference on Learning Representations.
Song, Y.; Dhariwal, P.; Chen, M.; and Sutskever, I. 2023. Consistency Models. arXiv preprint arXiv:2303.01469.
Song, Y.; Durkan, C.; Murray, I.; and Ermon, S. 2021a. Maximum likelihood training of score-based diffusion models. Advances in Neural Information Processing Systems, 34: 1415–1428.
Song, Y.; and Ermon, S. 2019. Generative Modeling by Estimating Gradients of the Data Distribution. Advances in Neural Information Processing Systems, 32: 11895–11907.
Song, Y.; Sohl-Dickstein, J.; Kingma, D. P.; Kumar, A.; Ermon, S.; and Poole, B. 2021b. Score-Based Generative Modeling through Stochastic Differential Equations. In International Conference on Learning Representations.
Sun, W.; Chen, D.; Wang, C.; Ye, D.; Feng, Y.; and Chen, C. 2022. Accelerating Diffusion Sampling with Classifier-based Feature Distillation. arXiv preprint arXiv:2211.12039.
Tumanyan, N.; Geyer, M.; Bagon, S.; and Dekel, T. 2022. Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation. arXiv preprint arXiv:2211.12572.
Wang, S.; Saharia, C.; Montgomery, C.; Pont-Tuset, J.; Noy, S.; Pellegrini, S.; Onoe, Y.; Laszlo, S.; Fleet, D. J.; Soricut, R.; et al. 2022. Imagen Editor and EditBench: Advancing and Evaluating Text-Guided Image Inpainting. arXiv preprint arXiv:2212.06909.
Wu, Q.; Liu, Y.; Zhao, H.; Kale, A.; Bui, T. M.; Yu, T.; Lin, Z.; Zhang, Y.; and Chang, S. 2022. Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models. arXiv preprint arXiv:2212.08698.
Zhang, Z.; Zhao, Z.; and Lin, Z. 2022. Unsupervised representation learning from pre-trained diffusion probabilistic models. Advances in Neural Information Processing Systems, 35: 22117–22130.
Zhao, S.; Liu, Z.; Lin, J.; Zhu, J.-Y.; and Han, S. 2020. Differentiable augmentation for data-efficient GAN training. In Proceedings of the 34th International Conference on Neural Information Processing Systems, 7559–7570.
Zheng, H.; Nie, W.; Vahdat, A.; Azizzadenesheli, K.; and Anandkumar, A. 2023. Fast sampling of diffusion models via operator learning. In International Conference on Machine Learning, 42390–42402. PMLR.
Zhou, L.; Du, Y.; and Wu, J. 2021. 3D shape generation and completion through point-voxel diffusion. In IEEE/CVF International Conference on Computer Vision, 5826–5835.
SAMFlow: Eliminating Any Fragmentation in Optical Flow with Segment Anything Model

Shili Zhou, Ruian He, Weimin Tan*, Bo Yan*
School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University
slzhou19@fudan.edu.cn, rahe16@fudan.edu.cn, wmtan@fudan.edu.cn, byan@fudan.edu.cn

Abstract
Optical Flow Estimation aims to find the 2D dense motion field between two frames. Due to the limitation of model structures and training datasets, existing methods often rely too much on local clues and ignore the integrity of objects, resulting in fragmented motion estimation. Through theoretical analysis, we find that pre-trained large vision models are helpful in optical flow estimation, and we notice that the recently famous Segment Anything Model (SAM) demonstrates a strong ability to segment complete objects, which is suitable for solving the fragmentation problem. We thus propose a solution that embeds the frozen SAM image encoder into FlowFormer to enhance object perception. To address the challenge of in-depth utilization of SAM in non-segmentation tasks like optical flow estimation, we propose an Optical Flow Task-Specific Adaption scheme, including a Context Fusion Module to fuse the SAM encoder with the optical flow context encoder, and a Context Adaption Module to adapt the SAM features for the optical flow task with a Learned Task-Specific Embedding. Our proposed SAMFlow model reaches 0.86/2.10 clean/final EPE and 3.55/12.32 EPE/F1-all on the Sintel and KITTI-15 training sets, surpassing FlowFormer by 8.5%/9.9% and 13.2%/16.3%. Furthermore, our model achieves state-of-the-art performance on the Sintel and KITTI-15 benchmarks, ranking #1 among all two-frame methods on the Sintel clean pass.

Introduction
Optical flow is a fundamental task in computer vision, which estimates the pixel-level correspondences between frames.
As an important paradigm to exploit video temporal continuity, it has applications in many video-related downstream tasks, such as frame interpolation (Huang et al. 2022b), video inpainting (Gao et al. 2020) and action recognition (Sun et al. 2018b). With the advent of advanced neural network architectures, many powerful optical flow estimation models have been proposed (Dosovitskiy et al. 2015; Sun et al. 2018a; Teed and Deng 2020; Jiang et al. 2021a; Huang et al. 2022a).

*Corresponding author: Weimin Tan and Bo Yan. This work is supported by NSFC (Grant No.: U2001209 and 62372117) and the Natural Science Foundation of Shanghai (21ZR1406600). Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: (a) Examples of fragmentation in optical flow estimation. We observe that SAM is able to segment the whole object. Thus, we propose our SAMFlow to eliminate fragmentation. (b) Visualization of the context similarity with the query point.

Although leaps and bounds have been made, existing optical flow estimation methods are still limited by two factors: 1) Scarcity of well-labeled datasets. Since it is difficult to obtain pixel-level motion annotations in the real world, optical flow datasets are usually constructed using artificial synthesis schemes. For example, some works (Dosovitskiy et al. 2015; Sun et al. 2021) try to construct datasets from images and generate motions with simple 2D transformations, while other works (Mayer et al. 2016; Gaidon et al. 2016) generate datasets of virtual scenes with 3D rendering engines. Compared with natural scenes, these synthetic datasets have limited diversity and realism, resulting in insufficient training of existing optical flow models. 2) Lack of high-level understanding. Human perception of motion is closely linked to the understanding of objects.
In contrast, optical flow models focus only on local low-level clues, leading to incorrect "fragmentation" results. Here, fragmentation refers to erroneous fragmented optical flow predictions for the same object. Figure 1(a) gives examples of fragmentation in optical flow caused by occlusion and complex lighting/textures. Some previous studies (Sun et al. 2022; Jiang et al. 2021a) also try to solve this problem by using larger receptive fields or global motion aggregation. However, these simple structural improvements cannot fundamentally eliminate fragmentation. The recently prominent pre-trained large vision models are highly suitable for addressing the aforementioned two challenges. (i) (Shi et al. 2023) and (Dong, Cao, and Fu 2023) have shown that pre-training with data and supervision beyond optical flow can strengthen optical flow estimation, which implies that pre-trained large vision models can leverage a wide range of unlabeled image and video data to circumvent the problem of insufficient optical flow datasets. (ii) The visual representation learned through pre-training contains the high-level understanding we need. Therefore, fusion with large vision models may further enhance optical flow estimation. Among large vision models, the Segment Anything Model (SAM) (Kirillov et al. 2023) is one of the most suitable for optical flow estimation. As shown in Figure 1(a), SAM can segment entire objects under occlusion and other confusing environments, which is exactly the remedy for fragmentation in optical flow. Here, we propose using the features of SAM's image encoder, as it holds most of SAM's parameters and knowledge. However, it is challenging to effectively harness SAM features for non-segmentation tasks such as optical flow estimation due to the absence of task-specific knowledge.
As illustrated in Figure 1(b), while SAM's feature yields a superior similarity map compared to FlowFormer, it loses numerous details, posing an obstacle to optical flow estimation. Therefore, we propose an Optical Flow Task-Specific Adaption scheme to address this challenge. First, we fuse the SAM encoder with the task-specific encoder using a Context Fusion Module (CFM). Next, we introduce a Context Adaption Module (CAM) to inject more task-specific knowledge of optical flow into the fused features via Two-Way Attention (TWA) blocks and Learned Task-Specific Embedding (LTSE) tokens. With the above designs, our proposed SAMFlow achieves remarkable performance, reaching 0.86/2.10 clean/final EPE on the Sintel (Butler et al. 2012) training set and 3.55/12.32 EPE/F1-all on the KITTI-15 (Geiger et al. 2013) training set, surpassing FlowFormer by 8.5%/9.9% and 13.2%/16.3%. Furthermore, we upload our fine-tuned models to the benchmark sites of Sintel and KITTI-15, where they show significant superiority, ranking #1 among all two-frame methods on the Sintel clean pass. In summary, our contributions are as follows:
• For the first time, we investigate the feasibility of utilizing pre-trained SAM in optical flow estimation, and we thus propose SAMFlow, a novel approach aimed at enhancing the accuracy of optical flow estimation by effectively addressing issues of fragmentation.
• To prevent the task mismatch from affecting accuracy, we propose an Optical Flow Task-Specific Adaption scheme, introducing the CFM to fuse the SAM encoder with the optical flow context encoder and the CAM to further adapt for optical flow estimation, which improves the effectiveness of SAMFlow significantly.
• Our SAMFlow achieves state-of-the-art performance on both generalization and dataset-specific evaluations, surpassing FlowFormer by a large margin and ranking #1 among all two-frame methods on the clean pass of the Sintel benchmark.
Related Works

Optical Flow
Optical flow has been studied for many years as a fundamental vision task. Traditional methods such as (Lucas and Kanade 1981) and (Horn and Schunck 1981) regard optical flow estimation as an energy optimization task and use human-designed data and prior terms as optimization objectives, which cannot handle the complex motion in natural images. In recent years, benefiting from the emergence of deep learning and large-scale synthetic optical flow datasets, most high-performance optical flow estimation methods learn optical flow automatically in an end-to-end manner. Model design and data collection replace the hand-crafted data and prior terms and have become the focus of today's optical flow research. For the model, researchers successively introduced convolutional networks (FlowNet (Dosovitskiy et al. 2015)), multi-scale networks (PWC-Net (Sun et al. 2018a)), recurrent networks (RAFT (Teed and Deng 2020)), Transformers (FlowFormer (Huang et al. 2022a)) and other structures, gradually enhancing the inherent learning ability of optical flow models. Meanwhile, reducing model runtime is another research direction (Cheng et al. 2023). For data, from Chairs (Dosovitskiy et al. 2015) and Things (Mayer et al. 2016) to the later AutoFlow (Sun et al. 2021) and Spring (Mehl et al. 2023), the diversity and authenticity of synthetic datasets keep increasing, while the richness of real datasets is also slowly growing (KITTI (Geiger et al. 2013) and HD1K (Kondermann et al. 2016)). These efforts open up the possibility of increasingly powerful optical flow estimation models. However, the scarcity of data and the limitation of model design are still the core problems of optical flow.

Pre-trained Large Vision Model
Pre-trained large vision models can be discussed in terms of model architectures and pre-training methods. We start by introducing the model architectures.
In the early years, researchers used convolutional neural networks (CNNs) as the basic architecture for computer vision, proposing VGG (Simonyan and Zisserman 2014), ResNet (He et al. 2016), etc. Recently, inspired by the success of the Transformer in natural language processing, the Vision Transformer (ViT) (Dosovitskiy et al. 2020) was proposed, which has stronger representational ability and shows clear advantages on large-scale datasets.

Figure 2: The overview of our SAMFlow, which utilizes the frozen SAM image encoder to boost the object perception of the optical flow model FlowFormer. We design two modules for in-depth utilization of SAM: (a) the CFM, which fuses SAM features with the FlowFormer encoder, and (b) the CAM, which adapts the features with the Learned Task-Specific Embedding.

Next, we introduce the pre-training methods. Early models were pre-trained with labeled data from pretext tasks, such as classification on ImageNet (Krizhevsky, Sutskever, and Hinton 2012). To use large-scale unlabeled data, researchers proposed self-supervised pre-training methods, including contrastive learning (Chen et al. 2020), auto-encoding (Vincent et al. 2008), etc. A recent highlight paper (He et al. 2022) proposes the Masked Auto-Encoder (MAE), which improves the traditional auto-encoder by dropping some patches during encoding to force the model to understand the image content.

Segment Anything Model
The Segment Anything Model (SAM) (Kirillov et al. 2023) is a prompt-based segmentation model. The structure of SAM is divided into three parts: an image encoder, a prompt encoder, and a decoder. The image encoder is a variant of ViT (Dosovitskiy et al. 2020) with a large number of parameters, while the prompt encoder and the decoder are lightweight. SAM is fine-tuned from MAE with a large amount of labeled segmentation data.
This work also presents an impressive training data generation scheme: it uses manual labeling/correction and model learning/prediction as two complementary processes, which can create billions of segmentation labels at low labor cost. Large-scale labeled training data endows SAM with robust understanding and segmentation capabilities, which we find suitable for eliminating fragmentation in optical flow estimation.

Proposed Method

Theoretical Analysis
Optical flow estimation methods aim to find the mapping \zeta : (I_1, I_2) \Rightarrow F, where I_1 and I_2 are two adjacent frames from a video, and F is the 2D optical flow field. From a probabilistic point of view, an optical flow network can be expressed as:

F^* = \zeta_\theta(I_1, I_2) = \arg\max_F\, p(F \mid I_1, I_2) \quad (1)

where F^* is the estimated most likely optical flow, \zeta_\theta is the optical flow network with parameters \theta, and p(F \mid I_1, I_2) is the posterior distribution of optical flow. To facilitate a comprehensive analysis, we use Bayes' theorem to expand p(F \mid I_1, I_2):

p(F \mid I_1, I_2) = \frac{p(F)\,p(I_1 \mid F)\,p(I_2 \mid I_1, F)}{p(I_1, I_2)} = \frac{p(I_1)\,p(F \mid I_1)\,p(I_2 \mid I_1, F)}{p(I_1, I_2)} \quad (2)

To find the optimal F, we omit the unrelated terms p(I_1) and p(I_1, I_2), and take the logarithm to separate the multiplicative terms. Thus, we get another formulation of F^*, as in Formula 3, which consists of a cost query term and a context term:

F^* = \arg\max_F \left\{\underbrace{\log p(I_2 \mid I_1, F)}_{\text{cost query}} + \underbrace{\log p(F \mid I_1)}_{\text{context}}\right\} \quad (3)

The cost query term encompasses the interrelation between I_2 and I_1 under F. To realize it, optical flow models such as (Sun et al. 2018a; Teed and Deng 2020; Jiang et al. 2021a) construct 4D cost volumes and make cost queries with flow guidance. The context term provides an alternative information source for optical flow estimation, requiring the model to comprehend the image context more deeply. Earlier approaches like (Teed and Deng 2020; Jiang et al. 2021a; Sun et al.
2022) overlook this aspect, with the extracted contextual features being constrained to local cues. In contrast, our endeavor involves integrating pre-trained large vision models to enhance higher-level understanding. However, not all pre-trained large vision models prove apt for optical flow estimation, as certain models exclusively capture global semantics at the cost of losing spatial detail features, resulting in limited contributions to optical flow. Empirically, SAM is a suitable candidate given its capability to generate pixel-level outputs akin to optical flow, as shown in Figure 1.

Overview
As shown in Figure 2, we redesign the context feature extraction process of the backbone model FlowFormer by utilizing the image encoder of SAM, whose powerful object perception addresses the fragmentation of optical flow estimation. We call the proposed new model SAMFlow. Moreover, as SAM does not acquire task-specific prior knowledge related to optical flow, we design an Optical Flow Task-Specific Adaption scheme, which includes a Context Fusion Module and a Context Adaption Module. In the following two subsections, we first introduce some minor modifications that unlock the resolution requirement of the SAM encoder; then, we introduce the CFM and CAM in detail.

Modifications for Resolution
The image encoder of SAM is a ViT that only accepts a fixed input resolution of 1024 × 1024. Considering the memory and time cost, this resolution cannot be used in the optical flow training framework. Therefore, we slightly modify the SAM encoder to unlock the resolution limitation. The details are provided in our supplementary material.

Optical Flow Task-Specific Adaption
Context Fusion Module: Utilizing pre-trained large vision models for dissimilar tasks faces the challenge of knowledge mismatch. For example, local details are essential clues for optical flow, while they are dropped for understanding tasks.
Thus, we propose the CFM to combine the high-level understanding of SAM and the low-level clues for optical flow by using the SAM encoder and the FlowFormer encoder simultaneously. As shown in Figure 2(a), we first concatenate the SAM and FlowFormer features. Subsequently, we mix them with two residual convolutional blocks. In the former block, the features are processed by two branches: the main branch contains two 3×3 convolutional layers, which fuse the features and reduce channels, while the other branch uses a depth-wise convolution directly to reduce the number of channels, keeping it consistent with the main branch. We add the results of the two branches as the output of the block. The latter residual block has almost the same structure, except that the depth-wise convolution is unnecessary since there is no difference in channel numbers between its input and output. Overall, this module can be represented by Formulas 4–7:

\Phi_S = E_S(I), \quad \Phi_F = E_F(I) \quad (4)

\Phi_\parallel = \Phi_S \parallel \Phi_F \quad (5)

\bar{\Phi}_C = \mathrm{Conv}_2(\mathrm{Conv}_1(\Phi_\parallel)) + \Delta(\Phi_\parallel) \quad (6)

\Phi_C = \mathrm{Conv}_4(\mathrm{Conv}_3(\bar{\Phi}_C)) + \bar{\Phi}_C \quad (7)

where E_S and E_F are the SAM encoder and the FlowFormer encoder, \Phi_S and \Phi_F are the extracted features, \parallel is the concatenation operator, \mathrm{Conv}_k corresponds to the k-th convolution layer, and \Delta is the depth-wise convolution. \Phi_\parallel and \bar{\Phi}_C are intermediate variables, and \Phi_C is the output of the CFM. The normalization and activation layers are omitted for brevity.

Figure 3: Our Context Adaption Module to adapt SAM features for optical flow with the Learned Task-Specific Embedding. For the sake of brevity, only one Two-Way Attention block is shown. PE is the positional embedding.

Context Adaption Module: To better utilize the task-specific knowledge to accomplish task adaptation for optical flow, we propose the Context Adaption Module, as shown in Figures 2(b) and 3. Inspired by Perceiver IO (Jaegle et al.
2021) and the mask decoder of SAM, we make the following design in the Context Adaption Module: we use Learned Task-Specific Embedding (LTSE) tokens to store task-specific priors of optical flow, and use Two-Way Attention (TWA) blocks to inject those priors into the context feature for adaptation. The LTSE is implemented as a set of learnable offsets of shape $K \times D$, which are automatically optimized during training. We empirically set K to 3 and D to 256. Meanwhile, each TWA block contains four steps:

1) Embedding Reorganize: as shown in Formula 8, a self-attention layer is used to reorganize the embedding of the optical flow estimation task, $\Omega_T$, which is the LTSE for the first TWA block:

$\bar{\Omega}_T = \Omega_T + \mathrm{Att}_1(\Omega_T, \Omega_T, \Omega_T + \mathrm{PE})$  (8)

where PE is the positional embedding and $\mathrm{Att}_1$ is the first attention layer, which takes the query, key, and value in that order. We omit the normalization and activation layers here for brevity and do not expand the attention layer in detail. $\bar{\Omega}_T$ is the intermediate result of this step.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7698

Table 1: Generalization performance evaluation on Sintel and KITTI-15 train sets (C+T training stage).

Method         Sintel clean   Sintel final   KITTI-15 EPE   KITTI-15 F1
HD3            3.84           8.77           13.17          24.0
LiteFlowNet    2.48           4.04           10.39          28.5
PWC-Net        2.55           3.93           10.35          33.7
LiteFlowNet2   2.24           3.78           8.97           25.9
S-Flow         1.30           2.59           4.60           15.9
RAFT           1.43           2.71           5.04           17.4
FM-RAFT        1.29           2.95           6.80           19.3
GMA            1.30           2.74           4.69           17.1
GMFlow         1.08           2.48           -              -
GMFlowNet      1.14           2.71           4.24           15.4
CRAFT          1.27           2.79           4.88           17.5
SKFlow         1.22           2.46           4.47           15.5
FlowFormer     0.94           2.33           4.09           14.72
FlowFormer++   0.90           2.30           3.93           14.13
Ours           0.87           2.11           3.44           12.28

2) Context-based Embedding Adaption: as shown in Formula 9, we use a cross-attention layer to adapt the embedding with the context feature to better handle the input cases:

$\hat{\Omega}_T = \bar{\Omega}_T + \mathrm{Att}_2(\bar{\Omega}_T, \Phi_C, \Phi_C + \mathrm{PE})$  (9)

where $\hat{\Omega}_T$ is the adaptation result of this step.
3) Embedding Update: as shown in Formula 10, we use a Multi-Layer Perceptron (MLP) to update the query:

$\Omega_U = \hat{\Omega}_T + \mathrm{MLP}(\hat{\Omega}_T)$  (10)

where $\Omega_U$ is the updated embedding.

4) Feature Adaption: we use the updated embedding to adapt the context feature for the optical flow task with a cross-attention layer, as shown in Formula 11:

$\Phi_C^A = \Phi_C + \mathrm{Att}_3(\Phi_C, \Omega_U, \Omega_U + \mathrm{PE})$  (11)

where $\Phi_C^A$ is the adapted context feature under the guidance of the optical flow task-specific queries. Finally, we use an addition operation to blend the results of the two modules.

Experiment Settings

Training Settings: We follow the setup of previous work (Huang et al. 2022a) and divide the training into two stages: the C+T stage and the C+T+S+K+H stage. To speed up training, we skip the stage of training on the Chairs dataset by using the FlowFormer-things checkpoint as initialization, and the SAM encoder is kept frozen during training.

Test Settings: For testing, we adopt the tiling strategy (Jaegle et al. 2021) to bridge the resolution gap between training and testing data.

Table 2: Benchmark evaluation on Sintel and KITTI-15 test sets (C+T+S+K+H training stage). The models with * adopt the warm-start strategy proposed in (Teed and Deng 2020).

Method         Sintel clean   Sintel final   KITTI-15 F1-all
PWC-Net+       3.45           4.60           7.72
VCN            2.81           4.40           6.30
MaskFlowNet    2.52           4.17           6.10
S-Flow         1.50           2.67           4.64
RAFT           1.94           3.18           5.10
RAFT*          1.61           2.86           5.10
FM-RAFT        1.72           3.60           6.17
GMA            1.40           2.88           5.15
GMA*           1.39           2.47           5.15
GMFlow         1.74           2.90           9.32
GMFlowNet      1.39           2.65           4.79
CRAFT          1.45           2.42           4.79
SKFlow*        1.28           2.23           4.84
FlowFormer     1.16           2.09           4.68
FlowFormer++   1.07           1.94           4.52
Ours           1.00           2.08           4.49

Table 3: Evaluation in occluded area of Sintel train and test sets.

Method         Train clean   Train final   Test clean   Test final
RAFT           5.36          7.09          9.65         14.68
GMA            4.25          6.22          7.96         12.50
SKFlow         3.44          4.52          7.25         11.42
FlowFormer     2.76          3.60          7.16         11.30
FlowFormer++   2.54          3.41          6.64         10.63
Ours           2.24          2.99          5.97         10.60
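To make the four TWA steps concrete, the block can be sketched at toy scale in numpy. This is a minimal illustration under stated assumptions: single-head, un-projected scaled dot-product attention, normalization and activation layers omitted (as the text does), and all shapes and weights shrunk to arbitrary small values rather than the paper's K = 3, D = 256 setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def att(q, k, v):
    """Single-head scaled dot-product attention; projection weights omitted."""
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

# Toy sizes: the paper uses K = 3 tokens of dim D = 256; we shrink D.
K, D, N = 3, 8, 16                 # N = h*w flattened context positions
ltse  = rng.normal(size=(K, D))    # Learned Task-Specific Embedding
phi_c = rng.normal(size=(N, D))    # fused context feature from the CFM
pe_t  = rng.normal(size=(K, D))    # positional embedding for tokens
pe_c  = rng.normal(size=(N, D))    # positional embedding for context

W1, W2 = rng.normal(size=(D, D)), rng.normal(size=(D, D))  # toy MLP weights

# 1) Embedding reorganize (Formula 8)
omega_bar = ltse + att(ltse, ltse, ltse + pe_t)
# 2) Context-based embedding adaption (Formula 9)
omega_hat = omega_bar + att(omega_bar, phi_c, phi_c + pe_c)
# 3) Embedding update with an MLP (Formula 10)
omega_u = omega_hat + np.maximum(omega_hat @ W1, 0) @ W2
# 4) Feature adaption (Formula 11)
phi_c_a = phi_c + att(phi_c, omega_u, omega_u + pe_t)

print(phi_c_a.shape)  # context feature keeps its shape: (16, 8)
```

Each step keeps the residual form of Formulas 8-11: the tokens first reorganize among themselves, then query the context, pass through an MLP, and finally the context attends back to the updated tokens.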
Quantitative Comparison

We first use the model trained in the C+T stage to evaluate the generalization performance on the training sets of Sintel and KITTI. Then, we uploaded the results of the C+T+S+K+H-stage model and the K-stage model to the Sintel and KITTI benchmark websites to compare the dataset-specific accuracy with the SOTA methods, including HD3 (Yin, Darrell, and Yu 2019), LiteFlowNet (Hui, Tang, and Loy 2018), PWC-Net (Sun et al. 2018a), PWC-Net+ (Sun et al. 2019), LiteFlowNet2 (Hui, Tang, and Loy 2020), S-Flow (Zhang et al. 2021), RAFT (Teed and Deng 2020), FM-RAFT (Jiang et al. 2021b), GMA (Jiang et al. 2021a), GMFlow (Xu et al. 2022), GMFlowNet (Zhao et al. 2022), CRAFT (Sui et al. 2022), SKFlow (Sun et al. 2022), FlowFormer (Huang et al. 2022a), and FlowFormer++ (Shi et al. 2023).

Generalization Performance: As shown in Table 1, for the C+T stage, our model achieves the best performance on all metrics on the training sets of the Sintel and KITTI-15 datasets. The EPEs of our SAMFlow on the Sintel clean and final passes reach 0.86 and 2.10. SAMFlow also achieves 3.55 EPE and 12.32 F1 on the KITTI-15 dataset. It is worth noting that FlowFormer uses two different model checkpoints with different training patch sizes to obtain better performance on Sintel and KITTI. In contrast, our method uses the same checkpoint when evaluating both datasets. Nevertheless, our SAMFlow still easily surpasses the performance of FlowFormer, reducing the Sintel clean/final EPE and the KITTI-15 EPE/F1 by 8.5%/9.9% and 13.2%/16.3%, respectively.

Figure 4: An example of the fragmentation attack, where our SAMFlow shows robustness over FlowFormer.

Figure 5: The average EPE of the Sintel clean and final passes under the fragmentation attack with different masked ratios.

Comparison on Benchmarks: Table 2 also proves the dataset-specific performance of our SAMFlow.
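For reference, the two metrics reported in these tables can be computed as below. This is a sketch using the standard definitions: EPE is the mean Euclidean distance between predicted and ground-truth flow vectors, and KITTI's F1-all counts a pixel as an outlier when its error exceeds both 3 px and 5% of the ground-truth flow magnitude.

```python
import numpy as np

def epe(pred, gt):
    """Average end-point error: mean L2 distance between flow vectors."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def f1_all(pred, gt):
    """KITTI outlier rate: pixels whose error exceeds 3 px AND 5% of the
    ground-truth magnitude, reported as a percentage."""
    err = np.linalg.norm(pred - gt, axis=-1)
    mag = np.linalg.norm(gt, axis=-1)
    outlier = (err > 3.0) & (err > 0.05 * mag)
    return 100.0 * outlier.mean()

gt = np.zeros((4, 4, 2)); gt[..., 0] = 10.0   # uniform 10-px horizontal flow
pred = gt.copy();         pred[0, 0] += 4.0   # one pixel off by ~5.7 px
print(round(epe(pred, gt), 3), f1_all(pred, gt))  # 0.354 6.25
```

With one of 16 pixels misestimated by more than 3 px, the outlier rate is 1/16 = 6.25%, matching the style of the F1 columns above.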
On the Sintel test set, our method achieves 1.00 clean EPE and 2.08 final EPE. Meanwhile, on the KITTI-15 test set, SAMFlow achieves 4.49 F1-all. Compared with FlowFormer, our method achieves an all-around improvement and also beats the new SOTA method FlowFormer++ on the Sintel clean pass and KITTI-15. This demonstrates that our method brings significant accuracy improvements for optical flow estimation. The results can also be found on the Sintel and KITTI-15 benchmark websites, where our SAMFlow ranks #1 among all two-frame methods on the Sintel clean pass.

Evaluation in Occluded Area: We compare the methods in occluded areas, one of the significant sources of fragmentation, on the Sintel train set and the Sintel benchmark (test), as shown in Table 3. Compared with FlowFormer, our method improves by 18.84%/17.94% and 16.62%/6.19% on the Sintel train and test sets, respectively. At the same time, it surpasses FlowFormer++ on the Sintel benchmark and achieves the best results.

Fragmentation Attack: To further demonstrate the ability of our method to eliminate fragmentation, we design the fragmentation attack, which splits the images into discrete parts using a grid-style mask, as shown in Figure 4. By controlling the thickness and density of the mask grid, we can mask out the images of the Sintel dataset at different ratios, creating different degrees of fragmentation. As shown in Figure 5, we compare the robustness of GMFlowNet, SKFlow, FlowFormer, and our model under the fragmentation attack with 0%, 20%, 30%, and 40% masked ratios. Attacks greater than 40% are meaningless because too much information has been lost, so we ignore those cases. It can be observed that SKFlow is greatly affected by fragmentation attacks: its EPE increases sharply when a 20% mask is added. GMFlowNet and FlowFormer are also trapped in the isolated local clues caused by fragmentation, showing a noticeable performance hit.
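A grid-style fragmentation mask of this kind is simple to generate. The `cell` and `thickness` parameters below are illustrative knobs of our own (the paper does not state its exact grid geometry); varying them sweeps the masked ratio.

```python
import numpy as np

def grid_mask(h, w, cell=32, thickness=4):
    """Build a grid-style fragmentation mask: 1 keeps a pixel, 0 masks it.
    Grid lines of the given thickness are drawn every `cell` pixels."""
    mask = np.ones((h, w), dtype=np.float32)
    for y in range(0, h, cell):
        mask[y:y + thickness, :] = 0.0
    for x in range(0, w, cell):
        mask[:, x:x + thickness] = 0.0
    return mask

m = grid_mask(128, 128, cell=32, thickness=4)
ratio = 1.0 - m.mean()               # fraction of pixels masked out
img = np.random.rand(128, 128, 3)
attacked = img * m[..., None]        # fragmented image fed to the model
print(round(float(ratio), 3))        # 0.234
```

With these toy settings the mask removes about 23% of the pixels; thicker or denser grid lines reach the 30% and 40% settings used in Figure 5.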
With the object perception of the SAM encoder, our method exhibits strong robustness by finding the relations between the image content in different grids, achieving better results than FlowFormer.

Visualization

Figure 6 shows two examples from Sintel and KITTI-15 for qualitative comparison, corresponding to fragmentation caused by occlusion and by complex lighting/textures, respectively. We visualize the optical flow fields by mapping them to the color space, and the context features by computing the feature similarity between all pixels and the chosen query points. We find that the FlowFormer context similarity is unordered and cannot guarantee the integrity of moving objects, resulting in the missing leg of the girl in the first example and the holes in the car in the second example. With its perception of objects, our SAMFlow produces better context features with clearly related objects and boundaries, thus greatly enhancing the accuracy of the optical flow.

Ablation Study

We conduct a series of ablation studies to validate our SAMFlow, and the results are shown in Table 4.

Encoders and Modules: We compare different context feature settings to prove the effectiveness of our designs. The baseline model is FlowFormer, which only has the optical flow task-specific encoder. We first try using the SAM encoder instead of the FlowFormer encoder and find that it brings some performance improvements on Sintel. However, the effect on authentic images (KITTI-15) is limited due to the lack of priors for optical flow estimation. Subsequently, we add our CFM, which fuses the FlowFormer and SAM features with residual convolutional blocks, significantly reducing the errors on the KITTI-15 dataset. We further add our CAM to inject more task-specific knowledge into the context feature, which boosts the optical flow accuracy and achieves the best performance on both datasets. A CAM-only model was also added to the experiments to illustrate that both modules are necessary.
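The context-similarity maps from the Visualization section can be reproduced with a few lines, assuming that "feature similarity" means cosine similarity between the feature at a chosen query point and the features at all spatial positions (our reading; the paper does not specify the metric).

```python
import numpy as np

def similarity_map(feat, qy, qx):
    """Cosine similarity between the feature at query point (qy, qx)
    and the features at all spatial positions."""
    h, w, c = feat.shape
    f = feat / (np.linalg.norm(feat, axis=-1, keepdims=True) + 1e-8)
    q = f[qy, qx]                      # unit query vector, shape (c,)
    return f.reshape(h * w, c) @ q     # flatten, dot with the query

feat = np.random.rand(16, 16, 8).astype(np.float32)   # stand-in context feature
sim = similarity_map(feat, 4, 5).reshape(16, 16)
print(sim.shape, round(float(sim[4, 5]), 3))  # self-similarity is 1.0
```

Rendering `sim` as a heatmap gives the kind of query-point similarity visualization shown in Figure 6.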
SAM Model Scale: We try SAM image encoders of different scales, including SAM-H, SAM-B, and a tiny version, MobileSAM (Zhang et al. 2023). The baseline model is also listed, named w/o SAM. All our models outperform the baseline (FlowFormer), which once again proves the effectiveness of our proposed method. Moreover, we find that larger encoders show better results in general. However, there is an exception on the final pass of the Sintel dataset, where the MobileSAM encoder performs better than the SAM-B encoder. This may be due to the different architectures of the MobileSAM and SAM-B encoders, which cause them to behave differently in some specific scenes.

Figure 6: Two examples from Sintel and KITTI-15 for qualitative comparison, corresponding to fragmentation caused by occlusion and complex lighting/textures, respectively.

Table 4: Ablation study of encoder type, modules, and the scale of the SAM encoder. We bold the best value in each group.

Method           Sintel clean   Sintel final   KITTI-15 EPE   KITTI-15 F1
FlowFormer Enc.  0.94           2.33           4.09           14.72
SAM Enc.         0.89           2.17           4.11           14.37
CFM              0.89           2.11           3.83           13.43
CAM              0.90           2.10           3.81           13.23
CFM + CAM        0.87           2.11           3.44           12.28
w/o SAM          0.94           2.33           4.09           14.72
MobileSAM        0.88           2.19           3.78           13.22
SAM-B            0.88           2.26           3.57           12.45
SAM-H            0.87           2.11           3.44           12.28
ViT              1.08           2.38           4.01           13.47
MAE              1.01           2.35           3.91           13.56
DINO             0.94           2.26           4.07           13.76

Other Pre-trained Models: Besides SAM, we also try ViT (Dosovitskiy et al. 2020), MAE (He et al. 2022), and DINO (Zhang et al. 2022). However, as shown in Table 4, they do not work well. The reasons are two-fold: on the one hand, their pre-training tasks do not require good representations of spatial content; on the other hand, the pre-trained ViT and MAE suffer from very low input resolutions (224×224 or 384×384), making them poorly suited for larger inputs.

Runtime Analysis

There might be doubts about the computational cost of utilizing the SAM encoder.
To analyze it, we compare the performance and runtime of our three models of different scales with FlowFormer and FlowFormer++, and the results are presented in Figure 7.

Figure 7: Runtime and accuracy comparison between FlowFormer, FlowFormer++, and our models with different SAM encoders, including SAM-B, SAM-H, and MobileSAM (MSAM). The x-axis is the average time of 100 runs on 384 × 1024 inputs, and the y-axis is the F1 score on KITTI.

It can be found that our method can balance performance and runtime requirements by controlling the scale of the SAM encoder. Compared with FlowFormer, our SAMFlow w/ MSAM (MobileSAM) only slightly increases the runtime but shows a considerable drop in F1 error. Meanwhile, all three of our models are superior to FlowFormer++ in both speed and performance, proving the practical significance of our method.

Conclusion

This paper focuses on the challenging fragmentation issue in optical flow estimation. We first give a theoretical analysis of applying the SAM feature to optical flow estimation. Thus, we propose SAMFlow, which incorporates SAM into the optical flow estimation network. Next, to address the mismatched task-specific knowledge between SAM and optical flow estimation, we introduce an Optical Flow Task-Specific Adaptation scheme, including the CFM and CAM. In experiments, we demonstrate the effectiveness of SAMFlow for fragmentation elimination and its superiority in terms of optical flow estimation accuracy: it achieves state-of-the-art performance and ranks #1 among all two-frame methods on the Sintel clean pass.

References

Butler, D. J.; Wulff, J.; Stanley, G. B.; and Black, M. J. 2012. A naturalistic open source movie for optical flow evaluation. In Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part VI 12, 611–625. Springer. Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G.
2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning, 1597–1607. PMLR. Cheng, R.; He, R.; Jiang, X.; Zhou, S.; Tan, W.; and Yan, B. 2023. Context-Aware Iteration Policy Network for Efficient Optical Flow Estimation. arXiv:2312.07180. Dong, Q.; Cao, C.; and Fu, Y. 2023. Rethinking Optical Flow from Geometric Matching Consistent Perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1337–1347. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; and Brox, T. 2015. Flownet: Learning optical flow with convolutional networks. In Proceedings of the IEEE international conference on computer vision, 2758–2766. Gaidon, A.; Wang, Q.; Cabon, Y.; and Vig, E. 2016. Virtual Worlds as Proxy for Multi-Object Tracking Analysis. In CVPR. Gao, C.; Saraf, A.; Huang, J.-B.; and Kopf, J. 2020. Flowedge guided video completion. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23– 28, 2020, Proceedings, Part XII 16, 713–729. Springer. Geiger, A.; Lenz, P.; Stiller, C.; and Urtasun, R. 2013. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11): 1231–1237. He, K.; Chen, X.; Xie, S.; Li, Y.; Doll´ar, P.; and Girshick, R. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 16000–16009. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Horn, B. K.; and Schunck, B. G. 1981. 
Determining optical flow. Artificial intelligence, 17(1-3): 185–203. Huang, Z.; Shi, X.; Zhang, C.; Wang, Q.; Cheung, K. C.; Qin, H.; Dai, J.; and Li, H. 2022a. Flowformer: A transformer architecture for optical flow. In European Conference on Computer Vision, 668–685. Springer. Huang, Z.; Zhang, T.; Heng, W.; Shi, B.; and Zhou, S. 2022b. Real-time intermediate flow estimation for video frame interpolation. In European Conference on Computer Vision, 624–642. Springer. Hui, T.-W.; Tang, X.; and Loy, C. C. 2018. Liteflownet: A lightweight convolutional neural network for optical flow estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 8981–8989. Hui, T.-W.; Tang, X.; and Loy, C. C. 2020. A lightweight optical flow cnn—revisiting data fidelity and regularization. IEEE transactions on pattern analysis and machine intelligence, 43(8): 2555–2569. Jaegle, A.; Borgeaud, S.; Alayrac, J.-B.; Doersch, C.; Ionescu, C.; Ding, D.; Koppula, S.; Zoran, D.; Brock, A.; Shelhamer, E.; et al. 2021. Perceiver IO: A General Architecture for Structured Inputs & Outputs. In International Conference on Learning Representations. Jiang, S.; Campbell, D.; Lu, Y.; Li, H.; and Hartley, R. 2021a. Learning to estimate hidden motions with global motion aggregation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9772–9781. Jiang, S.; Lu, Y.; Li, H.; and Hartley, R. 2021b. Learning optical flow from a few matches. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 16592–16600. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.-Y.; et al. 2023. Segment anything. arXiv preprint arXiv:2304.02643. Kondermann, D.; Nair, R.; Honauer, K.; Krispin, K.; Andrulis, J.; Brock, A.; Gussefeld, B.; Rahimimoghaddam, M.; Hofmann, S.; Brenner, C.; et al. 2016. 
The hci benchmark suite: Stereo and flow ground truth with uncertainties for urban autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 19–28. Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25. Lucas, B. D.; and Kanade, T. 1981. An iterative image registration technique with an application to stereo vision. In IJCAI'81: 7th international joint conference on Artificial intelligence, volume 2, 674–679. Mayer, N.; Ilg, E.; Hausser, P.; Fischer, P.; Cremers, D.; Dosovitskiy, A.; and Brox, T. 2016. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4040–4048. Mehl, L.; Schmalfuss, J.; Jahedi, A.; Nalivayko, Y.; and Bruhn, A. 2023. Spring: A High-Resolution High-Detail Dataset and Benchmark for Scene Flow, Optical Flow and Stereo. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4981–4991. Shi, X.; Huang, Z.; Li, D.; Zhang, M.; Cheung, K. C.; See, S.; Qin, H.; Dai, J.; and Li, H. 2023. Flowformer++: Masked cost volume autoencoding for pretraining optical flow estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1599–1610. Simonyan, K.; and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Sui, X.; Li, S.; Geng, X.; Wu, Y.; Xu, X.; Liu, Y.; Goh, R.; and Zhu, H. 2022. Craft: Cross-attentional flow transformer for robust optical flow. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, 17602–17611. Sun, D.; Vlasic, D.; Herrmann, C.; Jampani, V.; Krainin, M.; Chang, H.; Zabih, R.; Freeman, W.
T.; and Liu, C. 2021. Autoflow: Learning a better training set for optical flow. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10093–10102. Sun, D.; Yang, X.; Liu, M.-Y.; and Kautz, J. 2018a. Pwcnet: Cnns for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE conference on computer vision and pattern recognition, 8934–8943. Sun, D.; Yang, X.; Liu, M.-Y.; and Kautz, J. 2019. Models matter, so does training: An empirical study of cnns for optical flow estimation. IEEE transactions on pattern analysis and machine intelligence, 42(6): 1408–1423. Sun, S.; Chen, Y.; Zhu, Y.; Guo, G.; and Li, G. 2022. Skflow: Learning optical flow with super kernels. Advances in Neural Information Processing Systems, 35: 11313–11326. Sun, S.; Kuang, Z.; Sheng, L.; Ouyang, W.; and Zhang, W. 2018b. Optical flow guided feature: A fast and robust motion representation for video action recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1390–1399. Teed, Z.; and Deng, J. 2020. Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23– 28, 2020, Proceedings, Part II 16, 402–419. Springer. Vincent, P.; Larochelle, H.; Bengio, Y.; and Manzagol, P.-A. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, 1096–1103. Xu, H.; Zhang, J.; Cai, J.; Rezatofighi, H.; and Tao, D. 2022. Gmflow: Learning optical flow via global matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8121–8130. Yin, Z.; Darrell, T.; and Yu, F. 2019. Hierarchical discrete distribution decomposition for match density estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 6044–6053. Zhang, C.; Han, D.; Qiao, Y.; Kim, J. U.; Bae, S.-H.; Lee, S.; and Hong, C. S. 
2023. Faster Segment Anything: Towards Lightweight SAM for Mobile Applications. arXiv preprint arXiv:2306.14289. Zhang, F.; Woodford, O. J.; Prisacariu, V. A.; and Torr, P. H. 2021. Separable flow: Learning motion cost volumes for optical flow estimation. In Proceedings of the IEEE/CVF international conference on computer vision, 10807–10817. Zhang, H.; Li, F.; Liu, S.; Zhang, L.; Su, H.; Zhu, J.; Ni, L.; and Shum, H.-Y. 2022. DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection. In The Eleventh International Conference on Learning Representations. Zhao, S.; Zhao, L.; Zhang, Z.; Zhou, E.; and Metaxas, D. 2022. Global matching with overlapping attention for optical flow estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17592–17601.
Efficient Lightweight Image Denoising with Triple Attention Transformer

Yubo Zhou1*, Jin Lin1*, Fangchen Ye1, Yanyun Qu1†, Yuan Xie2†
1School of Informatics, Xiamen University, Fujian, China
2School of Computer Science and Technology, East China Normal University, Shanghai, China
ybzhou@stu.xmu.edu.cn, yxie@cs.ecnu.edu.cn, yyqu@xmu.edu.cn

Abstract

Transformers have shown outstanding performance on image denoising, but the existing Transformer methods for image denoising have large model sizes and high computational complexity, which is unfriendly to resource-constrained devices. In this paper, we propose a Lightweight Image Denoising Transformer method (LIDFormer) based on Triple Multi-Dconv Head Transposed Attention (TMDTA) to boost computational efficiency. LIDFormer first applies the Discrete Wavelet Transform (DWT), which transforms the input image into a low-frequency space, greatly reducing the computational complexity of image denoising. However, the low-frequency image lacks fine-feature information, which degrades the denoising performance. To handle this problem, we introduce the Complementary Periodic Feature Reusing (CPFR) scheme for aggregating the shallow-layer features and the deep-layer features. Furthermore, TMDTA is proposed to integrate global context along three dimensions, thereby enhancing the ability of global feature representation. Note that our method can be applied as a pipeline for both convolutional neural networks and Transformers. Extensive experiments on several benchmarks demonstrate that the proposed LIDFormer achieves a better trade-off between high performance and low computational complexity on real-world image denoising tasks.

Introduction

Image denoising is an important task in image restoration and is widely applied to many scenarios (Anwar, Khan, and Barnes 2020). With the rise of deep learning, image denoising methods have made great progress (Tai et al. 2017; Chen and Pock 2016; Zhou et al.
2020; Mao, Shen, and Yang 2016; Ulyanov, Vedaldi, and Lempitsky 2018; Cheng et al. 2021). However, the existing models mostly require high computational complexity to obtain good performance, which may hinder the widespread application of these methods on resource-limited devices such as mobile phones, robots, and edge devices. Efficient and lightweight denoising methods therefore attract more and more attention.

*These authors contributed equally. †Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Performance and FLOPs cost of LIDFormer compared to other popular efficient and lightweight denoising methods on SIDD. The LiNAFNet is formed by applying the module from LIDFormer to NAFNet (Chen et al. 2022a). Our method achieves a better trade-off between performance and FLOPs cost on image denoising tasks.

With the rise of deep learning, convolutional neural networks have been widely used for image denoising. The method of (Xu, Yang, and Jiang 2017; Yuan, Liu, and Liang 2023) reduces computation and storage costs by utilizing the sparse nature of images, achieving remarkable results in both denoising quality and computational speed. Moreover, the approach (Yu et al. 2018) based on a joint loss function is a recent idea that improves the denoising effect by simultaneously considering the local and global information of the image. Meanwhile, the method (Jin et al. 2019) based on depthwise separable convolution is widely used in image denoising tasks. It improves denoising efficiency and accuracy by separating the spatial and channel dimensions while reducing model parameters and computational costs.
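The parameter saving behind the depthwise separable convolution just described is easy to quantify; a small sketch, with bias terms ignored:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per channel) followed by a
    pointwise 1 x 1 conv that mixes the channels."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 64, 3
standard = conv_params(c_in, c_out, k)                   # 36864
separable = depthwise_separable_params(c_in, c_out, k)   # 576 + 4096 = 4672
print(standard, separable, round(standard / separable, 1))  # 36864 4672 7.9
```

For a 64-channel 3×3 layer this is roughly an 8× reduction, which is where much of the efficiency of such denoisers comes from; the per-layer FLOP count scales the same way.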
This method has been shown to be effective in many image denoising tasks, especially for practical applications and large-scale data. Although the above methods have accelerated the denoising process in different aspects, the computational efficiency of current lightweight image denoising models still faces resource barriers compared with high-level semantic tasks such as image classification. Therefore, in order to narrow the gap with high-level semantic tasks and achieve a computational efficiency suited to practical devices, image denoising methods with less than 5 GFLOPs of computational cost are worth exploring and designing.

In response to the above problems, we propose a Lightweight Image Denoising Transformer method (LIDFormer) based on the Discrete Wavelet Transform (DWT) (Mallat 1989) and Triple Multi-Dconv Head Transposed Attention (TMDTA), which aims to produce excellent performance while being computationally efficient. To be specific, our lightweight feature module utilizes the DWT (Mallat 1989) to losslessly transform the input image into a low-resolution space composed of high-frequency and low-frequency information sets. Notably, the DWT is an established lossless frequency-domain transform that is not involved in model training, so it can be considered a module with negligible computational cost. Moreover, Complementary Periodic Feature Reusing (CPFR) is introduced to mitigate the loss of information due to the low resolution. Through continuous complementary residual connections, CPFR combines the historical features with the current features in a weighted and complementary way. It also avoids discarding valid features as the feature information is refined in deeper network layers. In particular, the complementary residual connections are learnable channel attention functions.
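The lossless, training-free character of the DWT step can be illustrated with a one-level 2D Haar transform. This is a generic sketch (the paper's exact wavelet and implementation may differ): the image maps to four half-resolution subbands and is recovered exactly by the inverse transform.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar DWT on an even-sized image: returns four
    half-resolution subbands (approximation plus three detail bands);
    the orthonormal scaling keeps the transform lossless."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2
    b = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 2
    c = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 2
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 2
    return a, b, c, d

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    x = np.zeros((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

img = np.random.rand(64, 64)
ll, lh, hl, hh = haar_dwt2(img)
rec = haar_idwt2(ll, lh, hl, hh)
print(ll.shape, np.allclose(rec, img))  # (32, 32) True
```

Since the subbands are quarter-size each, all subsequent feature computation runs at half the spatial resolution while no information is lost, which is the property LIDFormer exploits.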
From another point of view, the multi-head self-attention (MHSA) proposed by the Transformer (Vaswani et al. 2017) can effectively refine feature information and overcome the "short-range" effect of local convolution. However, since the cost of global pixel-based self-attention is too large (it grows with the resolution of the features), it is usually not directly applicable to image restoration tasks. The feature lightweighting strategy proposed in LIDFormer allows all feature computations to be performed in a low-scale space, thus making global self-attention feasible on resource-constrained devices. Based on the above discussion, LIDFormer introduces TMDTA, namely horizontal self-attention, vertical self-attention, and channel-wise self-attention, for collaborative computing. Finally, LIDFormer achieves a computational cost of 2.8 GFLOPs, close to that of image classification. More intuitively, as shown in Fig. 1, LIDFormer significantly outperforms the majority of popular efficient image denoising methods while having much lower computational complexity than these approaches.

We summarize the main contributions of this work as follows:

• We propose an efficient and lightweight image denoising method based on DWT and TMDTA (namely LIDFormer). Our LIDFormer provides a novel pipeline to reduce computational complexity, and it is a universal and generalizable efficient method.

• We design the Complementary Periodic Feature Reusing module (CPFR), which can effectively overcome the problem of compact and insufficient feature information caused by feature lightweighting. The reusing effect alleviates catastrophic forgetting to a certain extent and effectively retains low-frequency information.

• We introduce the Triple Multi-Dconv Head Transposed Attention module (TMDTA) to improve on conventional pixel-based multi-head self-attention in a multi-dimensional and lightweight manner.
• Extensive experiments demonstrate that our LIDFormer achieves a better trade-off between performance and computational complexity. The pipeline can also be generalized to different image denoising methods.

Related Work

Deep Learning-based Image Denoising

Image denoising tasks aim to restore a high-quality image from the noisy observation (Chen et al. 2022a). In recent years, with the rise of deep learning technology, CNN-based network architectures (Tai et al. 2017; Chen and Pock 2016; Zamir et al. 2020, 2021; Zhang et al. 2020, 2017; Cheng et al. 2021) have achieved significant success in the field of image denoising, and their performance is far superior to that of traditional restoration methods (Dabov et al. 2008; Gu et al. 2014; Xu et al. 2017; Yair and Michaeli 2018; He, Sun, and Tang 2010). These deep networks have different characteristics in their designs, and most of them (Wang et al. 2022; Yue et al. 2020; Zamir et al. 2021; Zhang et al. 2021) are based on the UNet (Ronneberger, Fischer, and Brox 2015) architecture, which uses skip-connections to fuse the pixel-level features of the image with semantic-level features for better restoration results. As Transformer-based models (Vaswani et al. 2017; Fedus, Zoph, and Shazeer 2022; Radford et al. 2018) have achieved excellent performance in the NLP domain, more and more vision applications, both high-level tasks (Graham et al. 2021; Liu et al. 2021b; Carion et al. 2020; Xie et al. 2021) and low-level tasks (Liang et al. 2021a; Kumar, Weissenborn, and Kalchbrenner 2020; Zamir et al. 2022; Wang et al. 2022), have recently tried to introduce them due to their strong capability of modeling long-range relations. Most of them have achieved better results than convolutional networks. The Vision Transformer (ViT) (Dosovitskiy et al. 2020) divides an image into a series of patches (local windows) and discovers how they relate to one another.
Benefiting from the powerful multi-head self-attention mechanism, its ability to model long-distance information interaction is particularly outstanding. Some existing works (Zamir et al. 2022; Chen et al. 2021, 2022b) have achieved promising performance by applying the ViT architecture to image denoising while alleviating the prohibitively expensive training complexity. Vision Transformers have shown their strong potential as an alternative to the previously dominant CNNs (Liang et al. 2021b). Recently, Restormer (Zamir et al. 2022) was proposed as a high-performance Transformer model for image denoising. It introduces a gating mechanism based on depth-wise convolutions to perform controlled feature transformation. Although this method achieves state-of-the-art denoising performance, it also sacrifices a large amount of computational cost. In this paper, we propose a computationally friendly method named LIDFormer for image denoising. Our method reduces the computational workload of existing models without compromising their denoising capability.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7705

Figure 2: Illustration of our proposed LIDFormer. First, the input image is transformed into a low-resolution frequency-domain space using DWT. Then, the CPFR module is used to combine features from historical and current periods, effectively multiplexing the features and avoiding the issue of shallow features being forgotten due to the filtering of depth information. Additionally, LIDFormer incorporates TMDTA to capture global feature information in three dimensions, which approximates traditional high-computation full-pixel self-attention.

Efficient and Lightweight Image Denoising

Although the performance of the image denoising methods mentioned above improves significantly, they mostly suffer from high computational costs, which do not favor resource-constrained devices such as smartphones. To relieve the computation burden and improve efficiency, there are emerging efforts to design efficient and lightweight image denoising approaches. Zhang et al. (Zhang, Zuo, and Zhang 2018) propose a new CNN model based on DnCNN (Zhang et al. 2017), namely FFDNet, for rapid, effective, and adjustable discriminative denoising.
FFDNet uses downsampled sub-images, which significantly speeds up training and testing while also expanding the receptive field. Yue et al. (Yue et al. 2019) utilize a new variational inference method (VDN) to rapidly infer both the underlying clean image and the noise distribution from an observed noisy image in a unique Bayesian framework. DANet (Yue et al. 2020) approximates the joint distribution from two different factorized forms in a dual adversarial manner. The joint distribution theoretically contains more complete information underlying the data set, which significantly reduces the time required to collect clean-noisy image pairs. Zou et al. (Zou et al. 2023) contribute to efficient image denoising with a lightweight network and a novel distillation algorithm with retargeting supervision. Another related work is Thunder (Zhou et al. 2022), which leverages the RGB thumbnail instead of the feature subspace to accelerate the denoising process. More specifically, it adopts the subspace projection method to guarantee the denoising effect while refining the thumbnail. Unlike Thunder (Zhou et al. 2022), our method yields better denoising effects with faster calculation efficiency by incorporating the DWT module and the TMDTA module.

Method

Overview of LIDFormer

As shown in Fig. 2, LIDFormer consists of three main components: (1) a feature lightweighting module based on DWT, which maps a given noisy image x from RGB space to a low-resolution frequency-domain space through a double discrete wavelet transform (DWT); (2) a Complementary Periodic Feature Reusing (CPFR) module, which performs non-linear operations on low-resolution frequency-domain features; (3) a Triple Multi-Dconv Head Transposed Attention (TMDTA) module, which introduces three-dimensional co-computation of horizontal, vertical and channel self-attention. First of all, the input image is transformed into a low-resolution frequency-domain space using DWT to alleviate the computational bottleneck.
Then, feature multiplexing is realized through the CPFR module. CPFR can effectively combine the features of different periods and avoid the problem of shallow features being forgotten due to the filtering of depth information. Besides, TMDTA is utilized to obtain the global information of features in three dimensions and approximately replaces the traditional high-computation full-pixel self-attention. Note that the upsampling module is implemented directly with the conventional, non-computationally-intensive "pixel-shuffle" operation, which folds part of the channel dimension into the spatial dimensions to achieve lossless amplification of the feature resolution. The specific process is expressed as: (B, C × γ², H, W) → (B, C, H × γ, W × γ).

Figure 3: Illustration of the double DWT feature lightweighting network. Here, we choose Transformer-based models as the backbone network. A color image with three RGB channels is used as the initial input. The channels of the middle input are increased by 16 times compared to the original image by a double DWT, while the resolution is decreased by 16 times.

The above process can be expressed as:

F_0 = f_DWT(x),
F_n = f_UnetCPFR(F_0),
I_Restored = x + f_UP(F_n). (1)

Among them, f_DWT(·) denotes the double discrete wavelet transform that performs frequency-domain compression on the input noisy image x, and f_UnetCPFR(·) denotes the noise extraction function over low-resolution features.

DWT-based Feature Lightweighting

At present, denoising models based on deep networks mainly rely on noise extraction for image denoising.
The details are as follows:

y = x + η, (2)

where y denotes the noisy image, x denotes the denoised clean image, both represented as vectors, and η is the noise component of the noisy image. Moreover, in most deep learning-based models, η is usually learned from the noisy image in an end-to-end image denoising task, as follows:

η = F(y), (3)

where F(·) denotes the noise extractor in the denoising process. As shown in Fig. 2, before noise extraction, the original input image is converted to the frequency-domain space through the DWT-based resolution compression module, and the input image resolution is compressed to 1/16 of the original size by the double DWT, which greatly reduces the computational complexity while keeping the number of channels and model parameters unchanged. The specific low-resolution frequency-domain compression module can be described as:

η = F(DWT(y)), (4)

where DWT(·) denotes the double DWT transform function, an established lossless frequency-domain transform. What needs to be emphasized here is that our method uses the classic Haar wavelet (Mallat 1989) and decomposes the input image X ∈ R^{H×W×3} into 48 low-resolution frequency-domain sub-features f_i ∈ R^{(H/4)×(W/4)}, i ∈ [1, …, 48]. The image denoising process based on the DWT makes features lightweight, as shown in Fig. 3. The input image is a color one with three RGB channels. Through two first-order DWT transformations (i.e., a double DWT), the feature channels of the middle input are expanded by 16 times compared to the original image, while the feature resolution is reduced by 16 times, achieving a lossless feature lightweighting effect.
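The double DWT used for feature lightweighting can be sketched in a few lines of NumPy. The sketch below assumes orthonormal Haar filters and a channel-stacked sub-band layout; the paper's actual implementation may differ in normalization and ordering:

```python
import numpy as np

def haar_dwt(x):
    """One-level 2-D Haar DWT per channel: (H, W, C) -> (H/2, W/2, 4C),
    stacking the LL, LH, HL, HH sub-bands along the channel axis."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return np.concatenate([ll, lh, hl, hh], axis=-1)

def haar_idwt(y):
    """Inverse of haar_dwt: (H/2, W/2, 4C) -> (H, W, C)."""
    ll, lh, hl, hh = np.split(y, 4, axis=-1)
    a = (ll + lh + hl + hh) / 2.0
    b = (ll + lh - hl - hh) / 2.0
    c = (ll - lh + hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    h2, w2, ch = ll.shape
    x = np.zeros((h2 * 2, w2 * 2, ch), dtype=y.dtype)
    x[0::2, 0::2] = a; x[0::2, 1::2] = b
    x[1::2, 0::2] = c; x[1::2, 1::2] = d
    return x

img = np.random.rand(128, 128, 3)
f = haar_dwt(haar_dwt(img))          # double DWT
assert f.shape == (32, 32, 48)       # 16x fewer pixels, 16x more channels
rec = haar_idwt(haar_idwt(f))
assert np.allclose(rec, img)         # the transform is lossless
```

The two assertions mirror Fig. 3: an H×W×3 input becomes (H/4)×(W/4)×48, and inverting the transform recovers the original image exactly.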
Complementary Periodic Feature Reusing

Although feature lightweighting based on the double DWT greatly reduces the computational complexity of the model, the intermediate feature is compressed 16 times compared to the original model without lightweighting, resulting in a serious shortage of feature information. Therefore, LIDFormer proposes the Complementary Periodic Feature Reusing (CPFR) module, which aims to reuse historical features and fill in the shallow historical feature information lost during the learning of compact features. First, the compact features are expanded by a factor of two in the channel dimension, and a simple linear feature embedding is done by using a generalized 3 × 3 convolution (CONV3 below) to double the information space of the compact features:

F_0 = CONV3(DWT(x)). (5)

Then, as shown in Fig. 4, in order to make full use of the extended feature information and perform effective historical feature reusing, our method computes the features of the next stage while retaining the feature information of the previous stage. The simple CPFR is expressed as follows:

f_n = T · F_n(f_{n−1}) + (1 − T) · f_{n−1},  if n = 2k,
f_n = f_{n−1},  otherwise, (6)

where f_n denotes the upper or lower half of the extended feature, F_n(·) represents the processing unit (i.e., TMDTA), and T is the complementary coefficient. In our experiments, the value of T is set to 0.5.

Complementary Adaptive Channel Attention

Since the information of each feature is constantly changing and the information of deep and shallow features is not uniform at each pixel, rigidly fixing the value of the complementary coefficient is suboptimal.

Figure 4: Illustration of the Complementary Periodic Feature Reusing (CPFR) module. CA denotes the channel attention function. f_1 and f_2 denote the historical feature and the current feature, respectively.

To address the above problem, CPFR uses Complementary Adaptive Channel Attention (CACA) to compensate for the deficiency of hard complementary coefficients, which is represented as follows:

g_n = F_n(f_{n−1}),
f_n = F^CA_{n1}(g_n) · g_n + F^CA_{n2}(f_{n−1}) · f_{n−1},  if n = 2k,
f_n = f_{n−1},  otherwise, (7)

where F^CA_n represents the channel attention function, shown as the CA module in Fig. 4, which consists of simple convolution, activation, pooling, and other basic constructions. More importantly, the convolution is computed on the pooled single-point multi-channel features; that is, the overall computation is performed at a feature resolution of one, which is almost negligible compared to the overall feature computation. In addition, CPFR imposes a complementary constraint on the adaptive channel attention and enforces it with a simple MSE loss. The effectiveness of the complementary constraint has been demonstrated through experiments. The complementary constraint is shown below:

L_CA = Σ_{i=1}^{n} ‖ F^CA_{i1}(F_i(f_{i−1})) + F^CA_{i2}(f_{i−1}) − ONEs ‖², (8)

where n denotes the number of computing units and ONEs denotes an all-ones matrix with the same dimensions as the outputs of F^CA(·), achieving pixel-level complementarity constraints.

Triple Multi-Dconv Head Transposed Attention

In addition to the above approaches, LIDFormer considers a very significant issue: the limitation of the Transformer in image restoration lies in the huge computational complexity caused by the need to compute high-resolution correlations between all pixels. As shown in Fig.
3, the pixel count of the intermediate features has already been reduced by 16 times, so the computation can be reduced by 256 times even if the traditional full-pixel self-attention mechanism is adopted. Even so, when it comes to higher-resolution images, there is still the problem of high computational complexity. To this end, considering the information redundancy of full-pixel self-attention, LIDFormer proposes Triple Multi-Dconv Head Transposed Attention (TMDTA). It decomposes the attention over feature pixels into three directions of self-attention for cooperative computation: horizontal self-attention, vertical self-attention, and channel self-attention.

Figure 5: Illustration of the Triple Multi-Dconv Head Transposed Attention (TMDTA) module. The attention over feature pixels is decomposed into three directions of self-attention for cooperative computation: horizontal self-attention, vertical self-attention, and channel self-attention.

As shown in Fig. 5, the input features first pass through the "Layer Norm + PDConv" layer to generate the locally enriched query (Q), key (K) and value (V). Layer Norm (LN) denotes the regular layer normalization, and PDConv denotes the combination of Point-wise Convolution (PWConv) and Depth-wise Convolution (DWConv). Then, the query (Q) and key (K) are reshaped along three dimensions, resulting in the horizontal Q_H and K_H, the vertical Q_W and K_W, and the channel-wise Q_C and K_C, respectively.
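The size advantage of this reshaping can be illustrated with a minimal NumPy sketch: treating one axis (H, W, or C) as the token dimension yields attention matrices of size H×H, W×W and C×C instead of HW×HW. This is a single-head sketch with a fixed √d scaling standing in for the paper's learnable α_i, and all function names are ours:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def transposed_attention(q, k, v, axis):
    """Self-attention where tokens run along one chosen axis and the
    remaining two axes are flattened into the feature dimension."""
    qs = np.moveaxis(q, axis, 0).reshape(q.shape[axis], -1)  # (N, d)
    ks = np.moveaxis(k, axis, 0).reshape(k.shape[axis], -1)
    vs = np.moveaxis(v, axis, 0).reshape(v.shape[axis], -1)
    scale = np.sqrt(qs.shape[1])                 # fixed stand-in for alpha_i
    attn = softmax(qs @ ks.T / scale)            # (N, N), not (HW, HW)
    return attn, attn @ vs

H, W, C = 16, 16, 8
q, k, v = (np.random.randn(H, W, C) for _ in range(3))
a_h, _ = transposed_attention(q, k, v, axis=0)   # height attention
a_w, _ = transposed_attention(q, k, v, axis=1)   # width attention
a_c, _ = transposed_attention(q, k, v, axis=2)   # channel attention
assert a_h.shape == (H, H) and a_w.shape == (W, W) and a_c.shape == (C, C)
# a full-pixel alternative would need an (H*W, H*W) = (256, 256) matrix
```

Each attention matrix here is at most 16×16 or 8×8, versus the 256×256 matrix that full-pixel attention would require at this resolution.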
Then, matrix multiplication is performed on each pair to generate three transposed attention matrices of sizes R^{H×H}, R^{W×W} and R^{C×C}, instead of the regular attention matrix R^{HW×HW} over feature pixels (Vaswani et al. 2017; Dosovitskiy et al. 2020). It is worth noting that all three processes are transformed from the same query (Q) and key (K) and are synergistically related to each other. In general, the process of TMDTA is defined as follows:

X′ = W_p · Attention(Q_s, K_s, V_s) + X,
Attention(Q_s, K_s, V_s) = Concat(A_H, A_W, A_C),
A_H = V_H × Softmax(K_H × Q_H / α_H),
A_W = V_W × Softmax(K_W × Q_W / α_W),
A_C = V_C × Softmax(K_C × Q_C / α_C), (9)

where X and X′ denote the input and output features; Q_i ∈ (R^{WC×H}, R^{HC×W}, R^{HW×C}), K_i ∈ (R^{H×WC}, R^{W×HC}, R^{C×HW}) and V_i ∈ (R^{WC×H}, R^{HC×W}, R^{HW×C}) denote the horizontal, vertical, and channel reshaping of the generated query (Q), key (K) and value (V), respectively; and α_i denotes a learnable scaling parameter that controls the magnitude of the dot product of Q_i and K_i before applying the activation function. In the above expression, i ∈ [H, W, C].

Baseline | DWT | CPFR | CACA | TMDTA | GFLOPs | PSNR
✓        | ×   | ×    | ×    | ×     | 140    | 40.02
✓        | ✓   | ×    | ×    | ×     | 8.75   | 39.55
✓        | ✓   | ✓    | ×    | ×     | 2.82   | 39.55
✓        | ✓   | ✓    | ✓    | ×     | 2.83   | 39.58
✓        | ✓   | ✓    | ✓    | ✓     | 2.83   | 39.62

Table 1: Ablation experiments conducted with different modules of LIDFormer.

Experiments

Implementation Details

To ensure the fairness of the comparison between methods, our method and the conventional denoising methods adopt the same classic denoising dataset, SIDD (Abdelhamed, Lin, and Brown 2018), for model training. Moreover, the trained model is evaluated on two publicly available datasets, SIDD (Abdelhamed, Lin, and Brown 2018) and DND (Plotz and Roth 2017). In our work, the Adam optimizer with β1 = 0.9, β2 = 0.999 and the L1 loss are utilized to train the model. The training takes 300K iterations, with the learning rate initially set to 3e-4.
The learning rate then gradually decreases to 1e-6 using the cosine annealing technique (Loshchilov and Hutter 2016). For iterative learning, 128 × 128 image patches with RGB channels are used to train the lightweight denoising model. The mini-batch size is set to 16. Besides, the resolution of image patches and the batch size are updated at iterations 92K, 156K, 204K, 240K, and 276K to (160², 8), (192², 6), (256², 4), (320², 2), and (384², 1), respectively. Horizontal and vertical flipping are applied for data augmentation.

Evaluation Metrics

Objective criteria, i.e., peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), are adopted to evaluate the performance of denoising models. Both metrics are calculated on the Y channel of the YCbCr space. Besides, Giga Floating-point Operations (GFLOPs) is used as the efficiency criterion for the denoising model in our work.

Ablation Study

We conduct ablation studies to validate the effect of each component in our proposed method. All experiments use Restormer (Zamir et al. 2022) as the baseline model. The quantitative results are shown in Table 1.

Effectiveness of Double DWT. As shown in Table 1, using the double DWT compresses the original model by 16 times while largely retaining performance. It can be seen from the table that as the computational complexity of the model reduces, the performance also decreases slightly. Therefore, to verify whether the feature lightweighting method is feasible and universal, we carried out corresponding experiments on different methods, as shown in Table 2. It can be observed that naive feature lightweighting can cause a very serious decline in model performance, especially in the experiment on CBDNet (Guo et al. 2019) (dwtCBDNet in the table), where the performance suffers a catastrophic drop.
From this, the corresponding experimental conclusion can be drawn: feature lightweighting using DWT can significantly reduce the computational complexity of the model and alleviate the computational pressure, but it alone cannot guarantee the performance of the lightweight model.

Effectiveness of CPFR. As shown in Table 1, we aim to explore efficient image denoising methods whose computational complexity approximates that of image classification tasks (i.e., below 5 GFLOPs). By reducing the number of modules in the original model and reusing historical features, this deliberate architectural refinement achieves an efficient denoising model of 2.82 GFLOPs in the table. It is worth noting that the utilization of CPFR largely recovers the performance of dwtCBDNet in Table 2. The results show that our proposed CPFR module further reduces the computational complexity of the efficient denoising model while matching the performance of the 8.75-GFLOPs model, verifying the effectiveness of this module.

Effectiveness of CACA. Since the complementary coefficient in the simple CPFR module is a constant (i.e., 0.5), the flexibility of feature learning is limited. Therefore, Complementary Adaptive Channel Attention (CACA) is proposed to relax this fixed value, making CPFR adaptively complementary. To combine historical features and current deep features, adaptive channel attention learns the summation coefficients freely. As shown in Table 1, compared with the simple CPFR module, the introduction of the CACA module brings a certain improvement. In addition, the adaptive learning scheme enhances the generalization of our method and avoids an ill-suited fixed value.

Effectiveness of TMDTA. The channel-wise multi-head self-attention designed in the original Restormer (Zamir et al. 2022) effectively overcomes the inadequacy of the Transformer's (Vaswani et al.
2017) full-pixel self-attention in dense prediction tasks. However, channel-wise multi-head self-attention cannot completely replace full-pixel self-attention, because channel-wise global information cannot represent spatial global information. Therefore, as shown in Table 1, TMDTA is more effective than the original local-global representation learning, since it aggregates both spatial and channel global information.

Application to Other Image Denoising Models

To demonstrate the versatility of the proposed lightweight framework (LIDFormer), a generalization analysis of our method is performed on three representative image denoising approaches: Restormer (Zamir et al. 2022), CBDNet (Guo et al. 2019) and NAFNet (Chen et al. 2022a). All these denoising models are retrained under the conditions of the original model. The results are presented in Table 2. It is shown that the proposed pipeline is generally applicable to existing denoising methods, both convolutional neural networks and Transformers. Compared with the original model, the performance of the model after complexity reduction decreases slightly, but the computational complexity is optimized by more than 16 times, indicating that our pipeline is an effective and universally efficient method.

Methods      | GFLOPs | SIDD PSNR / SSIM | DND PSNR / SSIM
Restormer    | 140.00 | 40.02 / 0.9600   | 40.03 / 0.9560
dwtRestormer | 8.75   | 39.55 / 0.9326   | 39.65 / 0.9417
LIDFormer    | 2.83   | 39.62 / 0.9557   | 39.76 / 0.9558
CBDNet       | 34.00  | 39.30 / 0.9214   | 39.35 / 0.9351
dwtCBDNet    | 2.20   | 27.68 / 0.7214   | 27.54 / 0.7134
LiCBDNet     | 2.00   | 39.01 / 0.9014   | 39.06 / 0.9117
NAFNet       | 65.00  | 39.77 / 0.9524   | 39.81 / 0.9561
dwtNAFNet    | 4.00   | 39.43 / 0.9317   | 39.48 / 0.9342
LiNAFNet     | 4.20   | 39.51 / 0.9437   | 39.62 / 0.9525

Table 2: Generalization results of the efficient framework in LIDFormer for different image denoising methods. Among them, Restormer is based on Transformer while CBDNet and NAFNet are based on convolutional neural networks.

Comparison with State-of-the-Art Methods

We compare the proposed LIDFormer with popular state-of-the-art efficient and lightweight methods for real-world image denoising, including DnCNN (Zhang et al. 2017), FFDNet (Zhang, Zuo, and Zhang 2018), CBDNet (Guo et al. 2019), RIDNet (Anwar and Barnes 2019), VDN (Yue et al. 2019), DANet (Yue et al. 2020), DeamNet (Ren et al. 2021), InvDN (Liu et al. 2021a), Thunder (Zhou et al. 2022) and ADFNet (Shen, Zhao, and Zhang 2023). The compared results are shown in Table 3.

Methods    | GFLOPs / Params | SIDD PSNR / SSIM | DND PSNR / SSIM
DnCNN      | – / 0.56M       | 23.66 / 0.5830   | 32.43 / 0.7900
FFDNet     | – / 0.48M       | – / –            | 34.40 / 0.8474
CBDNet     | 34 / 4.34M      | 33.28 / 0.8680   | 38.06 / 0.9421
RIDNet     | 196.52 / 1.49M  | – / –            | 39.26 / 0.9528
VDN        | 99.00 / 7.81M   | 39.26 / 0.9550   | 39.38 / 0.9518
DANet      | 14.85 / 9.15M   | 39.25 / 0.9160   | 39.47 / 0.9548
DeamNet    | 146.36 / 2.23M  | 39.35 / 0.9550   | 39.63 / 0.9555
InvDN      | 47.80 / 2.64M   | 39.28 / 0.9550   | 39.57 / 0.9522
Thunder    | 18.81 / 2.68M   | 39.47 / 0.9570   | 39.57 / 0.9526
ADFNet     | 117.32 / 7.65M  | 39.63 / 0.9580   | 39.87 / 0.9555
LIDFormer  | 2.83 / 2.72M    | 39.62 / 0.9575   | 39.76 / 0.9558

Table 3: Quantitative comparison of LIDFormer with other efficient and lightweight denoising methods. The best performance is bolded and the second is underlined. "GFLOPs" presents the computational cost per 256 × 256 image. "Params" means the number of model parameters.

From the table, it can be concluded that our LIDFormer achieves the best trade-off between computational complexity and performance.
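For reference, the PSNR values above are computed on the Y channel of YCbCr (see Evaluation Metrics). The sketch below illustrates that computation, assuming BT.601 luma coefficients and images in [0, 1]; the helper names are ours, not the evaluation code actually used:

```python
import numpy as np

def rgb_to_y(img):
    """Luma (Y) channel of the ITU-R BT.601 YCbCr transform; img in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 / 255.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.random.rand(64, 64, 3)
noisy = np.clip(clean + 0.05 * np.random.randn(64, 64, 3), 0.0, 1.0)
score = psnr(rgb_to_y(clean), rgb_to_y(noisy))
assert 20.0 < score < 45.0   # moderate Gaussian noise lands in this range
```

Higher PSNR means the denoised output is closer to the ground truth; a gap of a few tenths of a dB, as between the best methods in Table 3, is already a meaningful difference on these benchmarks.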
In particular, the performance of ADFNet (Shen, Zhao, and Zhang 2023) is slightly better than our method, but its FLOPs cost is more than 40× ours. Therefore, LIDFormer achieves a better trade-off between high performance and low computational complexity. Moreover, visual comparisons of our proposed LIDFormer with other methods are given in Fig. 6 and Fig. 7. Our proposed method is not inferior to other efficient and lightweight denoising methods in terms of visual effect.

Figure 6: Visual comparison of LIDFormer with other efficient and lightweight denoising methods on SIDD.

Figure 7: Visual comparison of LIDFormer with other efficient and lightweight denoising methods on DND.

Conclusion

In this paper, we propose an efficient and lightweight image denoising method named LIDFormer. LIDFormer includes three parts: feature lightweighting based on the double Discrete Wavelet Transform (DWT), Complementary Periodic Feature Reusing (CPFR) and Triple Multi-Dconv Head Transposed Attention (TMDTA). Among them, the feature lightweighting based on the double DWT transforms the input image into a low-resolution space for low-computation operation; the CPFR module effectively amplifies feature information in the low-resolution space and alleviates catastrophic forgetting; the TMDTA mechanism enhances the interaction of feature information and relieves the computational complexity of full-pixel self-attention. The qualitative and quantitative experimental results indicate that LIDFormer achieves a level of computational complexity close to advanced semantic tasks while maintaining high performance. Moreover, the efficient framework in LIDFormer can be generalized to other image denoising methods for effective optimization.
Acknowledgments

This work was supported by the National Key Research and Development Program of China No.2020AAA0108301, the National Natural Science Foundation of China under Grant No.62176224, No.62222602, No.62176092, the Natural Science Foundation of Chongqing under No.CSTB2023NSCOJOX0007, and the CCF-Lenovo Blue Ocean Research Fund.

References

Abdelhamed, A.; Lin, S.; and Brown, M. S. 2018. A high-quality denoising dataset for smartphone cameras. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1692–1700. Anwar, S.; and Barnes, N. 2019. Real image denoising with feature attention. In Proceedings of the IEEE/CVF international conference on computer vision, 3155–3164. Anwar, S.; Khan, S.; and Barnes, N. 2020. A deep journey into super-resolution: A survey. ACM Computing Surveys (CSUR), 53(3): 1–34. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, 213–229. Springer. Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu, C.; Xu, C.; and Gao, W. 2021. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12299–12310. Chen, L.; Chu, X.; Zhang, X.; and Sun, J. 2022a. Simple baselines for image restoration. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VII, 17–33. Springer. Chen, Y.; and Pock, T. 2016. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE transactions on pattern analysis and machine intelligence, 39(6): 1256–1272. Chen, Z.; Zhang, Y.; Gu, J.; Kong, L.; Yuan, X.; et al. 2022b. Cross Aggregation Transformer for Image Restoration.
Advances in Neural Information Processing Systems, 35: 25478–25490. Cheng, S.; Wang, Y.; Huang, H.; Liu, D.; Fan, H.; and Liu, S. 2021. Nbnet: Noise basis learning for image denoising with subspace projection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4896–4906. Dabov, K.; Foi, A.; Katkovnik, V.; and Egiazarian, K. 2008. Image restoration by sparse 3D transform-domain collaborative filtering. In Image Processing: Algorithms and Systems VI, volume 6812, 62–73. SPIE. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations. Fedus, W.; Zoph, B.; and Shazeer, N. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1): 5232–5270. Graham, B.; El-Nouby, A.; Touvron, H.; Stock, P.; Joulin, A.; Jégou, H.; and Douze, M. 2021. Levit: a vision transformer in convnet's clothing for faster inference. In Proceedings of the IEEE/CVF international conference on computer vision, 12259–12269. Gu, S.; Zhang, L.; Zuo, W.; and Feng, X. 2014. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2862–2869. Guo, S.; Yan, Z.; Zhang, K.; Zuo, W.; and Zhang, L. 2019. Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1712–1722. He, K.; Sun, J.; and Tang, X. 2010. Single image haze removal using dark channel prior. IEEE transactions on pattern analysis and machine intelligence, 33(12): 2341–2353. Jin, Y.; Jiang, X.-B.; Wei, Z.-k.; and Li, Y. 2019. Chest X-ray image denoising method based on deep convolution neural network.
IET Image Processing, 13(11): 1970–1978. Kumar, M.; Weissenborn, D.; and Kalchbrenner, N. 2020. Colorization Transformer. In International Conference on Learning Representations. Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; and Timofte, R. 2021a. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF international conference on computer vision, 1833–1844. Liang, Y.; Chongjian, G.; Tong, Z.; Song, Y.; Wang, J.; and Xie, P. 2021b. EViT: Expediting Vision Transformers via Token Reorganizations. In International Conference on Learning Representations. Liu, Y.; Qin, Z.; Anwar, S.; Ji, P.; Kim, D.; Caldwell, S.; and Gedeon, T. 2021a. Invertible denoising network: A light solution for real noise removal. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 13365–13374. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021b. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, 10012–10022. Loshchilov, I.; and Hutter, F. 2016. SGDR: Stochastic Gradient Descent with Warm Restarts. In International Conference on Learning Representations. Mallat, S. G. 1989. A theory for multiresolution signal decomposition: the wavelet representation. IEEE transactions on pattern analysis and machine intelligence, 11(7): 674–693. Mao, X.; Shen, C.; and Yang, Y.-B. 2016. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. Advances in neural information processing systems, 29. Plotz, T.; and Roth, S. 2017. Benchmarking denoising algorithms with real photographs. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1586–1595. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.; et al. 2018.
Improving language understanding by generative pre-training. Ren, C.; He, X.; Wang, C.; and Zhao, Z. 2021. Adaptive consistency prior based deep network for image denoising. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8596–8606. Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, 234–241. Springer. Shen, H.; Zhao, Z.-Q.; and Zhang, W. 2023. Adaptive dynamic filtering network for image denoising. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 2227–2235. Tai, Y.; Yang, J.; Liu, X.; and Xu, C. 2017. Memnet: A persistent memory network for image restoration. In Proceedings of the IEEE international conference on computer vision, 4539–4547. Ulyanov, D.; Vedaldi, A.; and Lempitsky, V. 2018. Deep image prior. In Proceedings of the IEEE conference on computer vision and pattern recognition, 9446–9454. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; and Li, H. 2022. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17683–17693. Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34: 12077–12090. Xu, J.; Zhang, L.; Zhang, D.; and Feng, X. 2017. Multichannel weighted nuclear norm minimization for real color image denoising. In Proceedings of the IEEE international conference on computer vision, 1096–1104. 
Xu, S.; Yang, X.; and Jiang, S. 2017. A fast nonlocally centralized sparse representation algorithm for image denoising. Signal Processing, 131: 99–112. Yair, N.; and Michaeli, T. 2018. Multi-scale weighted nuclear norm image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3165–3174. Yu, Y.; Chang, M.; Feng, H.; Xu, Z.; Li, Q.; and Chen, Y. 2018. Image denoising algorithm based on adversarial learning using joint loss function. In Fifth Conference on Frontiers in Optical Imaging Technology and Applications, volume 10832, 204–210. SPIE. Yuan, W.; Liu, H.; and Liang, L. 2023. Joint group dictionary-based structural sparse representation for image restoration. Digital Signal Processing, 104029. Yue, Z.; Yong, H.; Zhao, Q.; Meng, D.; and Zhang, L. 2019. Variational denoising network: Toward blind noise modeling and removal. Advances in neural information processing systems, 32. Yue, Z.; Zhao, Q.; Zhang, L.; and Meng, D. 2020. Dual adversarial network: Toward real-world noise removal and noise generation. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part X 16, 41–58. Springer. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; and Yang, M.-H. 2022. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5728–5739. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; Yang, M.-H.; and Shao, L. 2020. Learning enriched features for real image restoration and enhancement. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXV 16, 492– 511. Springer. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; Yang, M.-H.; and Shao, L. 2021. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 14821– 14831. 
Zhang, K.; Li, Y.; Zuo, W.; Zhang, L.; Van Gool, L.; and Timofte, R. 2021. Plug-and-play image restoration with deep denoiser prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10): 6360–6376. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; and Zhang, L. 2017. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE transactions on image processing, 26(7): 3142–3155. Zhang, K.; Zuo, W.; and Zhang, L. 2018. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Transactions on Image Processing, 27(9): 4608–4622. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; and Fu, Y. 2020. Residual dense network for image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(7): 2480–2495. Zhou, Y.; Jiao, J.; Huang, H.; Wang, Y.; Wang, J.; Shi, H.; and Huang, T. 2020. When awgn-based denoiser meets real noises. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 13074–13081. Zhou, Y.; Xu, X.; Liu, S.; Wang, G.; Lu, H.; and Shen, H. T. 2022. Thunder: Thumbnail based Fast Lightweight Image Denoising Network. arXiv preprint arXiv:2205.11823. Zou, B.; Zhang, Y.; Wang, M.; and Liu, S. 2023. Toward Efficient Image Denoising: A Lightweight Network with Retargeting Supervision Driven Knowledge Distillation. In Advances in Computer Graphics: 39th Computer Graphics International Conference, CGI 2022, Virtual Event, September 12–16, 2022, Proceedings, 15–27. Springer. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7712
2024
856
18,691
Intentional Evolutionary Learning for Untrimmed Videos with Long Tail Distribution

Yuxi Zhou1,2,*†, Xiujie Wang1,*, Jianhua Zhang1,†, Jiajia Wang1, Jie Yu1, Hao Zhou1, Yi Gao1, Shengyong Chen1
1Department of Computer Science, Tianjin University of Technology, Tianjin, China
2DCST, BNRist, RIIT, Institute of Internet Industry, Tsinghua University, Beijing, China
joy yuxi@pku.edu.cn, 1747897282@qq.com, zjh@ieee.org, wjj21@stud.tjut.edu.cn, YJ15303364087@gmail.com, zhouhao@stud.tjut.edu.cn, gaoyi01020304@stud.tjut.edu.cn, sy@ieee.org

Abstract

Human intention understanding in untrimmed videos aims to watch a natural video and predict what the person's intention is. Currently, the prediction of human intentions in untrimmed videos remains underexplored. On the one hand, untrimmed videos with mixed actions and backgrounds exhibit a pronounced long-tail distribution with concept drift characteristics. On the other hand, most methods can only perceive instantaneous intentions, but cannot determine the evolution of intentions. To address these challenges, we propose a loss based on Instance Confidence and Class Accuracy (ICCA), which aims to alleviate the prediction bias caused by the long-tail distribution with concept drift characteristics in video streams. In addition, we propose an intention-oriented evolutionary learning method to determine the intention evolution pattern (from what action to what action) and the time of evolution (when the action evolves). We conducted extensive experiments on two untrimmed video datasets (THUMOS14 and ActivityNET v1.3), and our method achieves excellent results compared to SOTA methods. The code and supplementary materials are available at https://github.com/Jennifer123www/UntrimmedVideo.

Introduction

Humans are born with the ability to observe the world and understand the intentions of collaborators (i.e., to predict what will happen soon).
*These authors contributed equally. †Corresponding authors.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

The ability to understand intention is fundamental to the interaction between humans and the environment. However, designing algorithms to automatically understand intentions (Wanyan et al. 2023) is challenging, as it is necessary to model the relationship between past and future events without completely observing untrimmed videos. Currently, most methods on intent understanding primarily focus on trimmed videos, which process a full video into short clips with one action label each; this makes them unsuitable for direct application in real-world scenarios, since real videos are generally untrimmed. Recently, (Rodin et al. 2022) attempted to fine-tune trimmed methods to untrimmed videos and concluded that "performing action prediction tasks on untrimmed videos is challenging". We believe that one reason for the unsatisfactory results is that the long-tail distribution is ignored: in untrimmed videos, human actions coexist with noisy backgrounds, and thus each category of actions constitutes a minority among the background samples, which leads to a long-tail distribution. Taking the "Cliff Diving" video in the THUMOS14 dataset in Figure 1 as an example, the video lasts for 6 minutes, with a few target actions "diving" and "cliff diving" alternating with messy backgrounds. Moreover, as untrimmed videos manifest as video data streams in natural scenarios, the distribution differences between current data streams and new data streams may be substantial. However, there is no relevant method to solve the adaptive problem of long-tail distribution with concept drift (Krawczyk et al.
2017) characteristics (which refers to the possibility that data from non-stationary models may evolve over time, resulting in changes in target concepts and/or attribute distributions) in video streams. Additionally, existing intent understanding efforts can only predict the subsequent action but fail to assess the persistence and evolution patterns of intentions. However, predicting the evolution patterns of intentions can provide very important support for human-machine collaboration. For example, in Figure 1, predicting that the current action A4 is "cliff diving" and how long the person is about to "dive" is important for warning of dangerous scenarios. To address these challenges, a novel intentional evolutionary learning method is developed. Our work and contributions can be summarized as follows.
• A loss based on Instance Confidence and Class Accuracy (ICCA) is presented that significantly enhances classification accuracy under the influence of a long-tail distribution with concept drift characteristics.
• An intention-oriented evolutionary learning method is proposed to determine the intention evolution pattern (from what action to what action) and the time of evolution (when the action evolves).
• We demonstrate the effectiveness and advancement of our proposed method on the THUMOS14 and ActivityNET v1.3 datasets.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7713

Figure 1: An example of an untrimmed video with a long-tail distribution, as actions alternate with many messy backgrounds. If human intent can be predicted as early as possible, it can provide auxiliary support for human-machine collaboration.

Related Work

Intention Understanding based on Trimmed Videos

Trimmed-video-based intent understanding covers a wide range of tasks, including traffic prediction (Bai et al. 2020), anomaly detection (Liu et al. 2021), human behavior intent prediction (Wanyan et al. 2023), etc.
Among them, human behavior intent prediction tasks are getting closer to the real intents of people with the advancement of algorithms. In recent years, (Zheng et al. 2023) predicted what the next action would be as early as possible. The above tasks are all based on trimmed video datasets and only make instantaneous judgments; they cannot determine the evolution of intention. Duration estimation studies have been done in the traffic (Grigorev et al. 2022) and medical (Bodenstedt et al. 2019) fields, but they often need the support of other modalities. Based on the single video modality, (Abu Farha, Richard, and Gall 2018) achieved long-term prediction of activity sequences, as well as the start and end times of each activity. However, it was based on trimmed videos and directly used ground-truth labels, which is not in line with real-world application scenarios. We study the estimation of motion duration based on untrimmed videos to guide sustained intention understanding.

Intention Understanding based on Untrimmed Videos

Currently, deep learning is less applied to untrimmed videos. (Gao, Yang, and Nevatia 2017) proposed an enhanced RED network for action prediction, which uses reinforcement learning to encourage early and correct predictions. (Ke, Fritz, and Schiele 2019) proposed an attentive temporal feature, which used multi-scale temporal convolution to process temporal-conditioned observations. (Wang et al. 2021) proposed the TTPP framework, reusing the Transformer-style architecture to aggregate observed features and then using a lightweight network to progressively predict future features and actions. Recently, (Rodin et al. 2022) tried to fine-tune trimmed methods for untrimmed videos for action prediction, but obtained poor results. We argue that they ignored the long-tail distribution that is common in untrimmed videos. Specifically, untrimmed videos usually appear as streams, and the distribution difference between current data streams and new data streams may be very large.
At present, there is no relevant method to solve the self-adaptive problem of long-tail distribution with drift characteristics in video datasets.

Long-tail Distribution

The long-tail distribution is a basic problem, especially for real-world deployment. Usually, a weighting strategy is employed, using weights to measure the penalty caused by prediction errors on samples. For example, focal loss (Lin et al. 2017) focuses more on fewer and harder samples, assigning them higher weights. EQLv1 (Tan et al. 2020) tried to provide a better weight allocation system, using class frequency to assign sample weights. EQLv2 (Tan et al. 2021) improved this further, providing smoother constraints using the gradient of class frequency. CDB loss (Sinha, Ohashi, and Nakamura 2022) dynamically measures the instantaneous difficulty of each class during model training. Considering that untrimmed videos are presented as video data streams in natural scenes, the distribution difference between current and new data streams may be very large. We propose the ICCA loss to specifically address the adaptive problem of long-tail distribution with concept drift characteristics in video streams.

Methodology

Problem Definition

Given a series of untrimmed streaming videos $V = \{v_1, v_2, v_3, ..., v_M\}$, we use a sliding window strategy to expand the data and obtain the output $S_c = \{\hat{y}, \hat{y}', \tilde{D}\}$ through the intention prediction model, where $\hat{y} \in \mathbb{R}^{C+1}$ represents the potential action, which is the last category label recognized in the observed video frames, $\hat{y}' \in \mathbb{R}^{C+1}$ represents the evolution action, which is the predicted next category label, and $\tilde{D}$ represents the remaining duration of the predicted current action. Notice that C is the number of action categories and C + 1 represents all categories including background.

Figure 2: The overall framework.
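As a concrete illustration of this problem setup, the sliding-window expansion and the output triplet $S_c$ can be sketched as follows. This is a minimal sketch, not the authors' released code: the names `IntentionTriplet` and `sliding_windows` are ours, and the default window length and stride are taken from the ablation settings reported later (length=100, stride=20).

```python
from dataclasses import dataclass

@dataclass
class IntentionTriplet:
    """Output S_c = {y_hat, y_hat', D_tilde} of the intention prediction model
    (illustrative container; field names are ours)."""
    potential_action: int      # y_hat: last category recognized in the observation
    evolution_action: int      # y_hat': predicted next category
    remaining_duration: float  # D_tilde: remaining duration of the current action

def sliding_windows(num_frames: int, length: int = 100, stride: int = 20):
    """Expand one streaming video into (start, end) observation windows."""
    last_start = max(num_frames - length, 0)
    return [(s, s + length) for s in range(0, last_start + 1, stride)]
```

For a 200-frame stream with the default settings, this yields six overlapping observation windows from (0, 100) to (100, 200), each of which is paired with one triplet during training.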
The current data stream is used as our training data, and the new data stream is used as our testing data.

Sliding window. The sliding window is a common data augmentation strategy. For detailed settings, please refer to the supplementary materials.

Framework Overview

The overall framework of our method is shown in Figure 2. It can be roughly divided into two main modules: the ICCA loss module and the intention-oriented evolutionary learning module. For the blue current data stream, the "observation part" of the sliding window is taken as input. After feature extraction and preprocessing, video spatio-temporal features and duration information are obtained. These are put into our intention-oriented evolutionary learning model and constrained by the ICCA loss to obtain the intention prediction triplet result $S_c = \{\hat{y}, \hat{y}', \tilde{D}\}$ (the process for the new data stream branch is basically the same). Specifically, our model can dynamically sense the distribution of the new data stream. The most recent new data stream result provides instance-level confidence, and the previous epoch of the current round provides class accuracy. These two indicators are passed into the current data stream branch to guide the ICCA loss constraints.

Loss based on Instance Confidence and Class Accuracy (ICCA loss)

The current SOTA method for solving the long-tail distribution problem is CDB-W loss (Sinha, Ohashi, and Nakamura 2022), which dynamically measures the instantaneous difficulty of each class during the model training phase. However, they introduced a class-balanced subset, which cannot dynamically perceive the data distribution of new data streams. We propose the Instance Confidence and Class Accuracy (ICCA) loss to solve the self-adaptation problem of the

Figure 3: The specific implementation of ICCA loss in our method. (a) The output of the recent new data stream in epoch e − E, obtaining instance-level confidence.
(b) The output of the current data stream in the previous epoch e − 1, obtaining class accuracy. (c) The result of combining (a) and (b) in the current epoch e to obtain the evaluation indicators and the ICCA weight, and finally the ICCA loss constraint.

long-tail distribution with concept drift characteristics in untrimmed videos.

Instance-level confidence. For a video sample, when the predicted confidence of a certain category is higher, the difference in predicted probability between the category with the highest predicted probability and the second highest is generally larger. Therefore, we use the difference in probability between the model's first and second highest predicted categories to approximate the model's confidence. Formally, the predicted confidence of the model on sample $x_t^{(i)}$ in the t-th batch is defined as:

$$C(x_t^{(i)}) = P(\dot{\hat{y}}_t^{(i)} \mid x_t^{(i)}) - P(\ddot{\hat{y}}_t^{(i)} \mid x_t^{(i)}), \quad (1)$$

where $\hat{y}_t^{(i)}$ represents the predicted value of the model on the sample $x_t^{(i)}$, and $\dot{\hat{y}}_t^{(i)}$ and $\ddot{\hat{y}}_t^{(i)}$ represent the classes with the highest and second highest predicted probabilities on the sample $x_t^{(i)}$, respectively.

The approximate measure of confidence defined above does not require prior knowledge of the ground truth of the videos. It only requires a model trained to a certain extent to obtain the predicted logits of the classes and thus the confidence.

Class accuracy. We have empirically found that the predicted performance of the classes in the (e − 1)-th epoch of the data stream plays an important guiding role in the weight distribution of the classes in the e-th epoch. We use $\zeta(c_j)$ to represent the probability of correctly predicting the $c_j$-th class, where $\zeta(c_j)$ can be any error function.
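The confidence in Eq. (1) is just the margin between the two largest softmax probabilities of a sample, so it can be computed from the logits alone. A minimal sketch (the function names are ours, for illustration):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def instance_confidence(logits):
    """Eq. (1): probability margin between the top-1 and top-2 classes.
    Needs no ground truth -- only the model's predicted logits."""
    top2 = sorted(softmax(logits), reverse=True)[:2]
    return top2[0] - top2[1]
```

A confidently classified sample (one dominant logit) yields a margin near 1, while an ambiguous sample (two near-equal logits) yields a margin near 0.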
It has been found through experiments that using the accuracy of each class works better, so $\zeta(c_j)$ obtained using the accuracy of each class can be expressed as:

$$\zeta(c_j) = \frac{TP(c_j)}{TP(c_j) + FP(c_j)}, \quad (2)$$

where $TP(c_j)$ is the number of true positives and $FP(c_j)$ the number of false positives.

ICCA loss. In the e-th epoch, we use the instance-level confidence $C(x_t^{(i)})$ obtained on the most recent new data stream and the average class accuracy $\zeta(c_j)$ obtained in the previous epoch to dynamically evaluate the performance measure of each class. We use the product of the predicted confidence of the model on sample $x_t^{(i)}$ and the probability of correctly predicting class $c_j$, i.e., $C(x_t^{(i)}) \cdot \zeta(c_j)$, to approximate the probability that the model predicts correctly when the predicted class is $c_j$ on the new data stream sample $x_t^{(i)}$. Conversely, its predicted error probability is $C(x_t^{(i)}) \cdot (1 - \zeta(c_j))$. Formally, the performance measure of the model for class $c_j$ on the data stream set $x_t$ can be defined as

$$\varepsilon(c_j, x_t) = \sum_{\{i \,\mid\, \dot{\hat{y}}_t^{(i)} = c_j\}} C(x_t^{(i)}) \cdot \zeta(c_j) + \sum_{\{i \,\mid\, \dot{\hat{y}}_t^{(i)} \neq c_j\}} C(x_t^{(i)}) \cdot (1 - \zeta(c_j)). \quad (3)$$

Referring to the constraint of CDB-W loss, we use $w_\varepsilon(c_j, x_t)$ to represent the weight corresponding to class $c_j$:

$$w_\varepsilon(c_j, x_t) = \left| \Omega_{i \in C+1}\big(\varepsilon(c_i, x_t)\big) - \varepsilon(c_j, x_t) \right|, \quad (4)$$

where C + 1 represents the total number of classes including the background class, $\Omega$ represents an aggregation such as the maximum, the average, the sum, etc., and $|\cdot|$ denotes the absolute value. To illustrate the weighted loss based on ICCA, we use the most traditional cross-entropy loss here. Thus, the weighted cross-entropy loss based on ICCA is calculated as:

$$\text{ICCA loss}_{ce} = -\sum_{i=0}^{C} w_\varepsilon(c_j, x_t)\, y_i \log(p_i) = -w_\varepsilon(c_j, x_t) \log(p_c). \quad (5)$$

The specific implementation of ICCA loss.
We propose the ICCA loss to mitigate the inherent problem of the long-tail distribution (that is, for untrimmed videos, each action class is a minority class compared to the interspersed background class, yet the background class itself does not share common characteristics) with concept drift. Specifically, our ICCA loss can make the distribution of the current data stream as close as possible to the distribution of the new data stream without knowing the latter in advance, thereby effectively alleviating the long-tail problem with concept drift properties caused by inconsistent distributions between the current and new data streams. The ICCA loss constraint of the model is shown in Figure 3. We use e to represent the current epoch, e − 1 the previous epoch, and E the fusion frequency of the new data streams; that is, every E epochs, the new data stream is used to obtain results based on the new data stream (the ablation experiment explores the fusion frequency later). Figure 3(a) shows the results of applying the current model to the new data stream at the (e − E)-th epoch. Here, we use $\tilde{y}^{e-E}_{logit}$ and $\tilde{y}'^{e-E}_{logit}$ to represent the recognition probability scores and prediction probability scores of each class obtained on the new data stream, respectively. Then, based on these two probability scores, we obtain the class confidences of recognition and prediction based on the new data stream, $\tilde{C}(x)$ and $\tilde{C}'(x)$ (using Equation (1)). Figure 3(b) shows the current data stream results at the (e − 1)-th epoch. We use the accuracy of each class as the evaluation indicator and obtain the accuracies of each class for recognition and prediction on the current data stream, $\tilde{\zeta}(c)$ and $\tilde{\zeta}'(c)$, according to Equation (2).
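Putting the two indicators together, the class weighting of Eqs. (3)–(4) and the weighted cross-entropy of Eq. (5) can be sketched as follows. This is a hedged sketch, not the authors' released implementation: we pick $\Omega = \max$ as one of the allowed aggregations, and `icca_weights` / `icca_cross_entropy` are illustrative names.

```python
import math

def icca_weights(confidences, preds, class_acc, num_classes, agg=max):
    """Eqs. (3)-(4): per-class performance measure eps(c_j, x_t) and
    ICCA weight |Omega(eps) - eps(c_j)|.
    confidences: Eq.-(1) margins on the most recent new data stream;
    preds: predicted class of each sample;
    class_acc: zeta(c_j) per class from the previous epoch;
    agg: the Omega aggregation (max / mean / sum)."""
    eps = []
    for c in range(num_classes):
        # Samples predicted as c contribute conf * zeta(c);
        # all other samples contribute conf * (1 - zeta(c)).
        e = sum(conf * (class_acc[c] if p == c else 1.0 - class_acc[c])
                for conf, p in zip(confidences, preds))
        eps.append(e)
    ref = agg(eps)
    return [abs(ref - e) for e in eps]

def icca_cross_entropy(probs, target, weights):
    """Eq. (5): ICCA-weighted cross-entropy for a one-hot target,
    i.e. -w_eps(c) * log p_c."""
    return -weights[target] * math.log(probs[target])
```

With $\Omega = \max$, the best-performing class receives weight 0 and harder classes receive weights proportional to how far they fall behind it, which is what redistributes the penalty toward tail classes.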
In Figure 3(c), the two guiding performance indicators from Figure 3(a) and (b) are combined with the recognition and prediction results of the current model to calculate the ICCA loss of the current epoch using Equations (3), (4) and (5).

Intention-oriented Evolutionary Learning

We define intention interpretation as action evolution learning guided by potential intentions and potential actions, constrained by intention coherence in the intention semantic space, so that the predicted intention gradually approaches the true intention. Unlike previous work (Girdhar and Grauman 2021; Wanyan et al. 2023), which only judges an instantaneous concept (i.e., can only predict what the future intention is), our method can judge both the evolution pattern (from what action to what action) and the evolution timing (when to evolve) of the intention.

Data preprocessing. Following (He et al. 2022), we use the I3D (Carreira and Zisserman 2017) model to extract video features based on a sliding window, obtaining the feature matrix $F \in \mathbb{R}^{B \times D}$, where B is the batch size and D is the feature dimension. Considering that the feature matrix F is the most primitive feature representation of the video stream V, we use a fully connected function $\Psi_F$ to obtain the initialized latent action $\hat{y}$ representing the most likely class of the current
By combining the feature matrix F, potential action ˆy and initialized potential intention ¯o, the possible action candidate list ν′ = [..., ν′ i, ...] is obtained through RNN , ν′ = RNN(F, ˆy, ¯o) ∈RB×(D+1)×N, (6) where N is the number of candidate actions in the action candidate list ν′. Next, we will traverse the action candidate list ν′ to implement action evolution learning and iteratively evolve and update the best potential intention o∗and evolutionary actions ˆy′. Action evolution learning. Take a candidate action ν′ i from the action candidate list ν′ = [..., ν′ i, ...] (i < N), combine it with the feature matrix F, execute the ΓF,ν function to obtain the latent intent o ∈RB×(D+1). The meaning of the potential intention o here is the intention based on the semantic relationship of the candidate action ν′ i) and the original feature F. This intention may be closer to the real intention or further away from the real intention. The best potential intention o∗is the potential intention o that satisfies the intention coherence constraint. This is a constraint on action evolution learning in the intention space. First, we need to determine whether the current potential intention o is closer to the real intention compared to the initial potential goal ¯o. This can gradually bring the current potential intention o closer to the real intention and promote the update of the best potential intention o∗, as shown below ∥o∗−o∥2 2 > ∥¯o −o∥2 2 . (7) In addition, we set a threshold ξ to measure the distance between the optimal potential intention o∗and the current potential intention o to ensure that the change in the current potential intention o is not too outrageous, as shown below ∥o∗−o∥2 2 ≤ξ. (8) Finally, we perform full connection FC and linear regression on the optimal latent intention o∗respectively to obtain the evolutionary action ˆy′ ∈RB×(C+1) and evolution time ˜D ∈RB×1. Loss function. 
The loss consists of four parts: the action evolution loss $L_{AE}$, the potential action loss $L_{PA}$, the evolution timing loss $L_{ET}$ and the intention coherence loss $L_{coherence}$. The action evolution loss $L_{AE}$ constrains the consistency between the predicted evolution category $\hat{y}'$ and the next ground-truth category $y'$. We use our designed ICCA performance indicator in conjunction with the cross-entropy loss as follows:

$$L_{AE} = \text{ICCA loss}_{ce}(\hat{y}', y'). \quad (9)$$

The potential action loss $L_{PA}$ constrains the consistency between the initially recognized category $\hat{y}$ and the current ground-truth category $y$; its form is the same as Equation (9). We use an L1 loss for the evolution timing loss $L_{ET}$:

$$L_{ET} = \|\tilde{D} - D\|_1, \quad (10)$$

where D is the ground truth of the current action duration based on the sliding window. The intention coherence loss $L_{coherence}$ consists of two parts: an update loss $L_u$ and a maintenance loss $L_m$. The update loss $L_u$ forces the current potential intention o to be closer to the real intention than the initial potential intention $\bar{o}$:

$$L_u = \max(0, \|o^* - o\|_2^2 - \|\bar{o} - o\|_2^2 + m), \quad (11)$$

where m is a very small value to ensure numerical stability. The maintenance loss $L_m$ ensures that the potential intention o is consistent during training and that its changes are not too drastic. Using a max-margin loss, the deviation between the threshold $\xi$ and the potential intention difference is measured:

$$L_m = \max(0, \|o^* - o\|_2^2 - \xi + m). \quad (12)$$

Therefore, $L_{coherence} = L_u + L_m$. The overall loss can be represented as:

$$L = L_{AE} + L_{PA} + L_{ET} + L_{coherence}. \quad (13)$$

Experiment

Experimental Setup

Dataset. We use two popular untrimmed human action datasets, THUMOS14 (Idrees et al. 2017) and ActivityNET v1.3 (Caba Heilbron et al. 2015), as our benchmarks. The THUMOS14 dataset is a large-scale video dataset that includes 1,010 videos for validation and 1,574 videos for testing from 20 classes.
Figure 5: The effect of sliding window length and stride, for (a) length=50, (b) length=100, (c) length=150. The left y-axis represents sample size, the right y-axis accuracy (%), and the x-axis stride values of {10, 20, 30, 50}. Green bars denote training set samples, while yellow bars represent test set samples. The red and blue lines show the top-1 recognition accuracy and prediction accuracy. The yellow and green lines represent the mean precision on recognition and prediction. All experiments are based on the ICCA loss.

Table 1: Performance results obtained on the THUMOS14 dataset using different loss functions, where reco denotes recognition, pred denotes prediction, acc denotes accuracy, and MP denotes mean precision.

loss function | top-1 acc (reco/pred) | top-5 acc (reco/pred) | MP (reco/pred)
focal loss    | 59.2 / 61.1           | 92.7 / 88.1           | 37.76 / 37.96
weighted-CE   | 57.1 / 59.1           | 90.4 / 87.3           | 23.91 / 23.95
EQLv1         | 65.2 / 59.6           | 87.4 / 74.6           | 39.73 / 37.12
EQLv2         | 57.8 / 59.3           | 89.1 / 90.9           | 30.62 / 36.28
CDB loss      | 66.1 / 65.8           | 92.7 / 92.5           | 39.26 / 50.10
ICCA loss     | 67.3 / 70.2           | 92.9 / 91.0           | 45.47 / 54.61

Table 2: Performance results obtained on the ActivityNET v1.3 dataset using different loss functions.

loss function | top-1 acc (reco/pred) | top-5 acc (reco/pred) | MP (reco/pred)
focal loss    | 59.3 / 16.0           | 75.8 / 38.9           |  4.16 /  7.77
weighted CE   | 63.9 / 36.1           | 83.9 / 64.8           | 13.76 / 26.54
EQLv1         | 58.8 / 30.3           | 72.3 / 44.2           | 12.05 / 23.09
EQLv2         | 64.8 / 36.1           | 72.4 / 61.0           | 16.63 / 28.75
CDB loss      | 64.3 / 28.9           | 81.9 / 54.6           | 10.20 / 20.14
ICCA loss     | 65.0 / 34.3           | 84.0 / 63.5           | 18.65 / 29.47

Among all the videos, there are 220 and 212 videos with temporal annotations in the validation and testing sets, respectively. Following previous works (Wang et al. 2017; Paul, Roy, and Roy-Chowdhury 2018; Luo et al. 2020), we use the 200 videos in the validation set for training
and the 213 videos in the testing set for evaluation. ActivityNET v1.3 is a large-scale dataset with 200 complex daily activities. It has 10,024 training videos and 4,926 validation videos. Following (Yang et al. 2021; Luo et al. 2021), we use the training set as the current data stream to train our model and the validation set as the new data stream for evaluation.

Metrics. We use the average accuracy (top-1, top-5) and the mean precision (MP) of each class as evaluation metrics for the long-tail distribution. The former reflects the overall performance, while the latter can accurately evaluate the weight correction effect of our ICCA loss on head and tail classes. For the untrimmed intent estimation task, in addition to the average accuracy (top-1, top-5), we use time accuracy to evaluate the estimation of the evolution duration (following (Rodin et al. 2022)), where time accuracy is defined as the percentage of samples whose predicted duration is within 1 second of the ground truth duration.

Table 3: Recognition and prediction results using different backbones on the trimmed THUMOS14 and ActivityNET v1.3 datasets. For fair comparison, our trimmed sample sets include the "background" class.

dataset (trimmed) | Methods     | top-1 acc (reco/pred) | top-5 acc (reco/pred)
THUMOS14          | RULSTM      |  –   / 50.3           |  –   / 67.0
THUMOS14          | latent goal | 58.7 / 54.8           | 94.7 / 92.1
THUMOS14          | ours        | 59.7 / 54.3           | 95.0 / 93.1
ActivityNET v1.3  | RULSTM      |  –   / 35.7           |  –   / 78.5
ActivityNET v1.3  | latent goal | 49.0 / 39.9           | 79.3 / 85.0
ActivityNET v1.3  | ours        | 55.8 / 45.7           | 82.3 / 88.1

Implementation details. Details of the experimental parameters are provided in the supplementary material.

Experimental Results

Comparison with various long-tail distribution loss functions. We compare popular loss functions with adjusted weights on the THUMOS14 and ActivityNET v1.3 datasets (Tables 1 and 2). Focal loss (Lin et al.
2017) increases the weight of majority classes heuristically at the beginning of training, without considering that although the “background” class in untrimmed video samples is a majority class, it is difficult to treat it as a single class. Weighted-CE, EQLv1 (Tan et al. 2020) and EQLv2 (Tan et al. 2021) use hard weights based on the training-set sample distribution as a reference, ignoring the problem of overfitting on the training set when the difference between the training-set and testing-set distributions is too large. CDB loss (Sinha, Ohashi, and Nakamura 2022) uses a class-balanced subset to dynamically correct the weight distribution, to some extent correcting the overfitting problem on the training set, but without dynamically judging the data drift between training-set and testing-set samples. Considering that in the process of untrimmed video prediction, the long-tail problem may change across class distributions and the difference between training-set and test-set distributions may be very large, we propose the ICCA loss to specifically address the self-adaptive problem of long-tail distribution with drift characteristics in video datasets. Our method achieves optimal or suboptimal performance in mean accuracy (top-1, top-5) and mean precision of each class for recognition and prediction.

Table 4: Recognition, prediction, and time estimation results using different backbones on the untrimmed THUMOS14 and ActivityNET v1.3 datasets.

dataset (untrimmed) | method | top-1 acc (reco / pred) | top-5 acc (reco / pred) | time acc
THUMOS14 | RU-reg | – / 59.2 | – / 92.6 | 35.30
THUMOS14 | latent goal | 60.1 / 56.8 | 92.6 / 89.8 | 37.76
THUMOS14 | ours | 67.9 / 64.8 | 95.5 / 92.8 | 38.11
ActivityNET v1.3 | RU-reg | – / 28.9 | – / 55.7 | 30.87
ActivityNET v1.3 | latent goal | 63.5 / 31.0 | 81.5 / 58.5 | 14.50
ActivityNET v1.3 | ours | 65.0 / 34.3 | 84.0 / 66.5 | 27.04

Comparison with trimmed methods.
To be fair, samples are processed into video clips containing the “background” class to study the effectiveness of our backbone under the long-tail distribution between classes (between action classes and the background class) and within classes (among the background classes). RULSTM (Furnari and Farinella 2020) is designed to predict categories within 1 second in the future. Latent goal (Roy and Fernando 2022) recognizes and predicts the current category and the next category. These two methods are representative methods for predicting human actions on trimmed datasets. As can be seen from Table 3, our method achieves good performance.

Comparison with untrimmed methods. Table 4 shows the recognition, prediction, and time-duration estimation results of different backbones on the untrimmed THUMOS14 and ActivityNET v1.3 datasets. Among them, RU-reg (Rodin et al. 2022) aims to predict the time-to-action by exploiting an additional fully connected layer attached to the RULSTM model and trained to solve the regression task and the multi-classification task. In addition, latent goal is commonly used for multi-classification tasks; we also add a fully connected layer at the end of the model to perform a regression task. Table 4 shows that we achieve advanced results on both untrimmed datasets.

Ablation Experiments

The effect of sliding window length and stride. We have explored the impact of different lengths and strides of the sliding window. Experiments show that the smaller the window length, the larger the sample size; with the same window length, the smaller the stride, the larger the sample size, and the mean precision is often larger. Taking into account the computational cost and performance, we chose length=100 and stride=20 as our experimental settings.

The effect of fusion frequency. We explored the impact of the fusion frequency in the ICCA loss (Table 5).
We believe that the fusion frequency is an important parameter that determines how often the distributions of the new data stream and the current data stream are calibrated.

Table 5: The effect of fusion frequency.

fusion frequency | top-1 acc (reco / pred) | top-5 acc (reco / pred) | MP (reco / pred)
5  | 66.3 / 64.7 | 90.4 / 92.4 | 40.48 / 45.42
10 | 67.3 / 70.2 | 92.9 / 91.0 | 45.47 / 54.61
20 | 64.7 / 61.6 | 93.1 / 90.8 | 48.05 / 50.31
30 | 63.7 / 60.1 | 89.6 / 91.0 | 37.08 / 39.19

Table 6: The effect of the error function and the weight strategy.

weight strategy | error function | top-1 acc (reco / pred) | top-5 acc (reco / pred) | MP (reco / pred)
sum  | class precision | 66.0 / 65.0 | 92.6 / 89.6 | 43.86 / 47.33
mean | class precision | 67.3 / 57.9 | 94.3 / 88.6 | 34.06 / 37.20
max  | class precision | 67.3 / 70.2 | 92.9 / 91.0 | 45.47 / 54.61
max  | f1-score | 65.2 / 63.8 | 92.8 / 89.7 | 30.4 / 40.67
max  | recall | 67.3 / 59.9 | 94.8 / 88.9 | 33.40 / 36.82
max  | Gmean | 64.1 / 57.3 | 94.0 / 90.6 | 32.84 / 38.11

Table 5 shows that the smaller the fusion frequency, the more frequently the current data stream and new data stream distributions are calibrated, and the more accurate the predicted results tend to be. However, if the fusion frequency is set to 5, the calibration is too frequent, which is not conducive to the convergence of the training model and increases the computational burden to some extent, so the performance is not the best. Based on the above considerations, we selected fusion frequency=10 as our experimental setting.

The effect of the error function. We have explored the impact of various error functions on the ICCA loss (Table 6). This ablation concerns the choice of the error function ζ(cj). We have designed four strategies: f1-score, class precision, recall, and Gmean. Experiments show that using mean precision performs best.

The effect of the weight strategy. We have explored the impact of the weight strategy on the ICCA loss (Table 6). The weight strategy is the implementation of the Ω function in Equation 4.
We have designed three strategies: sum, mean, and max. Experiments show that using the max strategy performs best.

Conclusion

We designed a method to predict human intentions in untrimmed videos based on intentional evolutionary learning. Specifically, an ICCA loss is presented to alleviate the prediction bias caused by long-tail distribution with concept-drift characteristics. Moreover, an intention-oriented evolutionary learning method is proposed to determine the intention evolution patterns and the time of evolution. Extensive experiments show that our method can achieve better results on untrimmed video than fine-tuned trimmed methods. While the paper presents an innovative approach to intention pattern detection, there are opportunities for further improvement. By improving the accuracy of the analysis models for the time of intention evolution, future research can advance the field of human intention understanding.

Acknowledgments

The authors gratefully acknowledge the financial support by the National Natural Science Foundation of China under Grant 92048301 and Grant 62202332, and the Diversified Investment Foundation of Tianjin under Grant 21JCQNJC00980.

References

Abu Farha, Y.; Richard, A.; and Gall, J. 2018. When will you do what? Anticipating temporal occurrences of activities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5343–5352.

Bai, L.; Yao, L.; Li, C.; Wang, X.; and Wang, C. 2020. Adaptive graph convolutional recurrent network for traffic forecasting. Advances in Neural Information Processing Systems, 33: 17804–17815.

Bodenstedt, S.; Wagner, M.; Mündermann, L.; Kenngott, H.; Müller-Stich, B.; Breucha, M.; Mees, S. T.; Weitz, J.; and Speidel, S. 2019. Prediction of laparoscopic procedure duration using unlabeled, multimodal sensor data. International Journal of Computer Assisted Radiology and Surgery, 14: 1089–1095.
Caba Heilbron, F.; Escorcia, V.; Ghanem, B.; and Carlos Niebles, J. 2015. ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 961–970.

Carreira, J.; and Zisserman, A. 2017. Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6299–6308.

Furnari, A.; and Farinella, G. M. 2020. Rolling-unrolling LSTMs for action anticipation from first-person video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11): 4021–4036.

Gao, J.; Yang, Z.; and Nevatia, R. 2017. RED: Reinforced encoder-decoder networks for action anticipation. arXiv preprint arXiv:1707.04818.

Girdhar, R.; and Grauman, K. 2021. Anticipative video transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 13505–13515.

Grigorev, A.; Mihaita, A.-S.; Lee, S.; and Chen, F. 2022. Incident duration prediction using a bi-level machine learning framework with outlier removal and intra–extra joint optimisation. Transportation Research Part C: Emerging Technologies, 141: 103721.

He, B.; Yang, X.; Kang, L.; Cheng, Z.; Zhou, X.; and Shrivastava, A. 2022. ASM-Loc: Action-aware segment modeling for weakly-supervised temporal action localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13925–13935.

Idrees, H.; Zamir, A. R.; Jiang, Y.-G.; Gorban, A.; Laptev, I.; Sukthankar, R.; and Shah, M. 2017. The THUMOS challenge on action recognition for videos “in the wild”. Computer Vision and Image Understanding, 155: 1–23.

Ke, Q.; Fritz, M.; and Schiele, B. 2019. Time-conditioned action anticipation in one shot. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9925–9934.

Krawczyk, B.; Minku, L. L.; Gama, J.; Stefanowski, J.; and Woźniak, M. 2017. Ensemble learning for data stream analysis: A survey.
Information Fusion, 37: 132–156.

Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, 2980–2988.

Liu, B.; Chen, Y.; Liu, S.; and Kim, H.-S. 2021. Deep learning in latent space for video prediction and compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 701–710.

Luo, W.; Zhang, T.; Yang, W.; Liu, J.; Mei, T.; Wu, F.; and Zhang, Y. 2021. Action unit memory network for weakly supervised temporal action localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9969–9979.

Luo, Z.; Guillory, D.; Shi, B.; Ke, W.; Wan, F.; Darrell, T.; and Xu, H. 2020. Weakly-supervised action localization with expectation-maximization multi-instance learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIX, 729–745. Springer.

Paul, S.; Roy, S.; and Roy-Chowdhury, A. K. 2018. W-TALC: Weakly-supervised temporal activity localization and classification. In Proceedings of the European Conference on Computer Vision (ECCV), 563–579.

Rodin, I.; Furnari, A.; Mavroeidis, D.; and Farinella, G. M. 2022. Untrimmed action anticipation. In Image Analysis and Processing–ICIAP 2022: 21st International Conference, Lecce, Italy, May 23–27, 2022, Proceedings, Part III, 337–348. Springer.

Roy, D.; and Fernando, B. 2022. Action anticipation using latent goal learning. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Waikoloa, HI, USA, January 3–8, 2022, 808–816. IEEE.

Sinha, S.; Ohashi, H.; and Nakamura, K. 2022. Class-Difficulty Based Methods for Long-Tailed Visual Recognition. International Journal of Computer Vision, 130(10): 2517–2531.

Tan, J.; Lu, X.; Zhang, G.; Yin, C.; and Li, Q. 2021. Equalization loss v2: A new gradient balance approach for long-tailed object detection.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1685–1694.

Tan, J.; Wang, C.; Li, B.; Li, Q.; Ouyang, W.; Yin, C.; and Yan, J. 2020. Equalization loss for long-tailed object recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11662–11671.

Wang, L.; Xiong, Y.; Lin, D.; and Van Gool, L. 2017. UntrimmedNets for weakly supervised action recognition and detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4325–4334.

Wang, W.; Peng, X.; Su, Y.; Qiao, Y.; and Cheng, J. 2021. TTPP: Temporal transformer with progressive prediction for efficient action anticipation. Neurocomputing, 438: 270–279.

Wanyan, Y.; Yang, X.; Ma, X.; and Xu, C. 2023. Dual Scene Graph Convolutional Network for Motivation Prediction. ACM Transactions on Multimedia Computing, Communications and Applications, 19(3s): 1–23.

Yang, W.; Zhang, T.; Yu, X.; Qi, T.; Zhang, Y.; and Wu, F. 2021. Uncertainty guided collaborative training for weakly supervised temporal action detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 53–63.

Zheng, N.; Song, X.; Su, T.; Liu, W.; Yan, Y.; and Nie, L. 2023. Egocentric Early Action Prediction via Adversarial Knowledge Distillation. ACM Transactions on Multimedia Computing, Communications and Applications, 19(2): 1–21.
SasWOT: Real-Time Semantic Segmentation Architecture Search WithOut Training

Chendi Zhu1*, Lujun Li2*, Yuli Wu3, Zhengxing Sun1†
1State Key Laboratory for Novel Software Technology, Nanjing University
2The Hong Kong University of Science and Technology
3Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
chendi.zhu@smail.nju.edu.cn, lilujunai@gmail.com, yuli.wu@lfb.rwth-aachen.de, szx@nju.edu.cn

Abstract

In this paper, we present SasWOT, the first training-free Semantic segmentation Architecture Search (SAS) framework via an auto-discovery proxy. Semantic segmentation is widely used in many real-time applications. For fast inference and memory efficiency, previous SAS methods seek the optimal segmenter by differentiable or RL search. However, the significant computational costs of these training-based SAS methods limit their practical usage. To improve the search efficiency, we explore the training-free route but empirically observe that the existing zero-cost proxies designed for the classification task are sub-optimal on the segmentation benchmark. To address this challenge, we develop a customized proxy search framework for SAS tasks to augment its predictive capabilities. Specifically, we design the proxy search space based on some observations: (1) different inputs of segmenter statistics can be well combined; (2) some basic operators can effectively improve the correlation. Thus, we build computational graphs with multiple statistics as inputs and different advanced basic arithmetic operations as the primary operations to represent candidate proxies. Then, we employ an evolutionary algorithm to crossover and mutate the superior candidates in the population based on correlation evaluation. Finally, based on the searched proxy, we perform the segmenter search without candidate training. In this way, SasWOT not only enables automated proxy optimization for SAS tasks but also achieves significant search acceleration before the retrain stage.
Extensive experiments on the Cityscapes and CamVid datasets demonstrate that SasWOT achieves a superior trade-off between accuracy and speed over several state-of-the-art techniques. More remarkably, on the Cityscapes dataset, SasWOT achieves 71.3% mIoU at 162 FPS.

Introduction

Semantic segmentation predicts pixel-level annotations of different semantic categories for an image. Recently, many real-time applications like autonomous driving require segmenters to perform fast inference on low-power edge devices while guaranteeing satisfactory performance. However, many state-of-the-art models stack convolutions, fuse multi-scale features, and increase the resolution to improve accuracy, making them suffer from high computational resource and memory budgets (Liu et al. 2023; Li et al. 2023b; Li and Jin 2022; Li et al. 2023a, 2022b,a, 2020; Li 2022; Shao et al. 2023).

*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Kendall-Tau (left) and Spearman (right) Correlation on the Segmentation Benchmark.

Table 1: Comparison of various SAS methods. h indicates GPU-hour.

Method | Algorithm | Cost
Auto-DeepLab (Liu et al. 2019) | Gradient | 72 h
GAS (Li et al. 2019a) | Graph | 160 h
CAS (Zhang et al. 2019) | Gradient | 200 h
DF-Seg (Li et al. 2019b) | Pruning | 9600 h
FasterSeg (Chen et al. 2020) | Gradient | 48 h
SasWOT | Training-free | 1.8 h

To tackle this challenge, much lightweight architectural engineering (Badrinarayanan, Kendall, and Cipolla 2017; Paszke et al. 2016) has been proposed to allow segmenters to be deployed on resource-constrained platforms. For example, ICNet (Zhao et al. 2018a) uses an image cascade network to incorporate multi-resolution inputs. BiSeNet (Yu et al. 2018a) and DFANet allocate more computing resources to feature fusion modules and utilize lightweight backbones.
However, this architectural engineering also requires extensive expert design and lots of experimental trial and error. To address this issue, some semantic segmentation architecture search (SAS) methods have been presented by building search spaces and processes. For example, Auto-DeepLab (Liu et al. 2019) first searches cell structures and the downsampling strategy to optimize resolutions. CAS (Zhang et al. 2019) searches for operators and decoders in a pre-defined multi-scale network with customized latency constraints. Although these methods achieve promising results, they also introduce significant computational overhead and optimization difficulties during the search phase. As shown in Table 1, these methods always involve training a supernet from scratch that is larger than the original model to perform a differentiable or reinforcement learning (RL) search. Thus, they require a large search cost, which brings a heavy burden for practical usage.

As Albert Einstein once said: “Everything should be made as simple as possible, but not simpler”. Inspired by recent zero-shot NAS, we experimented with training-free methods designed for classification tasks and evaluated their predictability on a segmentation benchmark. As shown in Figure 1, these methods typically perform worse on segmentation tasks. These failures arise largely because they are hand-designed fixed forms that do not generalize well to new segmentation tasks with different data domains, architectural styles, and evaluation metrics. To design customized proxies for segmentation tasks, we analyze the existing methods and discover that: (1) Proxies with different model statistic inputs can be well combined (Tanaka et al. 2020). This indicates that we can design proxies that combine various inputs to improve predictability. (2) Most proxies can be factored into some input options and basic operations.
For example, log, abs, and matrix multiplication are often present in proxy expressions (Tanaka et al. 2020; Lee, Ajanthan, and Torr 2018; Lin et al. 2021). This indicates that the proxy can also be designed from scratch following Auto-Zero. Based on the above observations, we present SasWOT, an automated search framework that utilizes evolutionary algorithms to efficiently search training-free proxies for SAS. Firstly, we represent the proxy search space with the segmenter's multiple statistics (e.g., weights and gradients) as inputs and different unary and binary mathematical operations as candidates. Next, our search algorithm initializes the population and then evaluates, crosses over, and mutates candidates to obtain a better proxy. During the search, we directly use the ranking correlation with the accuracy results in the segmentation benchmark as the optimization objective, to find a proxy better fitted to the segmentation task. To speed up the process, we employ judgment and elitism-preserving strategies during the proxy search. With this automatic search framework, our SasWOT surpasses existing training-free NAS approaches by a large margin without prior knowledge. Finally, we search for segmentation architectures with SasWOT and then implement a complete training process on the searched segmentation architecture.

We perform extensive experiments to validate the performance and efficiency of our SasWOT on semantic segmentation benchmarks and multiple autonomous driving datasets. On the segmentation benchmark, SasWOT consistently improves ranking consistency compared to other state-of-the-art training-free methods. On the Cityscapes and CamVid benchmarks, our method yields a search acceleration of at least 30 times compared to traditional gradient-based SAS methods and allows the segmenter search to be completed on a single GPU within 2 hours.
The main contributions can be summarized as follows:

• We propose a novel real-time segmenter search framework via an auto-discovery proxy without training, which, to our knowledge, is the first training-free search method for segmentation.

• We present a comprehensive proxy search space and evolve proxies with the correlation on the segmentation benchmark as the fitting objective. In addition, we achieve significant search acceleration by searching for segmenters with the discovered proxies.

• We conduct extensive experiments on the standard Cityscapes and CamVid benchmarks. Compared to other real-time methods, our method obtains a state-of-the-art trade-off between performance and latency.

Related Work

Semantic Segmentation Architecture Search

Following FCN (Long, Shelhamer, and Darrell 2015), many advanced handcrafted architectures have been presented for a larger receptive field (Zhao et al. 2017; Chen et al. 2017a,b, 2018; Yu et al. 2018b) and better pixel-wise relationships (Zhao et al. 2018b; Huang et al. 2019; Fu et al. 2019; Song et al. 2019) to improve the performance of semantic segmentation. To alleviate resource budget issues, some lightweight segmenters like ENet (Paszke et al. 2016), ICNet (Zhao et al. 2018a), and BiSeNet (Yu et al. 2018a) have been proposed for real-time applications. However, it is hard for manual design methods to achieve a good trade-off between performance and efficiency, so researchers have developed semantic segmentation architecture search to automatically optimize the segmenter. For example, Auto-DeepLab (Liu et al. 2019) pioneered searching a cell-level and network-level densely-connected search space to achieve better spatial resolution changes. CAS (Zhang et al. 2019) first imposed resource constraints while searching for an efficient backbone for segmentation. Subsequent SAS methods present advanced search space designs (e.g., multi-resolution branches) and search algorithms (e.g., gradient, pruning, meta-learning).
However, these training-based methods always require training the supernetwork to evaluate the performance of different candidates, resulting in complex optimization processes and large additional computational overheads. To address this issue, we propose SasWOT, the first training-free framework for SAS. Our SasWOT directly uses proxy scoring on well-initialized segmenters to search without any training cost. Our SasWOT not only opens new doors for SAS research, but also dramatically improves the search efficiency in practical applications.

Training-Free Architecture Search

Traditional architecture search involves designing search spaces, search algorithms, and evaluation strategies to automatically discover the optimal architecture within certain constraints (Wei et al. 2024; Hu et al. 2021; Dong et al. 2022; Yang et al. 2022; Dong, Li, and Wei 2023; Dong et al. 2023; Lu et al. 2024; Zimian Wei et al. 2024). The training-based methods employ a train-then-search process by multiple trials or weight-sharing policies. Training-free architecture search, also known as zero-shot NAS, is a faster alternative to vanilla NAS and one-shot NAS, as it can predict network

Figure 2: The overall process of SasWOT includes automated proxy discovery and training-free architecture search. In the proxy search phase, we build candidates with gradients and weights as inputs and different unary/binary operations as options.
We perform an evolutionary search to remove weak individuals, and crossover & mutation to generate new populations from promising ones. Finally, we pick the best-performing proxy for the training-free architecture search, which evolves different architectures in the search space using the scores of the SasWOT proxy.

performance without training network parameters. Zero-cost proxies can be divided into parameter-level and architecture-level proxies. Parameter-level zero-cost proxies rely on pruning and summing up the saliency value of each layer weight as the proxy. Several methods have been proposed for convolution-based architectures, such as Synflow (Tanaka et al. 2020), SNIP (Lee, Ajanthan, and Torr 2018), and GraSP (Wang, Zhang, and Grosse 2020), which rely on gradient computations using a single minibatch of data at initialization and can be computed very quickly. Architecture-level zero-cost proxies evaluate the network's discriminability from the architecture level. Zen-NAS (Lin et al. 2021) proposed a novel zero-shot proxy called Zen-Score to evaluate the expressivity of a network, while NWOT (Mellor et al. 2021) examined how the linear maps induced by data points correlate for untrained network architectures. However, these methods all employ hand-designed fixed proxies for the classification task and encoder-only CNN models. In semantic segmentation, the encoder-decoder model usually has complex branches and multi-scale features that pose huge challenges for traditional proxies in predicting final accuracy. To address this challenge, we automatically optimize the first customized proxies for SAS based on some observations. Our SasWOT first explores the training-free SAS method and bridges theoretical proxy design and practical downstream applications.

Methodology

Our framework is divided into two parts: (1) evolving customized proxies on a semantic segmentation benchmark; (2) evolving optimal segmenters utilizing the searched proxies.
In this section, we first illustrate the proxy search component in terms of search space design, the search process, and the search results. Then, we introduce the details of our training-free segmenter search. The pipeline of our approach is shown in Figure 2.

Search Space for Proxy Discovery

Search Space Organization. Our approach evolves the customized proxy on a semantic segmentation benchmark (Duan et al. 2021), which consists of U-Net-like encoder-decoder models with various operators and their individual training results. We extract the activations (A), gradients (G), and weights (W) of each convolutional layer in both the encoder and decoder modules as inputs. Then, we evaluate existing proxy formulations with these inputs and observe that: (1) Even with the same input statistics, different mathematical operations can produce large performance disparities between proxies. This suggests that we can improve a proxy by tuning it with different mathematical operators. (2) Proxies with different inputs can be well combined. For example, Synflow and NWOT can be well integrated to yield better correlation. This motivates us to select multiple statistic inputs for the proxy search space, bringing additional gains. Based on the above observations, we include activations, gradients, and weights as multiple inputs to the search and represent the zero-cost proxy as a computation graph. Then we collect common mathematical operators for the intermediate nodes of the graph, which are detailed in the next part.

Primitive Operations. In the context of the zero-cost proxy, primitive operations are used to process segmenter statistics, resulting in a computational graph for performance evaluation. We consider two types of primitive operations: unary operations (operations with only one operand) and binary operations (operations with two operands).
A summary of available primitive operations is presented as follows:

• Unary operations: “elementwise pow,” “to-sum-scalar,” “elementwise exp,” “normalize,” “elementwise relu,” “elementwise sign,” “elementwise invert,” “slogdet,” “frobenius norm,” “elementwise normalized sum,” “l1norm,” “softmax,” “sigmoid,” “logsoftmax,” “to-mean-scalar,” “gram-matrix,” “elementwise log,” “elementwise abslog,” “elementwise abs,” “no op”.

• Binary operations: “elementwise sum,” “equal to,” “elementwise difference,” “elementwise product,” “lesser than,” “greater than,” “matrix multiplication,” “hamming distance,” “pairwise distance,” “l1 loss,” “cosine similarity,” “kl divergence,” “mse loss”.

Figure 3: Correlation visualization of NWOT, Synflow, SasWOT-, SasWOT (from left to right).

These mathematical operations are essential building blocks for constructing a diverse and expressive search space of zero-cost proxies. We provide more details of their formulas and properties in the supplementary material.

Evolution Procedure, Acceleration, and Objective

Based on our proxy search space and graph representation, we leverage an evolutionary algorithm (EA) to search for optimal proxy expressions. Our EA starts with an initial population of candidate proxies and iteratively evolves the population over generations using genetic operators, such as selection, crossover, and mutation, to generate better solutions. As the fitting objective, each proxy is evaluated based on the ranking correlation between its output proxy scores Q and the ground truths D, to efficiently discover the optimal proxy ρ∗ from the search space P:

ρ∗_SasWOT = arg max_{ρ∈P} τ(D, Q), (1)

where Kendall's Tau τ is used as the correlation coefficient. Each evolution step picks the top-k candidates with the highest evaluation scores and randomly selects a parent from these candidates for mutation.
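As a concrete illustration of the objective in Eq. 1, the sketch below computes Kendall's Tau between a proxy's scores Q and benchmark accuracies D (a minimal tie-free implementation; the function name and list-based interface are our own):

```python
def kendall_tau(proxy_scores, accuracies):
    """Kendall's Tau rank correlation, used as the fitness tau(D, Q) in
    Eq. 1. Tie-free sketch: (concordant - discordant) / number of pairs."""
    n = len(proxy_scores)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # A pair is concordant when the proxy ranks the two
            # architectures in the same order as their true accuracies.
            sign = (proxy_scores[i] - proxy_scores[j]) * (accuracies[i] - accuracies[j])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

A proxy that ranks all candidate architectures exactly as their accuracies do reaches the maximum fitness of 1.0; the evolutionary search simply keeps the candidates with the highest value.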
To improve the efficiency of gene evolution, we adopt a greedy strategy to maximize the evolution of candidates. Genetic algorithms are designed to iteratively evolve the best-fit candidates. Greediness is introduced by picking only the top-k candidates from the candidate pool after the mutation and crossover operators, instead of randomly picking from the candidate pool in each round as in traditional genetic algorithms. Only the top-k evolved candidates are retained in each round; if the number of candidates is smaller than the population size at that point, new proxies are randomly sampled. The top-k is drawn and the candidate pool is emptied before the start of the next round. In the mutation step, depending on the mutation mode, we randomly select unary or binary operations to change. In the crossover step, we randomly select primitive operations from the two parent proxies to form a new proxy. To further improve the search efficiency of the proxy, we propose an early-rejection strategy: if two unary operators cannot be combined by a binary operation due to a dimensional mismatch, the genetic algorithm immediately eliminates the combination. Also, if no better candidates are produced after five iterations, the top-k at that point is recorded and the candidate pool is reset randomly. Once a candidate proxy has optimization collapse

Algorithm 1: Evolution Search for SasWOT Proxy
Input: search space S, population P, max iteration T, sample ratio r, sample pool R, top-k k.
Output: SasWOT proxy with the best Kendall Tau index.
1: P0 := InitializePopulation(P);
2: sample pool R := ∅;
3: for i = 1, 2, . . .
, T do 4: Select Gi s := GetTopk(R, k); 5: // greedy evolution strategy 6: Clear sample pool R := ∅; 7: Mutate := MUTATE(Gs i); 8: Crossover Gc i := CROSSOVER(Gs i); 9: Append Gm i to R; 10: Append Gc i to R; 11: if R < P then 12: Add random samples 13: else 14: Go to line 4; 15: end if 16: end for and overflow issues, this strategy will directly break its verification and then generates new offspring to avoid searching for local optimums. Searched Zero-cost Proxy In table 3, by considering the correlation of zero-cost proxies in MICRO and MACRO space, we decided to try to combine two relatively well-performing proxies (NWOT, SasWOT) after normalization. The experimental results (SasWOT) show that this was a successful attempt. The comparison between SasWOT- and SasWOT highlights that AVTnormalized proxy combination strategies can achieve better results than individually searched proxies. We present formulas of the searched proxies SasWOT- and SasWOT as follows: ρ∗ SasW OT −= Ncl X i=1 (softmax(Wcli) < ReLU  ∂L ∂Wcli  ) (2) ρ∗ SasW OT = normaavt(ρ∗ SasW OT −) + normaavt(log|KH|) (3) where Ncl is the number of convolution and linear layers, Wcli is the weight parameter matrix of a convolution layer or a linear layer, ∂L/∂Wcli is the corresponding gradient matrix. 
K_H is the kernel matrix of binary codes in NWOT (Mellor et al. 2021).

Table 2: mIoU and inference FPS on the Cityscapes validation (val) and test sets. The training and search costs are obtained from the original papers' reports (Enet (Paszke et al. 2016), BiSeNet (Yu et al. 2018a), Fast-SCNN (Poudel, Liwicki, and Cipolla 2019), ICNet (Zhao et al. 2018a), DFANet A (Zhang et al. 2023), SFNet(DF1) (Li et al. 2022c), GAS (Li et al. 2019a), CAS (Zhang et al. 2019), DF1-Seg-d8 (Li et al. 2019b), and FasterSeg (Chen et al. 2020)) or from our experimental records.

| Method | Input Size | mIoU val / test (%) | FPS | FLOPs | Params | Search Method | Training cost (GPU days) | Search cost (GPU hours) |
| Enet | 640×360 | 58.3 | 76.9 | 3.8G | 0.4M | Manual | – | – |
| BiSeNet | 768×1536 | 69 / 68.4 | 105.8 | 14.8G | 5.8M | Manual | – | – |
| Fast-SCNN | 1024×2048 | 68.6 / 68 | 123.5 | – | 1.1M | Manual | – | – |
| ICNet | 1024×2048 | 69.5 | 37.7 | 28.3G | 26.5M | Manual | – | – |
| DFANet | 1024×1024 | 71.3 | 100 | 3.4G | 7.8M | Manual | – | – |
| SFNet(DF1) | 1024×2048 | 74.5 | NA | – | 9.03M | Manual | – | – |
| GAS | 769×1537 | 71.8 | 108.4 | – | – | Graph | – | 160 |
| CAS | 768×1536 | 71.6 / 70.5 | 108 | – | – | Gradient | – | 200 |
| DF1-Seg-d8 | 1024×2048 | 72.4 / 71.4 | 136.9 | – | – | Pruning | – | 9600 |
| FasterSeg | 1024×2048 | 69.8 | 163.9 | 28.03G | 3.42M | Gradient | 3.24 | 48 |
| Random | 1024×2048 | 67.9 / 65.8 | 162.37 | 29.17G | 4.64M | Training-free | 2.63 | 1 |
| NWOT | 1024×2048 | 69.2 / 68.4 | 162.61 | 30.56G | 2.29M | Training-free | 2.54 | 1 |
| Synflow | 1024×2048 | 67.0 / 66.0 | 161.13 | 33.42G | 3.88M | Training-free | 2.75 | 1 |
| SasWOT- | 1024×2048 | 69.2 / 66.7 | 162.04 | 32.13G | 3.12M | Training-free | 2.58 | 1 |
| SasWOT | 1024×2048 | 71.3 / 69.8 | 162.64 | 29.34G | 3.33M | Training-free | 2.55 | 1.8 |

SasWOT is built on the combination of two proxies: one is SasWOT- obtained from our search, and the other is the proxy proposed by the previous work NWOT (Mellor et al. 2021). Considering the different magnitudes of the two proxies, we adopt the strategy of AVT normalization to achieve the summation between them.
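Under one plausible reading of Eq. (2) — the "lesser than" primitive applied elementwise between the softmaxed weights and the ReLU of the gradients, with the boolean map reduced by the "to-sum-scalar" primitive (the reduction is our assumption) — the layer-wise statistic can be sketched as:

```python
import numpy as np

def _softmax(x):
    # numerically stable softmax over all elements of the matrix
    e = np.exp(x - x.max())
    return e / e.sum()

def saswot_minus(weights, grads):
    """Sketch of Eq. (2): for each of the N_cl convolution/linear layers,
    compare softmax(W_cl_i) elementwise with ReLU(dL/dW_cl_i) and
    accumulate the sum of the resulting boolean map."""
    score = 0.0
    for W, G in zip(weights, grads):
        score += float(np.sum(_softmax(W) < np.maximum(G, 0.0)))
    return score
```

The full SasWOT score of Eq. (3) then adds the AVT-normalized NWOT term log|K_H| to the AVT-normalized value of this statistic.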
Given a set of N_arch architectures L, let one proxy predict the scores P_zc(L) ∈ R^{1×N_arch}; the AVT normalization of the proxy prediction p_zc(L_i) for one architecture is then given by:

norm_avt(p_zc(L_i)) = ( p_zc(L_i) − min(P_zc(L)) ) / ( max(P_zc(L)) − min(P_zc(L)) )  (4)

In our experiments, we find that the combination of SasWOT- and NWOT (Mellor et al. 2021) effectively improves the evaluation metrics. However, due to the static nature of the AVT normalization policy, we cannot score an architecture in real time directly. To address this issue, we estimate the maximum and minimum values of possible SasWOT scores by pre-selecting a certain number of architectures from the search space at random. Figure 3 demonstrates the superiority of SasWOT- and SasWOT, which significantly outperform previous zero-cost proxy methods while enjoying higher search efficiency.

Training-free Segmenter Search

Our approach utilizes a training-free segmenter search to efficiently explore a large search space of candidate architectures without the costly and time-consuming training on large datasets. To begin, we employ an evolutionary search algorithm to obtain a good proxy. Once we have a reliable proxy, we conduct a training-free segmenter search by using the evolutionary algorithm to discover the optimal segmenter α* from the search space A. Evolutionary search is typically the most effective approach, and we opt for an EA search in this case since the limited size of the search space simplifies the process. Using randomly initialized weights W, we conduct the training-free search algorithm to identify the optimal segmenter efficiently, as:

α* = arg max_{α ∈ A} ρ*_SasWOT(α, W).  (5)

In the architecture search process, we first randomly generate matrices that denote the model architectures and add them to the candidate pool. Then the architectures are scored by the zero-cost predictor, and the top-K architectures are selected for the mutation and crossover steps.
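Eq. (4) and the pre-sampling trick for real-time scoring can be sketched as follows; the pool arguments stand in for the scores of the randomly pre-selected architectures, and the function names are illustrative only:

```python
import numpy as np

def avt_norm(score, sampled_scores):
    """Eq. (4): min-max normalize one proxy prediction against the score
    range of a set of architectures. For real-time scoring, min/max are
    estimated from a randomly pre-sampled pool instead of the full set."""
    lo = float(np.min(sampled_scores))
    hi = float(np.max(sampled_scores))
    return (score - lo) / (hi - lo)

def saswot(raw_minus, raw_nwot, pool_minus, pool_nwot):
    """Eq. (3): sum of the two AVT-normalized terms (SasWOT- and the
    NWOT statistic log|K_H|), each normalized against its own pool."""
    return avt_norm(raw_minus, pool_minus) + avt_norm(raw_nwot, pool_nwot)
```

Normalizing each term against its own pre-sampled pool keeps the two proxies on a comparable [0, 1]-ish scale before the summation.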
In the mutation step, depending on the mutation pattern, we randomly change the rows of the alpha or beta matrix. In the crossover step, we randomly select the alpha and beta matrices from two architectures to generate a new architecture. After the mutation and crossover steps, if the population is insufficient, new alpha and beta matrices are randomly generated and added to the candidates. The new architectures in each of these steps need to be verified for legitimacy, i.e., whether they have been recorded before.

Experiments

In this section, we first evaluate the ranking performance of the searched proxies on the semantic segmentation benchmark, and then use the searched proxies to perform a training-free segmenter search on Cityscapes (Cordts et al. 2016) and CamVid (Brostow et al. 2008). All models are trained from scratch without ImageNet pre-trained weights. In all experiments, the class mIoU (mean Intersection over Union per class) and FPS (frames per second) are used as the metrics for accuracy and speed, respectively.

Figure 4: Visualization of search curves. From left to right: proxy search in the macro search space, proxy search in the micro search space, segmenter search with the SasWOT- proxy, segmenter search with the SasWOT proxy.

Figure 5: Visualization of segmentation results on Cityscapes.
From left to right: ground truth, (architecture searched with) random, NWOT, Synflow, SasWOT-, SasWOT.

Table 3: Ranking results (%) on TransNAS-Bench-101 (Duan et al. 2021) Semantic Segmentation. Kd: Kendall Tau, Sp: Spearman, Ps: Pearson. All values are means.

| Ranking | MICRO Kd | MICRO Sp | MICRO Ps | MACRO Kd | MACRO Sp | MACRO Ps |
| Fisher | 11.11 | 15.20 | 10.84 | 7.66 | 11.17 | 5.99 |
| FLoPs | 43.98 | 62.55 | 53.12 | 51.18 | 70.69 | 46.72 |
| Grad norm | 38.83 | 53.49 | 22.48 | 3.62 | 5.49 | 4.42 |
| Grasp | 1.90 | 3.02 | 3.02 | 0.66 | 6.28 | 4.18 |
| Epe-nas | 53.48 | 68.40 | 66.29 | 14.26 | 19.15 | 10.71 |
| Jacov | 46.45 | 62.53 | 43.64 | 10.31 | 15.08 | 19.80 |
| NWOT | 36.7 | 58.52 | 52.09 | 45.71 | 66.09 | 58.40 |
| Params | 45.11 | 63.96 | 58.27 | 1.22 | 2.05 | 1.56 |
| SNIP | 48.34 | 64.17 | 35.47 | 17.17 | 25.88 | 24.38 |
| Synflow | 45.36 | 61.94 | 51.83 | 10.85 | 16.82 | 18.40 |
| Zen-NAS | 48.27 | 66.27 | 43.64 | 23.73 | 34.78 | 35.05 |
| SasWOT | 64.08 | 83.98 | 76.62 | 52.76 | 74.06 | 67.97 |
| SasWOT- | 55.22 | 77.63 | 75.51 | 28.92 | 41.34 | 37.06 |

Table 4: Results on the CamVid test set with resolution 960×720.

| Method | mIoU (%) | FPS | FLOPs |
| Random | 62.2 | 388.8 | 7.28G |
| NWOT (Mellor et al. 2021) | 62.2 | 392.5 | 7.63G |
| Synflow (Tanaka et al. 2020) | 57.1 | 390.3 | 8.34G |
| SasWOT- | 58.7 | 388.9 | 8.03G |
| SasWOT | 64.3 | 388.7 | 7.33G |

Experiments on Segmentation Benchmark

Dataset and implementation. The TransNAS-Bench-101 (Duan et al. 2021) dataset was used as our segmentation benchmark to evaluate the performance of the searched proxies. This dataset includes 3256 and 4096 networks from the macro-level and cell-level search spaces, respectively. These networks were trained for 30 epochs with the same setup on the Taskonomy dataset. The Taskonomy dataset was sampled from 17 classes of MS-COCO (Lin et al. 2014), and its labels were predicted by a network pre-trained on the MS-COCO dataset. We tested the performance of the searched proxies in the micro and macro search spaces, respectively. Fifty architectures were randomly selected for evaluation in each experiment, with the Kendall Tau index, Spearman index, and Pearson index being evaluated.
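For reference, the Spearman and Pearson indices used above reduce to simple correlation computations (Kendall's Tau is the analogous pairwise-concordance statistic); a minimal sketch that ignores rank ties:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation via the normalized covariance matrix."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def spearman(x, y):
    """Spearman correlation = Pearson correlation on the rank positions
    (this sketch ignores tie handling)."""
    def ranks(v):
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(len(v))
        return r
    return pearson(ranks(np.asarray(x, float)), ranks(np.asarray(y, float)))
```

Because Spearman operates on ranks, any strictly monotone transform of the proxy scores leaves it unchanged, which is why rank correlations are the natural fit for proxy evaluation.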
Through several sets of experiments, we found that the proxy SasWOT- found by the evolution search gives a more stable prediction of model performance, as its average Kendall Tau index is higher than that of the other proxies. SasWOT further improves on SasWOT-, which is more evident in the micro search space.

Comparison results. Table 3 demonstrates the ranking capabilities of different methods on the macro and micro search spaces. These results demonstrate that (1) our SasWOT achieves the best ranking performance and outperforms the other proxy methods; (2) SasWOT- performs well in some cases, indicating that our single-input statistic proxy search is effective, and SasWOT improves on this, demonstrating the advantage of our multi-input strategy; (3) among the other methods, NWOT, Synflow, and FLOPs have some advantages, which may be attributed to the properties of the search space.

Experiments on Cityscapes

Dataset and implementation. We evaluate SasWOT on the Cityscapes dataset (Cordts et al. 2016). The dataset includes over 5,000 images, each with high-quality pixel-level annotations for various semantic segmentation tasks. Inheriting FasterSeg's architecture search space, SasWOT enables the two branches to share three operators. Unlike previous work, SasWOT fixes the pruning rate of the searched student model, thus ensuring a reduced model complexity. In the retraining phase, we adopted a knowledge distillation strategy and used a DeepLabv3+ model with a ResNet-101 backbone as the teacher network. For this task, we used an SGD optimizer with an initial learning rate of 0.015 and exponential learning rate decay. We trained the searched architectures for 800 epochs and evaluated SasWOT on the Cityscapes validation and test sets. We used a raw image resolution of 1024 × 2048 (H×W) to measure mIoU and inference speed.
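The paper states only that a DeepLabv3+ (ResNet-101) teacher is distilled into the searched student; a common choice for the distillation term is a temperature-scaled KL divergence, sketched here with an assumed temperature that the paper does not specify:

```python
import numpy as np

def _softmax_t(logits, T):
    # temperature-scaled, numerically stable softmax
    z = np.asarray(logits, float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Temperature-scaled KL(teacher || student); multiplying by T^2 keeps
    gradient magnitudes comparable across temperatures. T=4 is an assumed
    value, not one stated in the paper."""
    p = _softmax_t(teacher_logits, T)
    q = _softmax_t(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

In semantic segmentation this term is typically averaged over all pixels and combined with the per-pixel cross-entropy loss on the ground-truth labels.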
Comparison results. Table 2 shows the performance metrics and inference speeds of the models searched using different proxies. Compared to the proxies presented in previous work, the architecture searched using SasWOT- performs no worse than the architecture searched by NWOT. It is worth mentioning that SasWOT further improves the performance of the searched architectures over SasWOT- with no additional computational cost. It achieves performance comparable to the gradient-based FasterSeg search method, but with a significant reduction (about 24× faster) in search time. In addition, we also compared other training-free methods under the same search and training settings. The experimental results show that NWOT is the superior method among them. In conclusion, SasWOT achieves a significant improvement in search efficiency compared to previous SAS methods and superior performance compared to other training-free methods on Cityscapes.

Experiments on CamVid

Dataset and implementation. CamVid is another street-scene dataset, extracted from five video sequences taken from a driving automobile. It contains 701 images in total: 367 for training, 101 for validation, and 233 for testing. The images have a resolution of 720 × 960 (H×W) and 11 semantic categories. Considering the spatial resolution of the FasterSeg search space, we cropped the input images randomly and then resized them to 512 × 1024 (H×W). Similar to the experiments on Cityscapes, we used a DeepLabv3+ teacher network for knowledge distillation with the searched architectures. We trained the searched architecture for 80 epochs using the SGD optimizer, with an initial learning rate of 0.01 and exponential learning rate decay.

Comparison results. Table 4 reveals that by achieving this metric in only 80 training epochs while maintaining an inference speed approximating that of FasterSeg, the SasWOT search framework is clearly more efficient.
The teacher network DeepLabv3+ only achieves segmentation results comparable to SasWOT under the same training setup while consuming several times the computational power. The implication of this finding is that the SasWOT search framework may be a more practical and efficient approach to semantic segmentation tasks, especially in real-time applications where inference speed is critical. However, it is important to note that this comparison is specific to the CamVid dataset and may not necessarily generalize to other datasets or tasks.

Ablation Study

Search algorithm. The evolutionary search algorithm used in this study is a gradient-free optimization technique commonly used in AutoML. As shown in Figure 4, the evolutionary algorithm obtains faster convergence and better final search results than random search in both the proxy and architecture search processes. This suggests that the evolutionary algorithm is a more effective and efficient approach for our SasWOT than random search.

Correlation Visualization. In Figure 3, the SasWOT scores and the performance ground truth are visualized to observe the proxy's predictability. The performance ground truth represents the actual segmentation results obtained by the larger and more complex segmentation model. The figure shows that SasWOT can effectively predict the actual segmentation results, indicating that it is a reliable proxy for semantic segmentation tasks.

Segmentation Result Visualization. As shown in Figure 5, the visualization reveals that our method can accurately segment detailed regions that are challenging for other methods, such as small objects and thin structures. Additionally, our method can effectively segment important categories, e.g., vehicles and pedestrians, which are critical for applications like autonomous driving and surveillance.
The visualization also demonstrates that our method can maintain region connectivity, meaning that adjacent regions of the same semantic class are accurately grouped together.

Conclusion

In this paper, we present a novel training-free SAS framework, dubbed SasWOT, to efficiently search for the optimal segmenter utilizing our automatically searched proxy. Our framework includes a customized proxy search and a training-free segmenter search. With the discovered proxies, our SasWOT allows an efficient search for promising candidates without any training cost. As a result, our SasWOT achieves at least 30× acceleration in the search stage. Comprehensive experimental results on segmentation benchmarks and multiple autonomous driving segmentation datasets illustrate the superior ranking ability and segmentation performance of our method. We hope that our novel investigations will give more insight and new directions to the semantic segmentation and NAS research communities.

Acknowledgments

The paper is supported by: the National Natural Science Foundation of China No. 42075139, 42077232; the Science and Technology Program of Jiangsu Province No. BE2020082 and BE2022063; the Innovation Fund of State Key Laboratory for Novel Software Technology No. ZZKT2022A18.

References

Badrinarayanan, V.; Kendall, A.; and Cipolla, R. 2017. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12): 2481–2495.
Brostow, G. J.; Shotton, J.; Fauqueur, J.; and Cipolla, R. 2008. Segmentation and recognition using structure from motion point clouds. In European Conference on Computer Vision, 44–57. Springer.
Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2017a. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. T-PAMI.
Chen, L.-C.; Papandreou, G.; Schroff, F.; and Adam, H. 2017b. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587.
Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; and Adam, H. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), 801–818.
Chen, W.; Gong, X.; Liu, X.; Zhang, Q.; Li, Y.; and Wang, Z. 2020. FasterSeg: Searching for Faster Real-time Semantic Segmentation. In International Conference on Learning Representations.
Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3213–3223.
Dong, P.; Li, L.; and Wei, Z. 2023. DisWOT: Student Architecture Search for Distillation WithOut Training. In CVPR.
Dong, P.; Li, L.; Wei, Z.; Niu, X.; Tian, Z.; and Pan, H. 2023. EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization. arXiv preprint arXiv:2307.10554.
Dong, P.; Niu, X.; Li, L.; Xie, L.; Zou, W.; Ye, T.; Wei, Z.; and Pan, H. 2022. Prior-Guided One-shot Neural Architecture Search. arXiv preprint arXiv:2206.13329.
Duan, Y.; Chen, X.; Xu, H.; Chen, Z.; Liang, X.; Zhang, T.; and Li, Z. 2021. TransNAS-Bench-101: Improving transferability and generalizability of cross-task neural architecture search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5251–5260.
Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; and Lu, H. 2019. Dual attention network for scene segmentation. In CVPR.
Hu, Y.; Wang, X.; Li, L.; and Gu, Q. 2021. Improving one-shot NAS with shrinking-and-expanding supernet. Pattern Recognition.
Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; and Liu, W. 2019. CCNet: Criss-cross attention for semantic segmentation. In ICCV.
Lee, N.; Ajanthan, T.; and Torr, P. H. 2018. SNIP: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340.
Li, G.; Qian, G.; Delgadillo, I. C.; Müller, M.; Thabet, A.; and Ghanem, B. 2019a. SGAS: Sequential Greedy Architecture Search. ArXiv.
Li, L. 2022. Self-Regulated Feature Learning via Teacher-free Feature Distillation. In ECCV.
Li, L.; Dong, P.; Li, A.; Wei, Z.; and Ya, Y. 2023a. KD-Zero: Evolving Knowledge Distiller for Any Teacher-Student Pairs. In Thirty-seventh Conference on Neural Information Processing Systems.
Li, L.; Dong, P.; Wei, Z.; and Yang, Y. 2023b. Automated Knowledge Distillation via Monte Carlo Tree Search. In ICCV.
Li, L.; and Jin, Z. 2022. Shadow knowledge distillation: Bridging offline and online knowledge transfer. Advances in Neural Information Processing Systems.
Li, L.; Shiuan-Ni, L.; Yang, Y.; and Jin, Z. 2022a. Boosting Online Feature Transfer via Separable Feature Fusion. In IJCNN.
Li, L.; Shiuan-Ni, L.; Yang, Y.; and Jin, Z. 2022b. Teacher-free Distillation via Regularizing Intermediate Representation. In IJCNN.
Li, L.; Wang, Y.; Yao, A.; Qian, Y.; Zhou, X.; and He, K. 2020. Explicit Connection Distillation.
Li, X.; Zhang, J.; Yang, Y.; Cheng, G.; Yang, K.; Tong, Y.; and Tao, D. 2022c. SFNet: Faster, accurate, and domain agnostic semantic segmentation via semantic flow. arXiv preprint arXiv:2207.04415.
Li, X.; Zhou, Y.; Pan, Z.; and Feng, J. 2019b. Partial order pruning: For best speed/accuracy trade-off in neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9145–9153.
Lin, M.; Wang, P.; Sun, Z.; Chen, H.; Sun, X.; Qian, Q.; Li, H.; and Jin, R. 2021. Zen-NAS: A Zero-Shot NAS for High-Performance Image Recognition.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, 740–755. Springer.
Liu, C.; Chen, L.-C.; Schroff, F.; Adam, H.; Hua, W.; Yuille, A. L.; and Fei-Fei, L. 2019. Auto-DeepLab: Hierarchical neural architecture search for semantic image segmentation. In CVPR.
Liu, X.; Li, L.; Li, C.; and Yao, A. 2023. NORM: Knowledge Distillation via N-to-One Representation Matching. arXiv preprint arXiv:2305.13803.
Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440.
Lu, L.; Chen, Z.; Lu, X.; Rao, Y.; Li, L.; and Pang, S. 2024. UniADS: Universal Architecture-Distiller Search for Distillation Gap. In AAAI.
Mellor, J.; Turner, J.; Storkey, A.; and Crowley, E. J. 2021. Neural architecture search without training. In ICML.
Paszke, A.; Chaurasia, A.; Kim, S.; and Culurciello, E. 2016. ENet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147.
Poudel, R. P.; Liwicki, S.; and Cipolla, R. 2019. Fast-SCNN: Fast semantic segmentation network. arXiv preprint arXiv:1902.04502.
Shao, S.; Dai, X.; Yin, S.; Li, L.; Chen, H.; and Hu, Y. 2023. Catch-Up Distillation: You Only Need to Train Once for Accelerating Sampling. arXiv preprint arXiv:2305.10769.
Song, L.; Li, Y.; Li, Z.; Yu, G.; Sun, H.; Sun, J.; and Zheng, N. 2019. Learnable Tree Filter for Structure-preserving Feature Transform. In NeurIPS.
Tanaka, H.; Kunin, D.; Yamins, D. L.; and Ganguli, S. 2020. Pruning neural networks without any data by iteratively conserving synaptic flow. NeurIPS.
Wang, C.; Zhang, G.; and Grosse, R. 2020. Picking winning tickets before training by preserving gradient flow. arXiv preprint arXiv:2002.07376.
Wei, Z.; Pan, H.; Li, L. L.; Lu, M.; Niu, X.; Dong, P.; and Li, D. 2024. TVT: Training-free Vision Transformer Search on Tiny Datasets. In ICASSP.
Yang, C.; Zhou, H.; An, Z.; Jiang, X.; Xu, Y.; and Zhang, Q.
2022. Cross-Image Relational Knowledge Distillation for Semantic Segmentation. arXiv preprint arXiv:2204.06986.
Yu, C.; Wang, J.; Peng, C.; Gao, C.; Yu, G.; and Sang, N. 2018a. BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), 325–341.
Yu, C.; Wang, J.; Peng, C.; Gao, C.; Yu, G.; and Sang, N. 2018b. Learning a discriminative feature network for semantic segmentation. In CVPR.
Zhang, Y.; Li, K.; Zhang, G.; Zhu, Z.; and Wang, P. 2023. DFA-UNet: Efficient Railroad Image Segmentation. Applied Sciences, 13(1): 662.
Zhang, Y.; Qiu, Z.; Liu, J.; Yao, T.; Liu, D.; and Mei, T. 2019. Customizable Architecture Search for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 11641–11650.
Zhao, H.; Qi, X.; Shen, X.; Shi, J.; and Jia, J. 2018a. ICNet for real-time semantic segmentation on high-resolution images. In Proceedings of the European Conference on Computer Vision (ECCV), 405–420.
Zhao, H.; Shi, J.; Qi, X.; Wang, X.; and Jia, J. 2017. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2881–2890.
Zhao, H.; Zhang, Y.; Liu, S.; Shi, J.; Change Loy, C.; Lin, D.; and Jia, J. 2018b. PSANet: Point-wise spatial attention network for scene parsing. In ECCV.
Wei, Z.; Li, L. L.; Dong, P.; Hui, Z.; Li, A.; Lu, M.; Pan, H.; and Li, D. 2024. Auto-Prox: Training-Free Vision Transformer Architecture Search via Automatic Proxy Discovery. In AAAI.
Enhance Sketch Recognition's Explainability via Semantic Component-Level Parsing

Guangming Zhu1,2,3, Siyuan Wang1, Tianci Wu1, Liang Zhang1,2,3,∗
1School of Computer Science and Technology, Xidian University, China
2Key Laboratory of Smart Human-Computer Interaction and Wearable Technology of Shaanxi Province
3Xi'an Key Laboratory of Intelligent Software Engineering
gmzhu@xidian.edu.cn, siyuanwang@stu.xidian.edu.cn, 22031212495@stu.xidian.edu.cn, liangzhang@xidian.edu.cn

Abstract

Free-hand sketches are appealing to humans as a universal tool to depict the visual world. Humans can easily recognize varied sketches of a category by identifying the concurrence and layout of the intrinsic semantic components of that category, since humans draw free-hand sketches based on a common consensus about which types of semantic components constitute each sketch category. For example, an airplane should at least have a fuselage and wings. Based on this analysis, a semantic component-level memory module is constructed and embedded in the proposed structured sketch recognition network in this paper. The memory keys representing the semantic components of each sketch category can be self-learned and enhance the recognition network's explainability. Our proposed network can deal with different situations of sketch recognition, i.e., with or without semantic component labels of strokes. Experiments on the SPG and SketchIME datasets demonstrate the memory module's flexibility and the recognition network's explainability. The code and data are available at https://github.com/GuangmingZhu/SketchESC.

Introduction

Free-hand sketching is a universal tool to depict the visual world, and it is not bound by age, race, language, geography, or national boundaries. Sketch images are highly sparse and abstract, and lack background. A sketch can be regarded as an expression of the human brain's internal representation of the visual world (Xu et al. 2022).
Humans can easily recognize sketches and identify their intrinsic semantic components, even though sketches of the same category drawn by different persons may be very different in appearance. A sketch can be represented as an image in the static pixel space, as a time series in the dynamic stroke coordinate space, or as a graph in the geometric graph space. This has resulted in various Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Graph Neural Network (GNN) based methods for sketch recognition (Zhang et al. 2019; Xu et al. 2022). These methods usually take image- or Scalable Vector Graphics (SVG)-format data as input and predict the category label for a given sketch sample. However, there is a lack of work on interpreting the reasons for such predictions.

∗Corresponding Author. Copyright ©2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Explainable artificial intelligence (XAI) has become a hot research topic that aims to explain models' decisions (Ramaswamy et al. 2020; Shitole et al. 2021; Garau et al. 2022). Visualizing the activation maps of deep neural networks is widely used in computer vision. However, sketch images, composed of stroke lines without textures, are different from natural images. This means that existing XAI methods cannot be applied directly in the sketch research field. A first look at explainability for human sketches was achieved by SketchXAI using counterfactual explanation (Qu et al. 2023). The stroke location inversion module in SketchXAI offers an explainability angle on sketches by asking a network how well it can recover the stroke locations of an unseen sketch. Liu et al. developed an image classifier explanation model using counterfactual maps, in which a counterfactual map generator module is used to identify the critical structures for a specific category (Liu et al. 2023).
Counterfactual explanation (CE), as a post-hoc explainability method, aims to identify the minimal input changes required for a model to make a different visual decision (Van Looveren and Klaise 2021). SketchXAI (Qu et al. 2023) used CE to relocate reshuffled strokes to construct a sketch given a category, while Liu et al. designed a counterfactual map generator to discover the stroke-level principal components for a specific category (Liu et al. 2023). The above two methods try to explain the question of "why the sketch is classified as X" by providing positive and negative semantic explanation evidence. However, we believe that the concurrence and layout of the intrinsic semantic components of a category can be crucial evidence to explain the question from another perspective. For example, taking into consideration the common knowledge that an airplane should at least have a fuselage and wings, if a sketch is composed of strokes that can be semantically grouped into a fuselage and wings, it probably is an airplane. Based on the analysis above, we propose to enhance sketch recognition's explainability via semantic component-level parsing. Specifically, a Semantic Component-level Memory (SCM) module is constructed, whose memory keys represent the semantic components of different sketch categories. The SCM module is embedded in a Structured Sketch Recognition (SSR) network, and evolves the stroke features based on their similarity with the learnable features of the memory keys. The fused stroke-level or component-level
For the dataset with the category labels and the semantic component labels of strokes, the supervision on the component-level parsing in the SCM module and on the semantic segmentation results of the Transformer can be used to achieve a precise and explainable recognition performance. For the dataset only with category labels, the supervision on the compositionality can be used in the proposed SSR network to enhance the recognition network’s explainability. This flexibility makes the proposed SCM module and SSR network applicable on sketch recognition and segmentation tasks and achieve better and explainable performance. The main contribution can be summarized as follows. • A semantic component-level memory module is constructed, which can learn and store memory keys representing semantic components, and do explainable parsing from strokes to components. • A structured sketch recognition network is proposed, which has hierarchical and explainable abilities, from stroke-level embedding, component-level parsing to sketch-level recognition. • The proposed network is explainable and flexibility to deal with the sketch recognition situations with or without semantic component labels of strokes, and can achieve remarkable performance on the public datasets. Related Work Sketch Recognition Sketches are generally represented as pixel-level rasterized images or ordered sequences of point coordinates. Typically, CNNs (Yu et al. 2017; Prabhu et al. 2018), RNNs (Sarvadevabhatla and Kundu 2016; Ha and Eck 2017), or CNNRNN architectures (Xu et al. 2018; Li et al. 2020) were constructed for sketch recognition. Recently, the trend from Euclidean (CNN, RNN based) to topological analysis (GNN based) has emerged in sketch recognition. A sketch can also be represented as the sparsely connected graphs in the topological space. Therefore, GNN based models were proposed to model sketch’s local and global topological stroke structures (Xu, Joshi, and Bresson 2021). 
There is no consensus on which representation style is better than the others, as each has its own merits depending on the application scenario. Rasterized images ignore the sketching order and are better for offline recognition. Sequence-based representations can be used to continuously predict labels from accumulated sketch strokes online, and can be used in more interactive real-time applications. Graph-based representations are flexible for encoding local and global geometric sketch structures, and can be used for sketch grouping or segmentation. However, no matter which representation style is used, visual explanation is rarely studied for sketch recognition.

Visual Explanation

Various activation map visualization techniques, such as the Grad-CAM series of methods (Selvaraju et al. 2017; Chattopadhay et al. 2018; Omeiza et al. 2019), have been widely researched to interpret a classifier's decision-making rationale. These methods highlight the essential regions, but explainability for sketch recognition is better served by exploring strokes' effects on recognition, so they are not suitable for sketch research. In contrast to these pixel-level methods, patch-level methods try to use representative patches to explain the classifier's prediction (Chen et al. 2019; Zhang et al. 2018; Ge et al. 2021). However, considering the surrounding or overlapping between the strokes of a sketch in the spatial layout, patches cannot always represent individual semantic components. Besides, explanation via visualization is hard for non-expert users to understand. Counterfactual explanation methods (Van Looveren and Klaise 2021; Miller 2019) supply alternative approaches that identify the minimal input changes required for a model to make a different visual decision. SketchXAI (Qu et al. 2023) used CE to relocate reshuffled strokes to construct a sketch given a category, while Liu et al.
designed a counterfactual map generator to discover the stroke-level principal components for a specific category (Liu et al. 2023). These methods constitute the first explorations of sketch recognition's explainability at the stroke level. Humans draw free-hand sketches based on a common consensus about which types of semantic components constitute each sketch category. The strokes of a sketch can be considered abstract representations of an object's shape, components, or attributes. Therefore, since humans perceive the visual world by parsing objects' shapes, components, and attributes hierarchically and structurally, why can sketch recognition networks not enhance their explainability by identifying the semantic components that strokes constitute? Alaniz et al. constructed a Primitive-Matching Network (PMN) to learn interpretable abstractions of a sketch through simple primitives (Alaniz et al. 2022). Zhu et al. proposed a simultaneous sketch recognition and segmentation (SketchRecSeg) network which parses the semantic components while recognizing a sketch (Zhu et al. 2023). However, PMN (Alaniz et al. 2022) only fulfills the matching between strokes and primitives, and SketchRecSeg (Zhu et al. 2023) uses a two-stream architecture whose segmentation stream cannot enhance its recognition stream's explainability.
Methodology
We aim to construct a Structured Sketch Recognition (SSR) network which performs Stroke-Level Embedding on each stroke, implements Component-Level Parsing, and fulfills explainable Sketch-Level Recognition, as shown in Fig. 1. For data with category labels and the semantic component labels of each stroke (i.e., scenario x in Fig. 1), sketches can be recognized and semantically segmented simultaneously. For data with only category labels and prior knowledge about the intrinsic semantic components of each category (i.e., scenario y in Fig.
1), sketches can be recognized with the auxiliary constraint of which types of semantic components constitute each sketch category. Both scenarios produce sketch recognition results with auxiliary information about which types of semantic components constitute each sketch sample. This enhances the sketch recognition network's explainability.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7732
Figure 1: Overview of the proposed Structured Sketch Recognition network. The x indicates the scenario in which the Semantic Component-Level Memory module feeds the fused stroke-level features into the Transformer for sketch recognition and segmentation. The y indicates the scenario in which the fused component-level features are fed into the Transformer for sketch recognition and the probability prediction of the existence of each type of semantic component.
Stroke-Level Embedding
Formally, each sketch can be represented as an ordered sequence of strokes, denoted as {s_1, s_2, ..., s_i, ..., s_N}. Stroke s_i consists of k points, {s_{i,1}, s_{i,2}, ..., s_{i,k}}, and each point contains a two-dimensional coordinate value and a two-dimensional binary pen state (Ha and Eck 2018). Three descriptors are learned to identify three inherent properties of each stroke, i.e., shape sh_i, stroke order o_i, and location l_i, as in SketchXAI (Qu et al. 2023). The location of stroke s_i is defined as the coordinate of its first point s_{i,1}. Specifically, a bidirectional Long Short-Term Memory (LSTM) network is used for the shape embedding sh_i of each stroke, a learnable embedding is used for the order embedding o_i, and one linear layer is used for the location embedding l_i. These three kinds of embeddings are summed as the stroke embedding.
Component-Level Parsing
When a sketch is represented as a sparsely connected graph, the graph nodes generally denote the stroke points, as in SketchGNN (Yang et al.
2021) and the Multi-Graph Transformer (Xu, Joshi, and Bresson 2021). In this study, stroke points have already been aggregated in the stroke-level embedding stage, so a stroke-level graph G = (V, E) can be constructed. V denotes the graph node set, in which each node represents a stroke. E denotes the edges that connect adjacent strokes in sketching order.
Dynamic Graph Convolution. The above stroke-level embedding does not involve inter-stroke feature fusion. However, the semantic meaning of a stroke depends not only on its shape and location but also on the context of surrounding strokes. Inter-stroke fusion is necessary to learn which strokes constitute a semantic component. A two-layer dynamic graph convolution (Yang et al. 2021) unit is used in our network, adopting the same graph convolution operation as EdgeConv (Wang et al. 2019). To enlarge the receptive field, E is updated layer by layer using the Dilated k-NN (Li et al. 2019). The motivation for updating E is to explore feature fusion between strokes that belong to the same semantic component but are not adjacent in sketching order. The dilation ratios in the two layers are 1 and 2, respectively. A residual connection in each graph convolution layer sums the input and output features.
Semantic Component-Level Memory. Memory-augmented neural networks utilize external memory to explicitly access past experiences (Khasahmadi et al. 2019). A Semantic Component-level Memory (SCM) module can store the feature representations of semantic components, so that a similarity metric can be used to associate strokes with the semantic components to which they belong. In this case, strokes belonging to the same semantic component can be fused further to obtain component-level features.
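The stroke-level embedding and the two-layer dynamic graph convolution described above can be sketched as follows. This is a minimal NumPy illustration under stated simplifications, not the authors' implementation: the bidirectional LSTM shape encoder is replaced by a mean-pooling projection, the EdgeConv MLP is reduced to one random linear map, edges are built by k-NN throughout rather than from sketching-order adjacency, and all weights are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 32  # embedding width (768 in the paper; smaller here for the sketch)

# Stand-ins for the learned modules of the stroke-level embedding.
W_shape = rng.normal(size=(4, D)) * 0.1       # proxy for the BiLSTM shape encoder
W_loc = rng.normal(size=(2, D)) * 0.1         # linear location embedding
order_table = rng.normal(size=(64, D)) * 0.1  # learnable order embedding table

def embed_stroke(points, order):
    """points: (k, 4) rows of (x, y, pen-state bits); order: stroke index."""
    sh = points.mean(axis=0) @ W_shape     # shape: mean-pool proxy for the BiLSTM
    loc = points[0, :2] @ W_loc            # location: the first point's coordinate
    return sh + loc + order_table[order]   # the three embeddings are summed

def dilated_knn(X, k, dilation):
    """Dilated k-NN: among the k*dilation nearest strokes keep every
    `dilation`-th one, enlarging the receptive field of the next layer."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # no self-loops
    return np.argsort(d2, axis=1)[:, : k * dilation : dilation]

def edge_conv(X, nbrs, W):
    """EdgeConv-style update: max over neighbors of h([x_i, x_j - x_i]),
    with the MLP h reduced to one linear map W; residual connection added."""
    xi = np.repeat(X[:, None, :], nbrs.shape[1], axis=1)
    xj = X[nbrs]
    msg = np.concatenate([xi, xj - xi], axis=-1) @ W
    return X + msg.max(axis=1)

# A toy sketch with 6 strokes of 5 points each.
strokes = [rng.normal(size=(5, 4)) for _ in range(6)]
Q = np.stack([embed_stroke(s, i) for i, s in enumerate(strokes)])

# Two graph convolution layers, edges rebuilt with dilation ratios 1 and 2.
W_gc = rng.normal(size=(2 * D, D)) * 0.1
Q = edge_conv(Q, dilated_knn(Q, k=2, dilation=1), W_gc)
Q = edge_conv(Q, dilated_knn(Q, k=2, dilation=2), W_gc)
print(Q.shape)  # (6, 32): one context-aware feature per stroke
```

The resulting matrix Q plays the role of the stroke features that the SCM module consumes next.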
Explainable similarity metrics ensure the explainability of the semantic component-level parsing, so the category classifier can perform explainable inference, i.e., "The sketch is recognized as X because it is composed of the semantic components which constitute X". Specifically, the SCM module consists of a multi-head array of memory keys, and each semantic component is represented by a multi-head key in SCM. Given the stroke feature q_i output by the dynamic graph convolution module, we use Eq. (1) as a kernel to measure the normalized similarity between the stroke feature q_i and the key k_j of SCM (the head index of multi-head keys is omitted for simplicity):

C_{i,j} = (ε + ∥q_i − k_j∥² / τ)^(−(τ+1)/2) / Σ_{j′} (ε + ∥q_i − k_{j′}∥² / τ)^(−(τ+1)/2),  (1)

where C_{i,j} is the normalized score between the stroke feature q_i and the memory key k_j (representing the j-th type of semantic component), τ is the degree of freedom, and ε is a bias value that is much smaller than the average of ∥q_i − k_j∥²/τ. The memory keys are learnable parameters and are learned automatically during network training. A max-pooling operation selects the most similar head from the multi-head key of each semantic component for each stroke. For simplicity, in the following description we use C_{i,j} to represent the similarity between the stroke q_i and its most similar head key k_j of the j-th type of semantic component, and use K ∈ R^{K×d} to represent the set of most similar head keys of each semantic component for one stroke, where K is the number of component types. A Softmax operation is further applied along the j-dimension of {C_{i,j}} to obtain the normalized assignment matrix C ∈ R^{N×K}, where N is the stroke count.
Feature Fusion. Two feature fusion strategies are designed. One is stroke-level feature fusion, i.e., enhancing the stroke features with memory keys, denoted as

F_s ∈ R^{N×d} = (1 − max_j(C)) ◦ Q + max_j(C) ◦ (C ∗ K).  (2)
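The component-assignment computation of Eq. (1), i.e. the kernel, the max-pooling over heads, and the final Softmax, can be sketched as follows. This is a NumPy toy with random queries and keys; the exact ordering of normalization and head pooling is a simplification of the text, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

def scm_similarity(Q, keys, tau=1.0, eps=1e-4):
    """Assignment matrix from the Student-t-style kernel of Eq. (1),
    with a max over the heads of each component's multi-head key."""
    # keys: (K, H, d) -> squared distances d2: (N, K, H)
    d2 = ((Q[:, None, None, :] - keys[None, :, :, :]) ** 2).sum(-1)
    sim = (eps + d2 / tau) ** (-(tau + 1) / 2)   # kernel of Eq. (1)
    sim = sim.max(axis=2)                        # best head per component
    C = sim / sim.sum(axis=1, keepdims=True)     # normalize over components j
    # Softmax along j gives the assignment matrix C in R^{N x K}.
    e = np.exp(C - C.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

N, K, H, d = 5, 3, 2, 8
Q = rng.normal(size=(N, d))        # stroke features from graph convolution
keys = rng.normal(size=(K, H, d))  # multi-head memory keys of the SCM module
C = scm_similarity(Q, keys)
print(C.shape, np.allclose(C.sum(axis=1), 1.0))  # (5, 3) True
```

Each row of C is a distribution over component types, which is what the fusion equations below consume.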
The enhanced features F_s are further fed into the Transformer for sketch recognition and segmentation. The other is component-level feature fusion, i.e., generating component features by fusing stroke features and memory keys:

F_c ∈ R^{K×d} = (1 − max_j(C)) ◦ (C^T ∗ Q) + max_j(C) ◦ K.  (3)

The component features F_c can be fed into the Transformer for sketch recognition along with the prediction of the existence of each type of semantic component. In Eqs. (2) and (3), Q ∈ R^{N×d} is the stroke features output by the dynamic graph convolution module, ◦ is the broadcasting multiply operation, and ∗ is matrix multiplication. The balance ratio max_j(C) means that if a stroke can be assigned to a semantic component with high confidence, the key feature of that semantic component is more representative and better suited for sketch recognition.
Supervision on SCM. The keys in MemGNN are learned without extra supervision (Khasahmadi et al. 2019). We believe it is better to ensure the keys' distinguishability, since keys represent different types of semantic components. Therefore, a linear classifier and a Cross-Entropy (CE) loss are applied in the SCM module:

L_1 = CE(f_{w1}(k_j), j).  (4)

If the semantic component label of each stroke is available, supervision on the assignment matrix C can be implemented by a balanced Binary Cross-Entropy (bBCE) loss:

L_2 = bBCE(C, C^{gt}) = γ_n Σ C^{gt}_{i,j} C_{i,j} + γ_p Σ (1 − C^{gt}_{i,j})(1 − C_{i,j}),  (5)

where the (i, j)-th value C^{gt}_{i,j} in C^{gt} ∈ R^{N×K} is 1 when the i-th stroke belongs to the j-th type of semantic component, and 0 otherwise. γ_n and γ_p denote the ratios of 0s and 1s in C^{gt}, respectively. The balance ratios γ_n and γ_p prevent C from being learned as all-zero, since only one of the K elements in each row of C^{gt} is 1.
Sketch-Level Recognition
The Transformer architecture of ViT (Dosovitskiy et al. 2020) is used for sketch-level recognition.
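The two fusion strategies of Eqs. (2) and (3), whose outputs F_s and F_c the Transformer consumes, can be sketched as follows. This is a NumPy toy with random inputs; in particular, reading max_j(C) in Eq. (3) as a per-component confidence (max over strokes) is our assumption, since the paper writes max_j(C) in both equations.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, d = 5, 3, 8
Q = rng.normal(size=(N, d))            # stroke features from graph convolution
Kmem = rng.normal(size=(K, d))         # most-similar head key per component
C = rng.random(size=(N, K))
C = C / C.sum(axis=1, keepdims=True)   # assignment matrix, rows sum to 1

# Eq. (2): stroke-level fusion, confidence m_i = max_j C_{i,j} per stroke.
m_s = C.max(axis=1, keepdims=True)                 # (N, 1)
Fs = (1 - m_s) * Q + m_s * (C @ Kmem)              # (N, d)

# Eq. (3): component-level fusion, with max_j(C) taken per component
# (max over strokes) so it broadcasts to (K, d).
m_c = C.max(axis=0)[:, None]                       # (K, 1)
Fc = (1 - m_c) * (C.T @ Q) + m_c * Kmem            # (K, d)

print(Fs.shape, Fc.shape)  # (5, 8) (3, 8)
```

F_s keeps one vector per stroke (variable N), while F_c always has exactly K vectors regardless of the stroke count, which is why the compositionality target y_e can have a fixed size.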
When taking the fused stroke-level features F_s as input, the Transformer outputs the category label and the semantic component label of each stroke. The classification (L_4) and stroke-level semantic segmentation (L_5) losses can be denoted as

L_3 = CE(f_{w2}(F_s), y_c) + λ_s CE(f_{w3}(F_s), y_s),  (6)

where the first term is L_4, the second term is L_5, y_c is the ground-truth category label, and y_s is the ground-truth semantic component label of strokes. When taking the fused component-level features F_c as input, the Transformer outputs the category label and the predicted probability of the existence of each semantic component in the sketch sample. The classification (L_4) and compositionality prediction (L_6) losses can be denoted as

L_3 = CE(f_{w2}(F_c), y_c) + λ_c bBCE(f_{w4}(F_c), y_e),  (7)

where the second term is L_6 and y_e indicates the existence or not of each type of semantic component: y_e^j = 1 when the sketch sample should contain the j-th type of semantic component, and y_e^j = 0 otherwise. The component-level features F_c have a fixed number of K feature vectors, no matter how many strokes the sketch sample contains. Therefore, y_e is sparse, and a balanced binary cross-entropy loss is used (denoted as L_6).
Losses
The overall loss can be calculated as

L = λ_1 L_1 + λ_2 L_2 + L_3.  (8)

L_1 ensures the distinguishability of the memory keys in SCM, and it does not need the semantic component labels of keys or strokes. L_2 works only when the dataset has the semantic component labels of strokes; if not, the memory keys are learned without direct supervision on the assignment matrix C. L_3 in Eq. (6) works for sketch recognition and segmentation. If the semantic component labels of strokes are unavailable but the prior information about which types of semantic components constitute each sketch category is known, L_3 in Eq. (7) can help the Transformer achieve better and explainable recognition performance.
Experiments
Datasets
The SPG dataset (Li et al. 2018) and the SketchIME dataset (Zhu et al.
2023) are used to verify the advantages of the proposed network. SPG was originally constructed for sketch perceptual grouping, and the same 20 categories as in SketchGNN (Yang et al. 2021) are used for evaluation. An average of 600 samples per category are used for training and 100 samples for testing. A total of 87 types of semantic components are defined according to the original labels in SPG to support our research. SketchIME is a systematic dataset comprising 374 specialized sketch categories, for which a total of 139 types of semantic components are defined. This study selects the 56K samples which have both category labels and semantic component labels of strokes from the released 209K samples. An average of 100 samples per sketch category are used for training and 50 samples for testing.

Available Labels          | SCMFeat        | w/ L2 | w/ L5 | w/ L6 | Acc@1 | C-Metric
C-Labels Only             | Fs as Eq. (2)  |       |       |       | 88.48 |
                          | Fs = C ∗ K     |       |       |       | 91.41 |
                          | Fs = Q         |       |       |       | 91.01 |
C-Labels and Prior Info   | Fc as Eq. (3)  |       |       |       | 92.02 |
                          | Fc = C⊤ ∗ Q    |       |       |       | 90.71 |
                          | Fc as Eq. (3)  |       |       | ✓     | 94.04 |
                          | Fc = C⊤ ∗ Q    |       |       | ✓     | 94.55 |
C-Labels and SC-Labels    | Fs as Eq. (2)  | ✓     | ✓     |       | 95.81 | 90.12
                          | Fs = C ∗ K     | ✓     | ✓     |       | 96.62 | 89.69
                          | Fs = Q         | ✓     | ✓     |       | 96.67 | 89.42

Table 1: The performance on the SPG dataset. "SCMFeat" denotes which kind of features the SCM module feeds into the Transformer. "C-Labels" means the category labels, and "SC-Labels" denotes the semantic component labels of strokes. "Prior Info" represents the prior information about which types of semantic components constitute each sketch category. The losses L1 and L4 are always used, but L2, L5 and L6 may not be used when different labels are available.

Evaluation Metrics
The Top-1 accuracy (Acc@1) is used as the evaluation metric for sketch recognition. SketchSegNet (Wu et al. 2018) and SketchGNN (Yang et al. 2021) used point-based accuracy and component-based accuracy for sketch segmentation.
Since the proposed SSR network predicts semantic segmentation labels directly on strokes, only the component-based accuracy (C-Metric), which indicates the percentage of correctly predicted strokes, is used as the evaluation metric for segmentation.
Network Details
In the stroke-level embedding module, a bidirectional LSTM layer takes a sequence of 4-dimensional stroke points as input and outputs a 768-dimensional shape embedding, a linear layer transforms a two-dimensional coordinate into a 768-dimensional location embedding, and the 768-dimensional order embedding is learned with PyTorch's nn.Embedding function. In the dynamic graph convolution module, the number of neurons in each convolution layer is 768. The same Transformer as ViT-Base (Dosovitskiy et al. 2020) is used for sketch-level recognition.
Training Details
The learning rate is initialized to 3 × 10−4 with a batch size of 128, and the Adam optimizer is used. A total of 200 epochs are run for each training. The τ in Eq. (1) is set to 1. The λ1 and λ2 in Eq. (8) are set to 1 and 20, respectively. The λs in Eq. (6) and the λc in Eq. (7) are set to 10 empirically. The SSR network is trained from scratch, except for the Transformer module, which is initialized with the pretrained ViT-Base model from HuggingFace (https://huggingface.co/). Our network is implemented in PyTorch and trained on a single NVIDIA RTX 3090.
Ablation Study
As aforementioned, the proposed SCM module and SSR network can deal with different cases with or without semantic component labels of strokes. As illustrated in Table 1, three cases which use different features and losses are evaluated. Firstly, when the category labels and semantic component labels of strokes are available, the supervision on the assignment matrix C (i.e., "w/ L2") and on the prediction of the semantic component labels of each stroke (i.e., "w/ L5") can be used. The multiple rows of the case "C-Labels and SC-Labels" in Table 1 illustrate the evaluation results.
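The two reported metrics, Acc@1 for recognition and C-Metric for segmentation, reduce to simple accuracy computations; here is a minimal NumPy sketch with toy inputs (not the evaluation code of the paper):

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Acc@1: percentage of samples whose argmax prediction matches the label."""
    return float((logits.argmax(axis=1) == labels).mean() * 100)

def c_metric(stroke_pred, stroke_gt):
    """C-Metric: percentage of strokes whose predicted semantic
    component type equals the ground truth."""
    stroke_pred = np.asarray(stroke_pred)
    stroke_gt = np.asarray(stroke_gt)
    return float((stroke_pred == stroke_gt).mean() * 100)

# Toy recognition scores for 4 sketches over 2 categories.
logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
labels = np.array([1, 0, 0, 0])
print(top1_accuracy(logits, labels))               # 75.0

# Toy per-stroke component predictions for one 5-stroke sketch.
print(c_metric([3, 1, 1, 2, 0], [3, 1, 2, 2, 0]))  # 80.0
```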
No matter which kind of features output by the SCM module is fed into the Transformer for recognition and segmentation, excellent performance is achieved compared with the cases without semantic component labels of strokes. "Fs = Q" means that the stroke features learned by the dynamic graph convolution module are fed into the Transformer, while "Fs = C ∗ K" means that the transformed memory keys are fed into the Transformer. Both cases achieve comparable performance. This means the learned memory keys can represent the semantic components effectively, although the memory keys are not calculated from the stroke features directly. "Fs = Q" does not mean the SCM module is excluded from the learning process: Q is still partially updated by gradient propagation from the supervision on the assignment matrix C. Secondly, when the semantic component labels of strokes are unavailable but the prior information about which types of semantic components constitute each sketch category is known, the prior information can still be used to enhance recognition performance. The multiple rows of the case "C-Labels and Prior Info" in Table 1 illustrate the evaluation results. In this case, the supervision on the existence of each type of semantic component given a sketch can be used (i.e., "w/ L6"). The stroke features cannot be fed into the Transformer directly, since the Transformer cannot be supervised on the semantic component prediction for strokes. Therefore, the component-level features transformed from the stroke features based on the assignment matrix C are fed into the Transformer. The four rows show that using this supervision improves recognition performance significantly (92.02% vs. 94.04% and 90.71% vs. 94.55%). It also makes the recognition explainable, since the Transformer can tell which types of semantic components are contained in each sketch sample, although it does not know to which type of semantic component each stroke belongs.
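Putting the objectives together, the overall loss of Eq. (8) can be assembled numerically as in the following toy NumPy sketch. All inputs are made up, the balanced BCE is written in the standard log form (the paper's exact balancing may differ), and the weights follow the Training Details (λ1 = 1, λ2 = 20, λs = 10).

```python
import numpy as np

def ce(logits, label):
    """Cross-entropy for one sample (logits: (C,), label: int)."""
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[label])

def bbce(pred, gt, gamma_n, gamma_p):
    """Balanced binary cross-entropy over a 0/1 target matrix, pred in (0, 1)."""
    eps = 1e-9
    pos = -(gt * np.log(pred + eps)).sum()
    neg = -((1 - gt) * np.log(1 - pred + eps)).sum()
    return float(gamma_n * pos + gamma_p * neg)

rng = np.random.default_rng(4)
L1 = ce(rng.normal(size=3), 0)                       # key classification, Eq. (4)
gt = np.eye(3)[[0, 1, 1, 2]]                         # 4 strokes, 3 component types
pred = np.clip(rng.random(size=(4, 3)), 0.05, 0.95)  # predicted assignment matrix
L2 = bbce(pred, gt, gamma_n=gt.size - gt.sum(), gamma_p=gt.sum()) / gt.size
L4 = ce(rng.normal(size=20), 2)                      # category classification
L5 = ce(rng.normal(size=3), 1)                       # per-stroke segmentation
L = 1 * L1 + 20 * L2 + (L4 + 10 * L5)                # Eq. (8) with Eq. (6) as L3
print(L > 0)  # True
```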
Networks                              | Acc@1 | C-Metric
ViT (Dosovitskiy et al. 2020)         | 76.21 |
BiGRU (Chung et al. 2014)             | 79.10 |
ResNet18 (Xu et al. 2022)             | 80.66 |
MGT (Xu, Joshi, and Bresson 2021)     | 91.05 |
SketchSegNet (Wu et al. 2018)         |       | 45.46
SketchGNN (Yang et al. 2021)          |       | 87.86
SketchRecSeg (Zhu et al. 2023)        | 97.47 | 91.65
SSR (Fs as Eq. (2))                   | 95.81 | 90.12
SSR (Fs = C ∗ K)                      | 96.62 | 89.69
SSR (Fs = Q)                          | 96.67 | 89.42

Table 2: Comparison with state-of-the-art methods on the SPG dataset. The proposed SSR network uses all the losses in Eq. (6) and Eq. (8).

Thirdly, when neither the semantic component labels of strokes nor the prior information is available, the proposed network can still be used as a typical recognition network, as illustrated in the case "C-Labels Only" in Table 1. Both the fused stroke-level features Fs (i.e., the three rows of the case "C-Labels Only") and the fused component-level features Fc (i.e., the top two rows of the case "C-Labels and Prior Info") can be fed into the Transformer, which then only predicts category labels. It is unsurprising that the performance is not as good, since the memory keys are hard to learn without any extra supervision. In conclusion, more supervision is expected to result in better performance, and the proposed method provides a flexible and explainable architecture for sketch recognition with different auxiliary information.
Comparison with State-of-the-Art
Table 2 gives the comparison results with the state-of-the-art methods on the SPG dataset. The proposed SSR network outperforms all the methods except SketchRecSeg (Zhu et al. 2023). This is because SketchRecSeg is a two-stream network which takes both image- and SVG-format data as input, while the proposed SSR network only uses the SVG-format data. Besides, SketchSegNet (Wu et al. 2018), SketchGNN (Yang et al. 2021) and SketchRecSeg (Zhu et al.
2023) all construct stroke point-level graphs and predict point-level segmentation labels, whereas the proposed SSR network uses a hierarchical and structural architecture in which stroke-level graphs are constructed and stroke-level predictions are performed. Furthermore, the proposed SSR network fulfills simultaneous recognition and segmentation with a one-stream architecture, while SketchRecSeg (Zhu et al. 2023) employs a two-stream architecture for recognition and segmentation, respectively. These factors demonstrate the superiority of the proposed SSR network. Table 3 gives the comparison results on the selected SketchIME dataset, which has 374 categories and 139 types of semantic components. The proposed SSR network still obtains superior performance, which demonstrates its applicability to large-scale datasets.

Networks                              | Acc@1 | C-Metric
ViT (Dosovitskiy et al. 2020)         | 22.02 |
ResNet18 (Xu et al. 2022)             | 89.01 |
MGT (Xu, Joshi, and Bresson 2021)     | 70.31 |
SketchSegNet (Wu et al. 2018)         |       | 61.78
SketchGNN (Yang et al. 2021)          |       | 94.01
SSR (Fs as Eq. (2))                   | 89.88 | 94.59
SSR (Fs = C ∗ K)                      | 87.92 | 94.43
SSR (Fs = Q)                          | 91.48 | 94.91

Table 3: Comparison with state-of-the-art methods on the SketchIME dataset. The proposed SSR network uses all the losses in Eq. (6) and Eq. (8).

Visualization
Figure 2 gives the visualization of semantic component features using t-SNE (Van der Maaten and Hinton 2008) and some sketch samples. It can be concluded from the feature visualization in Fig. 2(a) that the classification supervision on the memory keys of SCM ensures the distinguishability of the keys, and that the SCM module further enhances the distinguishability of the strokes in the feature space. The memory mechanism can store the features of semantic components in multi-head arrays, and outperforms mechanisms that use classifiers to recognize the strokes' labels directly or use Conditional Random Field (CRF) based methods (Yuan and Ji 2020) to learn the strokes' clustering relationships.
Figure 2(b) gives some sketch examples which have indistinguishable semantic components in the stroke feature space but are recognizable when considering the concurrence and layout of the semantic components. The proposed SSR network uses the SCM module to evolve stroke features in an explainable way, and uses the Transformer to recognize the category label and the semantic component labels of strokes (or the probabilities of the existence of each type of semantic component). This ensures the explainability of sketch recognition via semantic component-level parsing.

Figure 2: Visualization of semantic component features using t-SNE and some sketch samples. Fig. 2(a) shows the feature visualization of the 87 types of semantic components in SPG. Fig. 2(b) shows some sketch examples whose semantic components are partly indistinguishable in the feature space, but for which our SSR network can still do recognition and segmentation correctly.

Figure 3 displays the recognition and segmentation results of some wrongly recognized sketch samples. It can be seen from Fig. 3 that these sketches are wrongly recognized because their strokes are wrongly resolved. Eqs. (2) and (3) show that the features the SCM module feeds into the Transformer are calculated from the stroke features output by the dynamic graph convolution module in an explainable way. Therefore, the Transformer's predictions can be mapped back onto the original strokes. This exactly demonstrates the explainability of our sketch recognition network.

Figure 3: Examples of wrongly-recognized sketches. The numbers around the strokes are the ground-truth or predicted type indexes of semantic components.

Discussion
Activation map visualization techniques are not suitable for explaining sketch recognition at the stroke level. Counterfactual explanation based methods supply an alternative, but SketchXAI (Qu et al. 2023) only uses counterfactual explanation to explore the deserved layout of the strokes of a sketch, and Liu et al. use it to discover the stroke-level principal components for a specific category (Liu et al. 2023). They only partially answer the question of "why the sketch is classified as X". This study answers the question from the perspective of semantic component-level parsing. Humans generally describe an object using sentences about its components and attributes. If a consensus can be reached that a sketch is represented structurally by some types of semantic components and their layout, we can easily see the superiority of the proposed network, because the stroke-level embedding module can encode the layout of strokes, and the SCM and Transformer modules have the ability to resolve the semantic components. The proposed SSR network gains sketch recognition explainability in a more understandable and explainable way.
Conclusion
Deep learning based sketch recognition networks have achieved remarkable performance that even beats humans. However, humans can easily explain "why the sketch is classified as X", while sketch recognition networks lack interpretable reasons for their predictions. This study explores sketch recognition's explainability via semantic component-level parsing. A semantic component-level memory module is constructed, which can learn and store features of semantic components in multi-head arrays and parse the strokes at the component level. A structured sketch recognition network is proposed. The network gives the explanation "The sketch is recognized as X because it is composed of the semantic components which constitute X".
Acknowledgments
This work is partially supported by the National Natural Science Foundation of China under Grant No. 62073252 and No. 62072358.
References
Alaniz, S.; Mancini, M.; Dutta, A.; Marcos, D.; and Akata, Z. 2022. Abstracting sketches through simple primitives. In ECCV, 396–412.
Chattopadhay, A.; Sarkar, A.; Howlader, P.; and Balasubramanian, V. N. 2018. Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks. In WACV, 839–847.
Chen, C.; Li, O.; Tao, D.; Barnett, A.; Rudin, C.; and Su, J. K. 2019. This looks like that: deep learning for interpretable image recognition. In NeurIPS, volume 32.
Chung, J.; Gulcehre, C.; Cho, K.; and Bengio, Y. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR.
Garau, N.; Bisagno, N.; Sambugaro, Z.; and Conci, N. 2022. Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks. In CVPR, 13689–13698.
Ge, Y.; Xiao, Y.; Xu, Z.; Zheng, M.; Karanam, S.; Chen, T.; Itti, L.; and Wu, Z. 2021. A peek into the reasoning of neural networks: Interpreting with structural visual concepts. In CVPR, 2195–2204.
Ha, D.; and Eck, D. 2017. A neural representation of sketch drawings. arXiv preprint arXiv:1704.03477.
Ha, D.; and Eck, D. 2018. A Neural Representation of Sketch Drawings. In ICLR.
Khasahmadi, A. H.; Hassani, K.; Moradi, P.; Lee, L.; and Morris, Q. 2019. Memory-Based Graph Networks. In ICLR.
Li, G.; Muller, M.; Thabet, A.; and Ghanem, B. 2019. DeepGCNs: Can GCNs go as deep as CNNs? In ICCV, 9267–9276.
Li, K.; Pang, K.; Song, J.; Song, Y.-Z.; Xiang, T.; Hospedales, T. M.; and Zhang, H. 2018. Universal sketch perceptual grouping. In ECCV, 582–597.
Li, L.; Zou, C.; Zheng, Y.; Su, Q.; Fu, H.; and Tai, C.-L. 2020. Sketch-R2CNN: an RNN-rasterization-CNN architecture for vector sketch recognition. IEEE TVCG, 27(9): 3745–3754.
Liu, S.; Li, J.; Zhang, H.; Xu, L.; and Cao, X. 2023. Prediction with Visual Evidence: Sketch Classification Explanation via Stroke-Level Attributions. IEEE TIP.
Miller, T. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267: 1–38.
Omeiza, D.; Speakman, S.; Cintas, C.; and Weldermariam, K. 2019. Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models. arXiv preprint arXiv:1908.01224.
Prabhu, A.; Batchu, V.; Munagala, S. A.; Gajawada, R.; and Namboodiri, A. 2018. Distribution-aware binarization of neural networks for sketch recognition. In WACV, 830–838.
Qu, Z.; Gryaditskaya, Y.; Li, K.; Pang, K.; Xiang, T.; and Song, Y.-Z. 2023. SketchXAI: A First Look at Explainability for Human Sketches. In CVPR, 23327–23337.
Ramaswamy, H. G.; et al. 2020. Ablation-CAM: Visual explanations for deep convolutional network via gradient-free localization. In WACV, 983–991.
Sarvadevabhatla, R. K.; and Kundu, J. 2016. Enabling my robot to play pictionary: Recurrent neural networks for sketch recognition. In ACM MM, 247–251.
Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In ICCV, 618–626.
Shitole, V.; Li, F.; Kahng, M.; Tadepalli, P.; and Fern, A. 2021. One explanation is not enough: structured attention graphs for image classification. In NeurIPS, volume 34, 11352–11363.
Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. JMLR, 9(11).
Van Looveren, A.; and Klaise, J. 2021. Interpretable counterfactual explanations guided by prototypes. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 650–665.
Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S. E.; Bronstein, M. M.; and Solomon, J. M. 2019. Dynamic graph CNN for learning on point clouds. ACM TOG, 38(5): 1–12.
Wu, X.; Qi, Y.; Liu, J.; and Yang, J. 2018. SketchSegNet: A RNN model for labeling sketch strokes. In MLSP, 1–6.
Xu, P.; Hospedales, T. M.; Yin, Q.; Song, Y.-Z.; Xiang, T.; and Wang, L. 2022. Deep learning for free-hand sketch: A survey. IEEE TPAMI, 45(1): 285–312.
Xu, P.; Huang, Y.; Yuan, T.; Pang, K.; Song, Y.-Z.; Xiang, T.; Hospedales, T. M.; Ma, Z.; and Guo, J. 2018. SketchMate: Deep hashing for million-scale human sketch retrieval. In CVPR, 8090–8098.
Xu, P.; Joshi, C. K.; and Bresson, X. 2021. Multi-graph transformer for free-hand sketch recognition. IEEE TNNLS, 33(10): 5150–5161.
Yang, L.; Zhuang, J.; Fu, H.; Wei, X.; Zhou, K.; and Zheng, Y. 2021. SketchGNN: Semantic sketch segmentation with graph neural networks. ACM TOG, 40(3): 1–13.
Yu, Q.; Yang, Y.; Liu, F.; Song, Y.-Z.; Xiang, T.; and Hospedales, T. M. 2017. Sketch-a-Net: A deep neural network that beats humans. IJCV, 122: 411–425.
Yuan, H.; and Ji, S. 2020. StructPool: Structured graph pooling via conditional random fields. In ICLR.
Zhang, Q.; Cao, R.; Shi, F.; Wu, Y. N.; and Zhu, S.-C. 2018. Interpreting CNN knowledge via an explanatory graph. In AAAI, 4454–4463.
Zhang, X.; Li, X.; Liu, Y.; and Feng, F. 2019. A survey on freehand sketch recognition and retrieval. IMAVIS, 89: 67–87.
Zhu, G.; Wang, S.; Cheng, Q.; Wu, K.; Li, H.; and Zhang, L. 2023. Sketch Input Method Editor: A Comprehensive Dataset and Methodology for Systematic Input Recognition. In ACM MM.
Relevant Intrinsic Feature Enhancement Network for Few-Shot Semantic Segmentation
Xiaoyi Bao,*1,2,3 Jie Qin,*1,2 Siyang Sun,3 Xingang Wang,2† Yun Zheng3
1School of Artificial Intelligence, University of Chinese Academy of Sciences
2Institute of Automation, Chinese Academy of Sciences
3Alibaba Group
{baoxiaoyi2021, qinjie2019}@ia.ac.cn, siyang.ssy@alibaba-inc.com, xingang.wang@ia.ac.cn, zhengyun.zy@alibaba-inc.com
Abstract
For few-shot semantic segmentation, the primary task is to extract class-specific intrinsic information from limited labeled data. However, the semantic ambiguity and inter-class similarity of previous methods limit the accuracy of pixel-level foreground-background classification. To alleviate these issues, we propose the Relevant Intrinsic Feature Enhancement Network (RiFeNet). To improve the semantic consistency of foreground instances, we propose an unlabeled branch as an efficient data utilization method, which teaches the model how to extract intrinsic features robust to intra-class differences. Notably, during testing, the proposed unlabeled branch is excluded without extra unlabeled data and computation. Furthermore, we extend the inter-class variability between foreground and background by proposing a novel multi-level prototype generation and interaction module. The different-grained complementarity between global and local prototypes allows for better distinction between similar categories. The qualitative and quantitative performance of RiFeNet surpasses the state-of-the-art methods on the PASCAL-5i and COCO benchmarks.
1 Introduction
As a fundamental and crucial task in the field of computer vision, semantic segmentation is widely applied in visual tasks such as medical image understanding, industrial defect inspection, virtual reality games, autonomous driving, etc. With the development of convolutional neural networks, fully-supervised semantic segmentation has achieved remarkable success (Long, Shelhamer, and Darrell 2015).
Subsequent transformer-based methods also greatly improve the segmentation performance (Zhu et al. 2020; Xie et al. 2021; Yuan et al. 2021). However, it is still arduous to acquire pixel-level annotations, which require a huge amount of manual effort and cost. To alleviate such expensive and unwieldy annotation, many works resort to the few-shot semantic segmentation paradigm. In this setting, the model learns to segment with a few limited labeled data as the support set and is then transferred to test the query inputs.

*These authors contributed equally. †Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 765

Figure 1: Comparison of RiFeNet and other works. (a) Two foreground-related issues that limit the effectiveness of previous research. (b) Schedule of previous prototype-based methods. The two-branch structure uses a single global prototype as the only medium for information interaction. (c) Framework of RiFeNet. An additional unlabeled branch is used for feature mining during training, with multi-granularity prototypes extracted from different branches.

Previous approaches (Shaban et al. 2017; Liu et al. 2020) focus on making full use of limited support data. The training data is divided into a support set and a query set, which are processed using two different branches. Relationship prompting and information interaction between the two branches are achieved by proposing superior matching strategies (Kang and Cho 2022) or generating representative prototypes (Tian et al. 2020; Wang et al. 2019; Zhang et al. 2019a, 2020). The former class of methods is represented by HSNet (Min, Kang, and Cho 2021), which usually uses a cost volume method to aggregate dense correlation scores in high-dimensional space by 4D convolution. Subsequent research improves the network using a Swin transformer (Hong et al.
2022) or a doubly deformable transformer (Xiong, Li, and Zhu 2022) in 4-dimensional space. With much lower computational costs, prototype-based methods also achieve good segmentation results. As illustrated in Fig. 1 (a), single or multiple prototypes integrating semantic class-specific features are extracted from the support branch and used to activate query features. Researchers have proposed different activation methods in PPNet (Liu et al. 2020), PFENet (Tian et al. 2020), CyCTR (Zhang et al. 2021), etc., to fully exploit the limited category features. The object to be segmented is called the foreground, while the background refers to the rest of the stuff and things. Despite the prevailing success of these methods, two main issues still limit their segmentation quality, i.e., semantic ambiguity and inter-class similarity, as shown in Fig. 1 (a). For the foreground itself, semantic ambiguity appears across different instances of the same class: the intra-class differences between the two branches lead to semantic mistakes on query images. Beyond that, when distinguishing foreground from background, inter-class similarity manifests as difficulty in pixel-level binary classification. When objects of different classes with similar textures appear simultaneously, the local features of foreground and background become confusing. The semantic ambiguity of the foreground is caused by the poor intra-class generalization of the model. Previous models mainly explore how to match the support set and the query set. With little labeled data in few-shot tasks, their models are prone to extracting semantically ambiguous shallow features such as shape and pose, and are thus unable to learn robust class representations for highly diverse query appearances. This distribution discrepancy makes the model identify the semantic parts of the foreground objects incompletely and segment them inappropriately.
As for the inter-class similarity between foreground and background, the lack of discriminative features for query data is to blame. Specifically, previous methods extract global prototypes or prototypes with a single level of granularity. The information they carry is inadequate and monolithic, leading to pixel misidentification. To address the above problems, we propose a novel relevant intrinsic feature enhancement network, as shown in Fig. 1 (c). For foreground objects, we enhance the semantic consistency of the same class, thus improving the intra-class generalization of features. We incorporate a novel unlabeled branch into the support-query pair branches to teach the model how to mine the intrinsic robustness behind appearance variability. The additional unlabeled data branch serves as a regularization term during training that enhances the semantic consistency of few-shot segmentation models. We go one step further in achieving an overall enhancement of the distinction between foreground and background. A novel multi-level prototype generation and interaction module is proposed to ensure inter-class variability. The multi-level prototype features contain a global prototype from the support data and a local prototype from the query data. The former represents the high-level semantic abstraction of the entire structure of the target category, while the latter captures fine-grained appearance concepts, providing details for category discrimination. The global and local prototype interaction can complement each other to ensure that the network extracts the corresponding category features. This multi-level prototype feature interaction module can greatly widen the inter-class differences, benefiting the identification between foreground and background. Our contributions can be summarized as follows:
• We propose a relevant intrinsic feature enhancement network.
By alleviating semantic ambiguity and inter-class similarity, our model improves the foreground segmentation performance in this few-shot task.
• To maintain foreground semantic consistency, we propose an unlabeled branch as an efficient data utilization method, which teaches the model how to extract intrinsic features robust to intra-class differences.
• To further achieve an overall effective distinction between foreground and background objects, we propose a novel multi-level prototype generation and interaction module to extend the inter-class variability. With different-grained semantic information from different sources, the mutual complementarity of information is facilitated and the discriminative representation ability for confusing foreground and background objects is boosted.
• Extensive experiments demonstrate the effectiveness of our proposed method, which achieves state-of-the-art accuracy on both PASCAL-5i and COCO benchmarks.

2 Related Work

2.1 Semantic Segmentation

Rapid progress has been made on semantic segmentation tasks since fully convolutional networks (FCN) transformed them into pixel-level classification (Long, Shelhamer, and Darrell 2015). After that, the corresponding encoder-decoder architecture has been widely used (Qin et al. 2022b,a, 2023). Recent research mainly focuses on multi-scale feature fusion (Zhao et al. 2017; Chen et al. 2018; Yang et al. 2018; He et al. 2019), insertion of attention modules (Fu et al. 2019; Yuan et al. 2018; Li et al. 2019; Tao, Sapra, and Catanzaro 2020; Zhu et al. 2019; Zhang et al. 2019b), and context priors (Lin et al. 2017; Zhang et al. 2018; Yu et al. 2020; Jin et al. 2021). Inspired by the success of the vision transformer (Dosovitskiy et al. 2020), researchers have attempted to apply the transformer structure to segmentation (Yuan et al. 2021; Wang et al. 2021; Lee et al. 2022; Xie et al. 2021; Strudel et al. 2021). These works perform well under the task setting of semantic segmentation.
However, in practical application scenarios, they are unable to cope with sparse training data and thus often fail on categories that they have not seen during the training process.

2.2 Few-Shot Segmentation

A two-branch structure is widely used in few-shot segmentation, i.e., a support branch and a query branch.

Figure 2: Overall architecture of RiFeNet. An additional branch using unlabeled inputs is attached to the traditional two-branch structure, as shown in green. The forward process of RiFeNet consists of three main blocks. The multi-level prototype generation block generates global and local prototypes. The multi-level prototype interaction module allows for interaction and integration between prototypes. A feature activation block is used consequently for obtaining final segmentation results.

Mainstream few-shot methods are divided into two categories: prototype extraction and spatial correlation. With the development of image matching, spatial correlation-based methods have regained attention in few-shot segmentation. HSNet (Min, Kang, and Cho 2021) applies 4D convolution to 4D correlation tensors in a pyramid structure, and the following ASNet (Kang and Cho 2022) squeezes semantic correlation into the foreground map. VAT (Hong et al. 2022) and DACM (Xiong, Li, and Zhu 2022) use cost volume aggregation-based models with 4D transformers. The above methods benefit from the retained spatial structure; however, their high computational complexity and excessive number of parameters make training slow and generalization difficult. Since PL (Dong and Xing 2018) introduced prototype learning to few-shot segmentation, most of the following research focuses on prototype-based methods (Zhang et al. 2019a; Wang et al. 2019). SG-ONE (Zhang et al. 2020) proposes to use masked average pooling to extract a prototype from support features carrying the category information.
Dense matching computation is performed between query features and this prototype. Tian et al. propose PFENet (Tian et al. 2020), adding a training-free prior mask to the feature enrichment process. Multiple prototypes are generated in PPNet (Liu et al. 2020), ASGNet (Li et al. 2021), and RPMMs (Yang et al. 2020) to include more local information with different generating methods, although the use of multiple prototypes in these networks does not bring them more competitive results. IPMT (Liu et al. 2022) proposes an intermediate prototype and uses a transformer to update the prototype iteratively. SSP (Fan et al. 2022) also proposes the idea of the self-support prototype, but its single spatial-agnostic prototype is generated for self-matching. Although also generated from the query branch, our multiple local prototypes carry discriminative and comprehensive spatial information that helps to provide details for intra-class discrimination.

2.3 Unlabeled Data Usage in Few-Shot Segmentation

Very few researchers have explored leveraging unlabeled data in few-shot tasks. PPNet (Liu et al. 2020) is one such case, incorporating 100 unlabeled images into every support input with the use of a graph neural network (GNN). Based on it, Kim et al. propose an uncertainty-aware model to make more adequate use of unlabeled data without graph neural networks (Kim et al. 2023). However, the above two methods also require additional unlabeled data of the current novel class, which is inconsistent with the original few-shot task setting but relevant to the semi-supervised paradigm. Moreover, the number of unlabeled images used in every meta-training process far exceeds that of support and query images. In contrast, our work removes the dependence of the testing process on unlabeled data. During training, a tiny amount of unlabeled data is used to constrain foreground semantic similarity.
3 Method

3.1 Problem Definition

The few-shot semantic segmentation task is defined as the segmentation of novel-class objects based on a very small number of images with pixel-level annotations. According to this definition, the training dataset Dtrain and the testing dataset Dtest have no overlap in their categories. In each meta-training process of the K-shot setting, conventional methods randomly sample K + 1 image pairs of the same class j: {(I_i, Y_i), i ∈ {1, 2, ..., K + 1} | (I_i, Y_i) ∈ Dtrain, C_i = j}. I_i, Y_i, and C_i refer to the RGB image, its segmentation mask, and the class of the segmented object, respectively. Among them, K pairs are sent to the support branch, and the last is used as the query input and ground truth. The difference in our network is that we take K + 1 + M pairs of images from Dtrain at a time, and the images I_i of the extra M pairs are used as input to the unlabeled branch. In each testing process, K + 1 pairs from the j-th novel class {(I_i, Y_i), i ∈ {1, 2, ..., K + 1} | (I_i, Y_i) ∈ Dtest, C_i = j} are sampled and assigned to the support and query branches as before.

3.2 Relevant Intrinsic Feature Enhancement

As a pixel-level binary classification task, semantic segmentation needs to identify the foreground, i.e., the object to be segmented, and the background, i.e., the rest of the stuff and things. To alleviate the semantic ambiguity and inter-class similarity in Fig. 1, we propose this relevant intrinsic feature enhancement network. We present the complete framework under the 1-shot setting in Fig. 2. The proposed RiFeNet consists of three branches with a shared backbone. The extra unlabeled branch helps the traditional support-query framework learn how to assure semantic consistency. The forward process consists of three main modules: the multi-level prototype generation module, the multi-level prototype interaction module, and the feature activation module.
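The K + 1 + M episodic sampling of Sec. 3.1 can be sketched as follows. This is a minimal illustration, not the authors' code; representing the dataset as a list of (image, mask, class_id) triples is an assumed convention for the sketch.

```python
import random
from collections import defaultdict

def sample_episode(dataset, k_shot=1, m_unlabeled=2, rng=None):
    """Sample one K-shot meta-training episode: K support pairs, one
    query pair, and M extra same-class images for the unlabeled branch.
    `dataset` is a list of (image, mask, class_id) triples (assumed format)."""
    rng = rng or random.Random()
    by_class = defaultdict(list)
    for item in dataset:
        by_class[item[2]].append(item)
    # only classes with enough samples can form a full episode
    eligible = sorted(c for c, items in by_class.items()
                      if len(items) >= k_shot + 1 + m_unlabeled)
    cls = rng.choice(eligible)
    picks = rng.sample(by_class[cls], k_shot + 1 + m_unlabeled)
    support = picks[:k_shot]                  # K (image, mask, class) triples
    query = picks[k_shot]                     # one query triple
    unlabeled = [img for img, _, _ in picks[k_shot + 1:]]  # masks dropped
    return support, query, unlabeled
```

At test time the same routine would be run with `m_unlabeled=0` over Dtest, matching the paper's setting in which the unlabeled branch is excluded during evaluation.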
The first two modules provide multi-grained evidence for better inter-class discrimination. The feature activation module activates pixels containing objects of the target class and deactivates the others, providing the final segmentation result.

3.3 Feature Enhancement with Unlabeled Data

There are large intra-class distributional differences in the foreground objects that need to be segmented. Therefore, the training goal is to ensure the semantic consistency of features extracted from different instances of the foreground category, rather than focusing only on semantically ambiguous appearance features. To this end, we introduce an auxiliary unlabeled branch as an effective data utilization method to aid model learning. Without adding training data, we re-sample a subset of training samples as unlabeled data, with the same segmentation loss applied. By augmenting sample diversity, it teaches the model to avoid learning sample-specific biases of labeled inputs, even in the absence of unlabeled data during testing. Our unlabeled branch shares parameters with the query branch to align features and enhance relevance. It is trained with pseudo-labels generated from each other. Specifically, the initial unlabeled input goes through two versions of data augmentation, different in their kinds and intensities. We use I_u and Î_u to refer to the weakly and strongly transformed unlabeled input. Both are sent to the backbone simultaneously with an identical forward process, thus we use symbols without a hat in all subsequent equations for simplicity.

F_u^m = ReLU(Conv(CAT(F^1(I_u), F^2(I_u)))). (1)

F_u^m refers to the merged unlabeled feature maps. For the sake of simplicity, we omit the above processing in Fig. 2.

Figure 3: Visual illustration of the multi-level prototype generation block that extracts global and local prototypes.

Figure 4: Schedule of the multi-level prototype interaction merging relevant intrinsic information for enhancement.

The support and query branches are handled in the same way, and all parameters in Eq. 1 are shared with the query branch. F^j(I_u) corresponds to the feature map from the j-th backbone layer. As for the prototype generation block, if we follow the query paradigm to generate local prototypes from the unlabeled branch, the self-supervised training may go off-target in a few extreme cases: in complex environments with multiple objects, the segmented class in the generated pseudo mask may be inconsistent. This off-track pseudo ground truth may weaken the expressive capability of the model. We therefore introduce query prototypes here to provide a class-specific prior. Only intrinsic features with intra-class relevance are extracted when processing unlabeled data. The subsequent prototype interaction is the same as for the query branch. Augmented unlabeled features are obtained and sent to the activation block for prediction. S_u and Ŷ_u represent the predicted results of the weakly and strongly transformed inputs, respectively. The former output S_u serves as a pseudo label for the latter one. The loss of this self-supervised process is calculated as follows, with dice loss (Milletari, Navab, and Ahmadi 2016) as its training objective.

L_unlabel = DICE(S_u, Ŷ_u). (2)

3.4 Multi-Level Prototype Processing

While the unlabeled branch ensures intra-class semantic consistency of the foreground, the background noise it introduces may worsen the existing inter-class similarity between foreground and background. Therefore, we propose the multi-level prototype generation and interaction blocks, which aim to capture representative information at different granularities and augment the discrimination of foreground and background in confusing areas.

Global Support Prototype Generation. To capture category information at the high-dimensional category level, we extract global prototypes from support features.
As shown in Fig. 3, P_s ∈ R^{1×1×C} is obtained through masked average pooling on the global feature, where w and h are the width and height of F_s^m and ⊙ denotes element-wise multiplication:

P_s = Σ_{x=1,y=1}^{w,h} (F_s^m ⊙ Y_s)_{x,y} / Σ_{x=1,y=1}^{w,h} Y_s^{x,y}. (3)

Local Query Prototype Generation. In some cases, the similarities between different classes may be more significant than the differences within classes (Fan et al. 2022). Meanwhile, the prototypes obtained by global average pooling completely discard spatial structure information, making the decoupling of local components unrealistic. To provide discriminative details, we additionally extract local prototypes from the query branch. These prototypes facilitate binary classification by prompting fine-grained information. Fig. 3 illustrates their generation process.

P_q^* = Pooling_avg(F_q^m ⊙ M(F^3(I_q))). (4)

M(F^3(I_q)) is the prior mask proposed in PFENet (Tian et al. 2020), a similarity measure between high-level feature maps of the support and query branches. Local average pooling is applied to the product of the prior mask and the query feature maps. To squeeze the channels and enhance the representation of local prototypes, a 1 × 1 convolution and channel-wise attention are used. The refined local prototypes P_q ∈ R^{m×m×C'} from the query branch itself are obtained as follows, where Att_chn refers to channel-wise attention:

P_q = Att_chn(Conv_{1×1}(P_q^*)). (5)

For the generation of prior masks, the fixed high-level feature maps F_s^4 and F_q^4 are more suitable. But when generating local query prototypes, shallow local appearance information counts a lot. Given the need for pixel-level correspondence, we use the feature map F_q^1 from the first layer to extract locally consistent appearance information in the local area, as shown in Fig. 3.

Multi-Level Prototype Interaction. To emphasize the intrinsic class information, we build interactions between different-grained prototypes, thus enhancing the feature-mining ability for identification.
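Masked average pooling (Eq. 3) and grid-pooled local prototypes (Eq. 4) can be sketched in NumPy as below. This is a simplified sketch under assumed shapes, (H, W, C) feature arrays and a binary (H, W) mask; the learned 1 × 1 convolution and channel-wise attention of Eq. 5 are omitted, so it is not the authors' implementation.

```python
import numpy as np

def masked_average_pooling(feat, mask):
    """Global support prototype (Eq. 3): average the feature map over
    foreground pixels only. feat: (H, W, C); mask: (H, W) binary."""
    weighted = feat * mask[..., None]        # element-wise product
    return weighted.sum(axis=(0, 1)) / (mask.sum() + 1e-8)

def local_prototypes(feat, prior, m=4):
    """Local query prototypes (Eq. 4, simplified): average-pool the
    prior-weighted query features into an m x m grid of prototypes.
    The Conv1x1 + channel attention refinement of Eq. 5 is omitted."""
    H, W, C = feat.shape
    assert H % m == 0 and W % m == 0, "grid must divide the feature map"
    weighted = feat * prior[..., None]
    # split into m x m spatial blocks and average within each block
    return weighted.reshape(m, H // m, m, W // m, C).mean(axis=(1, 3))
```

With m = 4 as in the paper's main experiments, this yields 16 local prototypes that keep a coarse spatial layout, unlike the single global vector of Eq. 3.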
Fig. 4 shows the detailed steps of our interaction block. The generated global and local prototypes are expanded to the size of the feature maps and then concatenated with the query features and the prior mask. After a 1 × 1 convolution with activation, the augmented query features F_q^* are obtained:

F_q^* = ReLU(Conv(CAT(P_s, P_q, F_q^m, M(F^3(I_q))))). (6)

3.5 Feature Activation

For deeper information interaction between the query and the support branch, we use an n-layer transformer encoder for feature activation. In this process, information with relevance is activated, enhancing the total feature map. The augmented query features F_q^* are processed by self-attention and then cross-attention with the support features F_s^*. Each block is made up of multi-head attention and a feed-forward network. The setting of multi-head attention for cross-attention follows the one in CyCTR (Zhang et al. 2021), with the segmentation mask of the support image Y_s used to maintain cycle consistency.

F_q^final = Att_cross(Att_self(F_q^*), F_s^*, Y_s). (7)

Att_cross and Att_self represent the cross-attention and self-attention blocks, respectively. The output of the transformer F_q^final is resized and passed to the classification head to obtain the final pixel-by-pixel segmentation result:

Y_q = CLS(F_q^final + ReLU(Conv(CAT(F_q^final)))). (8)

Y_q represents the predicted result of the query input and CLS refers to the segmentation head. The loss of RiFeNet is calculated between the predictions of query inputs and their corresponding ground truth. We choose dice loss as the loss function, thus the loss for each meta-learning task can be represented as:

L_main = DICE(Ŷ_q, Y_q) + L_aux, (9)

where Ŷ_q and Y_q are the prediction and ground truth of query images. L_aux refers to an auxiliary loss attached for feature alignment, following the common setting (Milletari, Navab, and Ahmadi 2016).

L_final = L_main + β · L_unlabel. (10)

As the above equation shows, the final loss is a weighted sum of the main loss and the self-supervised loss L_unlabel. The weight β is set to 0.5 empirically.

4 Experiments

We conduct experiments on the PASCAL-5i and COCO datasets. The primary evaluation metric is mean intersection-over-union (mIoU). The foreground-background IoU (FB-IoU) is used as an auxiliary indicator.

4.1 Process of Training and Testing

The ratio of support : query : unlabeled images is 1 : 1 : 2 or 5 : 1 : 2 for each meta-training task in the one-shot and five-shot settings, respectively. The unlabeled branch input comes from the same training dataset: we resample M extra images of the same class without mask ground truth, rather than adding another dataset. For a fair comparison, we exclude the unlabeled images during testing. Pre-trained ResNet50 and ResNet101 (He et al. 2016) are used as the backbone of the model, frozen without parameter updates. The baseline method refers to CyCTR. Local query prototypes are generated by the multi-level prototype generation block. Concatenated with one global support prototype, m^2 local query prototypes, and the prior mask, the query features are sent to the transformer after a merging and flattening operation. We set m = 4 in the main experiments. All data in the following tables are the average results of five experiments with the same settings. Specific implementation details are presented in the Appendix.
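The training objective (Eqs. 2, 9, and 10) can be sketched as follows; a minimal NumPy version assuming soft binary masks in [0, 1], with the auxiliary alignment loss passed in as a precomputed scalar rather than derived from intermediate features.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft dice loss: 1 - 2|P * T| / (|P| + |T|), with eps for stability."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def rifenet_loss(query_pred, query_gt, strong_pred, weak_pseudo,
                 l_aux=0.0, beta=0.5):
    """L_final = L_main + beta * L_unlabel (Eqs. 9-10). The weak-branch
    prediction serves as pseudo label for the strong branch (Eq. 2)."""
    l_main = dice_loss(query_pred, query_gt) + l_aux      # Eq. 9
    l_unlabel = dice_loss(strong_pred, weak_pseudo)       # Eq. 2
    return l_main + beta * l_unlabel                      # Eq. 10
```

Setting beta = 0.5 matches the empirical choice reported above; a perfect prediction drives both dice terms to zero.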
ResNet-50 backbone:
Method         | 1-shot: s0   s1   s2   s3   mean | 5-shot: s0   s1   s2   s3   mean | #learnable params
PPNet 2020     | 52.7 62.8 57.4 47.7 55.2 | 60.3 70.0 69.4 60.7 65.1 | 31.5M
PMM 2020       | 52.0 67.5 51.5 49.8 55.2 | 55.0 68.2 52.9 51.1 56.8 | -
PFENet 2020    | 61.7 69.5 55.4 56.3 60.8 | 63.1 70.7 55.8 57.9 61.9 | 10.8M
CyCTR 2021     | 65.7 71.0 59.5 59.7 64.0 | 69.3 73.5 63.8 63.5 67.5 | 7.4M
HSNet 2020     | 64.3 70.7 60.3 60.5 64.0 | 70.3 73.2 67.4 67.1 69.5 | 2.6M
ASGNet 2021    | 58.8 67.9 56.8 53.7 59.3 | 63.7 70.6 64.1 57.4 63.9 | 10.4M
SSP 2022       | 61.4 67.2 65.4 49.7 60.9 | 68.0 72.0 74.8 60.2 68.8 | -
DCAMA 2022     | 67.5 72.3 59.6 59.0 64.6 | 70.5 73.9 63.7 65.8 68.5 | -
RiFeNet (Ours) | 68.4 73.5 67.1 59.4 67.1 | 70.0 74.7 69.4 64.2 69.6 | 7.7M

ResNet-101 backbone:
DAN 2020       | 54.7 68.6 57.8 51.6 58.2 | 57.9 69.0 60.1 54.9 60.5 | -
PMM 2020       | 54.7 68.6 57.8 51.6 58.2 | 57.9 69.0 60.1 54.9 60.5 | -
PFENet 2020    | 60.5 69.4 54.4 55.9 60.1 | 62.8 70.4 54.9 57.6 61.4 | 10.8M
CyCTR 2021     | 67.2 71.1 57.6 59.0 63.7 | 71.0 75.0 58.5 65.0 67.4 | 7.4M
HSNet 2020     | 67.3 72.3 62.0 63.1 66.2 | 71.8 74.4 67.0 68.3 70.4 | 2.6M
ASGNet 2021    | 59.8 67.4 55.6 54.4 59.3 | 64.6 71.3 64.2 57.3 64.4 | 10.4M
SSP 2022       | 63.7 70.1 66.7 55.4 64.0 | 70.3 76.3 77.8 65.5 72.5 | -
DCAMA 2022     | 65.4 71.4 63.2 58.3 64.6 | 70.7 73.7 66.8 61.9 68.3 | -
RiFeNet (Ours) | 68.9 73.8 66.2 60.3 67.3 | 70.4 74.5 68.3 63.4 69.2 | 7.7M

Table 1: Performance comparison on PASCAL-5i in terms of mIoU (%). Columns s0-s3 are the four splits; "#learnable params" is the number of learnable parameters.
Method         | 1-shot: s0   s1   s2   s3   mean | 5-shot: s0   s1   s2   s3   mean
PPNet 2020     | 28.1 30.8 29.5 27.7 29.0 | 39.0 40.8 37.1 37.3 38.5
PMM 2020       | 29.3 34.8 27.1 27.3 29.6 | 33.0 40.6 30.3 33.3 34.3
RPMMs 2020     | 29.5 36.8 28.9 27.0 30.6 | 33.8 42.0 33.0 33.3 35.5
CyCTR 2021     | 38.9 43.0 39.6 39.8 40.3 | 41.1 48.9 45.2 47.0 45.6
HSNet 2020     | 36.3 43.1 38.7 38.7 39.2 | 43.3 51.3 48.2 45.0 46.9
SSP 2022       | 46.4 35.2 27.3 25.4 33.6 | 53.8 41.5 36.0 33.7 41.3
DCAMA 2022     | 41.9 45.1 44.4 41.7 43.3 | 45.9 50.5 50.7 46.0 48.3
RiFeNet (Ours) | 39.1 47.2 44.6 45.4 44.1 | 44.3 52.4 49.3 48.4 48.6

Table 2: Performance comparison on COCO in terms of mIoU (%). All methods use a ResNet-50 backbone.

4.2 Comparison with State-of-the-Arts

We compare the performance of RiFeNet with other classical or effective methods on PASCAL-5i, as shown in Tab. 1 and Tab. A of the Appendix. RiFeNet outperforms the best method under most experimental scenarios. RiFeNet outperforms CyCTR by about 3.5% under the 1-shot setting and about 2% under the 5-shot one. Compared with the existing state-of-the-art DCAMA in the 1-shot setting with a ResNet50 backbone, it surpasses it by 2.5%, rising to 2.7% with ResNet101. As for the larger gain in the 1-shot than in the 5-shot setting, we attribute this to the decreasing impact weight due to the constant amount of unlabeled data, as the ratio of unlabeled to labeled images decreases from 2 to 0.4. As labeled images increase, the positive effect of the unlabeled branch decreases, with a proportional decline in the performance gain in the 5-shot setting. Similar experiments on COCO support the above conclusion in Tab. 2. Faced with this dataset's scenes containing multiple objects in complex environments, RiFeNet still outperforms the current best DCAMA by 0.8% in mean mIoU, leading on almost all splits in the 1-shot setting. The comparison results demonstrate the benefits of RiFeNet. The unlabeled branch provides RiFeNet with richer relevant information, which in turn improves the performance of the model.
Qualitative results also prove the effectiveness of RiFeNet. In Fig. 5 and Fig. A of the Appendix, the foreground objects in support and query images vary a lot, with inconsistent postures, appearances, and angles of photography. Despite this large intra-class variability, RiFeNet achieves a significant improvement in maintaining foreground semantic consistency compared with the baseline. As for the similarity of background and foreground, the model handles this binary identification much better, even in cases with neighboring objects of similar appearance, with foreground occlusion, and with multiple classes of objects. Looking back to Fig. 1, our extracted features are essential to maintain foreground semantic consistency and provide inter-class distinction for binary classification.

Figure 5: Qualitative segmentation results on novel classes on PASCAL-5i. From left to right: support image with mask, query input, query ground-truth mask, query prediction of the baseline, and prediction of RiFeNet.

4.3 Ablation Studies

RiFeNet improves pixel-level binary classification. To demonstrate the effectiveness of our proposed unlabeled enhancement and multi-level prototypes in RiFeNet, we conduct diagnostic experiments in Tab. 3. All comparisons are set under the 1-shot setting, with ResNet50 as the backbone.

Un | MP | split0 split1 split2 split3 mIoU
   |    | 65.7 71.0 59.5 59.7 64.0
✓  |    | 67.3 71.8 66.2 59.2 66.1
   | ✓  | 66.0 72.1 66.2 60.4 66.2
✓  | ✓  | 68.4 73.5 67.1 59.4 67.1

Table 3: Ablation studies on the key components of RiFeNet. "Un" and "MP" denote the use of the unlabeled branch and the multi-level prototypes, respectively.

Using either the unlabeled branch or multi-level prototype interaction results in a boost of approximately 2%. When the two strategies work together, RiFeNet improves by 3.1% on top of the baseline.

Different design choices of multi-level prototypes.
We conduct ablation experiments on the model design details of the multi-level prototype, as shown in Tab. 4. Consistent with the theoretical analysis in Sec. 3.4, it proves that our practices, such as adding guidance to unlabeled branches, are reasonable and reliable.

components        | split0 split1 split2 split3 mIoU
gp (support-only) | 67.3 71.8 66.2 59.2 66.1
gp+gp             | 67.5 73.1 66.2 58.4 66.3
gp+lp (w/o CA)    | 68.1 73.2 66.7 59.1 66.8
gp+lp (w/ CA)     | 68.4 73.5 67.1 59.4 67.1

Table 4: Ablation studies on multi-level prototypes. "gp" and "lp" denote global and local prototypes, respectively. That is, "gp+gp" means extracting both query and support prototypes globally. "CA" refers to channel-wise attention.

Different design choices of unlabeled branches. We conduct experiments with differently designed unlabeled branches to further explore their effect. As shown in Tab. 5, the unlabeled branch without guided query prototypes results in even worse performance than the baseline, which is consistent with our analysis in Sec. 3.3. On the other hand, because the unlabeled inputs come from resampling the training dataset, we double the training iterations of the baseline for a fair comparison. Increased training iterations have little effect on the baseline due to early convergence. This proves that the effectiveness of our method comes not from the multiple sampling of data but from the learned discriminative and semantic features.

components     | epoch | split0 split1 split2 split3 mIoU
w/o unlabel    | 200   | 66.0 72.1 66.2 60.4 66.2
w/o unlabel    | 400   | 66.5 72.4 65.5 59.5 66.0
un (w/o guide) | 200   | 66.9 72.2 65.9 58.3 65.8
un (w/ guide)  | 200   | 68.4 73.5 67.1 59.4 67.1

Table 5: Ablation studies on the unlabeled branch. "w/ guide" refers to the use of query local prototypes in the unlabeled branch for guidance, while "w/o guide" means using prototypes generated from the unlabeled branch itself.

Different hyper-parameters. We first look into the effect of different numbers of unlabeled inputs in a single meta-training process. Tab. 6 shows the results on PASCAL-5i under the 1-shot setting, with ResNet50 as the backbone.

num | split0 split1 split2 split3 mIoU
0   | 66.0 72.1 66.2 60.4 66.2
1   | 66.8 72.8 66.9 59.8 66.6
2   | 68.4 73.5 67.1 59.4 67.1
3   | 65.9 72.6 66.9 59.8 66.0

Table 6: Ablation studies of different numbers of unlabeled images in a single meta-training process.

The best results are obtained when the number of unlabeled images is set to 2. Initially, the segmentation accuracy of the model increases as the number of unlabeled images increases; when the number continues to increase, the accuracy decreases instead. We believe the reason is that when the effect of unlabeled enhancement counts for much more than the query branch itself, the focus of feature mining may shift to the unlabeled branch, thus disturbing the query prediction. The segmentation accuracy decreases after the features are blurred. We also conduct detailed ablation experiments with other parameters, which are included in the Appendix.

5 Conclusion

In few-shot segmentation, traditional methods suffer from semantic ambiguity and inter-class similarity. Thus, from the perspective of pixel-level binary classification, we propose RiFeNet, an effective model with an unlabeled branch constraining foreground semantic consistency. Without extra data, this unlabeled branch improves the intra-class generalization of the foreground. Moreover, we propose to further enhance the discrimination of background and foreground by a multi-level prototype generation and interaction module.

References

Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; and Adam, H. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision.

Dong, N.; and Xing, E. P. 2018. Few-shot semantic segmentation with prototype learning. In BMVC, volume 3.
Learning Discriminative Noise Guidance for Image Forgery Detection and Localization

Jiaying Zhu*, Dong Li*, Xueyang Fu†, Gang Yang, Jie Huang, Aiping Liu, Zheng-Jun Zha
University of Science and Technology of China
{zhujy53, dongli6, yg1997, hj0117}@mail.ustc.edu.cn, {xyfu, aipingl, zhazj}@ustc.edu.cn

Abstract

This study introduces a new method for detecting and localizing image forgery by focusing on manipulation traces within the noise domain. We posit that nearly invisible noise in RGB images carries tampering traces, useful for distinguishing and locating forgeries. However, the advancement of tampering technology complicates the direct application of noise for forgery detection, as the noise inconsistency between forged and authentic regions is not fully exploited. To tackle this, we develop a two-step discriminative noise-guided approach to explicitly enhance the representation and use of noise inconsistencies, thereby fully exploiting noise information to improve the accuracy and robustness of forgery detection. Specifically, we first enhance the noise discriminability of forged regions compared to authentic ones using a de-noising network and a statistics-based constraint. Then, we merge a model-driven guided filtering mechanism with a data-driven attention mechanism to create a learnable and differentiable noise-guided filter. This sophisticated filter allows us to maintain the edges of forged regions learned from the noise. Comprehensive experiments on multiple datasets demonstrate that our method can reliably detect and localize forgeries, surpassing existing state-of-the-art methods.

Introduction

Forged images present risks in numerous areas, such as copyright watermark removal, fake news generation, and even evidence falsification in court (Zhang et al. 2023a,b; Lin et al. 2023). Consequently, image forgery detection and localization (IFDL) is of paramount importance. However, with the widespread use of techniques like GAN (Isola et al.
2017; Zhu et al. 2017), VAE (Kingma and Welling 2013; Van Den Oord, Vinyals et al. 2017), and homogeneous manipulation (Cong et al. 2022; Ling et al. 2021), the manipulation traces of forged images become visually invisible, making IFDL challenging. Thus, it is crucial to devise effective methods for accurate manipulation trace capture.

*Co-first authors contributed equally, † corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: We process the forged image using denoising networks trained with different noise standard deviations (CBDNet (Guo et al. 2019) trained with noise standard deviations of 15, 25, and 50), and then extract the noise separately. For this image, the denoiser with a standard deviation of 25 helps to obtain discriminative noise. (Best viewed on screen.)

In contrast, the noise distribution of authentic and forged regions is inconsistent (Zhou et al. 2018; Wang et al. 2022a), leaving manipulation traces in the noise domain. Many researchers have used this noise information to aid IFDL, achieving significant results. For instance, RGB-N (Zhou et al. 2018) leverages noise features extracted from a steganalysis rich model filter layer to identify the noise inconsistency between authentic and tampered regions. SPAN (Hu et al. 2020) extracts anomalous local noise features from noise maps using CNNs to differentiate heterologous regions. MVSS-Net (Chen et al. 2021) learns multi-view features by utilizing both noise views and boundary artifacts. These methods directly build an end-to-end mapping from noise features to masks and adopt fusion strategies to integrate RGB and noise information to enhance forgery detection accuracy. However, as tampering and post-processing techniques evolve, the difference between the two regions in the noise domain becomes less noticeable or even hidden.
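The noise-extraction idea behind Figure 1 (subtract the denoised image from the input and inspect the residual) can be sketched as follows. The 1-D box filter here is only a toy stand-in for CBDNet, an assumption made so the sketch stays self-contained:

```python
# Toy sketch of noise-residual extraction: noise = image - denoiser(image).
# `box_denoise` is a stand-in for a learned denoiser such as CBDNet
# (an illustrative assumption, not the paper's implementation).

def box_denoise(pixels, radius=1):
    """1-D box filter acting as a toy denoiser."""
    n = len(pixels)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out

def noise_residual(pixels):
    """Residual between the input and its denoised version."""
    denoised = box_denoise(pixels)
    return [p - d for p, d in zip(pixels, denoised)]
```

Under this view, a denoiser whose strength matches the image noise should leave a residual whose statistics differ between authentic and forged regions, which is what Figure 1 illustrates.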
Given these findings, we propose that explicitly learning and leveraging noise inconsistencies can further improve IFDL performance. Therefore, we introduce a novel two-step noise-guided scheme. The first step involves training a noise extractor to explicitly enlarge the noise distribution difference between authentic and forged regions. We use a denoising network followed by a Bayar convolution (Wu, AbdAlmageed, and Natarajan 2019) to construct the noise extractor, optimized with a statistics-based constraint. The rationale for using a denoiser stems from the observation that a suitable denoising network can amplify the noise distribution difference between the two regions. As shown in Figure 1, the denoising network with a standard deviation of 25 maximizes the difference between the authentic and forged regions in the noise domain, while directly extracted noise cannot achieve the same effect. To adaptively tune the denoising network, we impose a customized constraint on the processed image noise, designed based on the Jensen–Shannon (JS) divergence of Gaussian distributions.

The second phase involves integrating the noise inconsistency with RGB information for forgery detection and localization. Unlike previous fusion strategies, we utilize the noise explicitly to guide the RGB branch, significantly enhancing effectiveness. We merge hand-crafted guided filtering (He, Sun, and Tang 2012) with a data-driven attention mechanism (Wang et al. 2018) to create the Cross-Attention-Based Guided Filter (CAGF). Thanks to the local linearity and edge preservation of guided filtering, CAGF not only fully integrates the complementary information in the RGB and noise domains but also ensures the transfer of structural information from the noise domain to the RGB domain.
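The Bayar convolution mentioned above constrains its kernel to behave like a prediction-error filter. A minimal sketch of that constraint, with the projection step written as our assumption about how the constraint is typically enforced (center weight fixed to -1, remaining weights renormalized to sum to 1):

```python
# Sketch of the Bayar-style constrained-convolution kernel projection
# (assumed enforcement: center weight = -1, off-center weights sum to 1).
# Shown on a flat 1-D kernel for simplicity; real layers use 2-D kernels.

def project_bayar(kernel):
    """Project a flat odd-length kernel onto the Bayar constraint set."""
    c = len(kernel) // 2                  # center index
    rest = [w for i, w in enumerate(kernel) if i != c]
    s = sum(rest)
    scale = 1.0 / s if s != 0 else 1.0    # renormalize off-center weights
    out = [w * scale for w in kernel]
    out[c] = -1.0                         # center weight fixed to -1
    return out
```

Reapplying this projection after each optimizer step keeps the layer focused on local prediction errors (i.e., noise-like residuals) rather than image content.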
In essence, our method explicitly learns the noise inconsistency in the first phase and utilizes this representation in the second phase. This approach allows us to effectively mine the noise prior and use it to model forgery inconsistency. Our contributions are as follows:
• We propose a novel discriminative noise-guided scheme that explicitly enhances the representation and exploitation of noise inconsistencies.
• We develop a method to highlight noise inconsistencies in forged regions, using a denoising network to process images and a statistics-based constraint to optimize the noise extraction.
• We design a cross-attention-based guided filter that combines model-driven and data-driven technologies to explicitly enhance the guiding effect of noise inconsistencies on the RGB branch, fully utilizing the forgery-informed noise representations.
Extensive experiments on several representative benchmarks show that our method is superior to state-of-the-art methods, especially on the real-life dataset IMD20 (Novozamsky, Mahdian, and Saic 2020).

Related Work

Noise-unrelated IFDL. Most early works tend to focus on a specific type of forgery, including splicing (Huh et al. 2018), copy-move (Cozzolino, Poggi, and Verdoliva 2015), and removal (Aloraini, Sharifzadeh, and Schonfeld 2020). While these works demonstrate satisfactory performance, their practical application encounters challenges due to the unpredictability of forgery types. Therefore, recent studies emphasize the need for approaches that employ one model to address multiple forgery types. ManTra-Net (Wu, AbdAlmageed, and Natarajan 2019) leverages an end-to-end network that extracts image manipulation trace features and identifies anomalous regions by assessing how different a local feature is from its reference features. PSCC-Net (Liu et al.
2022) uses a progressive spatial-channel correlation module that exploits features at different scales and dense cross-connections to generate masks in a coarse-to-fine fashion. Besides, some methods (Liu et al. 2024) explore forgery detection in the frequency domain; for example, ObjectFormer (Wang et al. 2022b) captures forged traces from the high-frequency parts of images. Different from the above, we focus on fully mining and exploiting noise inconsistencies.

Noise-assisted IFDL. A series of methods use noise information to assist IFDL. Mahdian et al. (Mahdian and Saic 2009) detect changes in noise standard deviations for blind image forensics. Lyu et al. (Lyu, Pan, and Zhang 2014) expose region splicing by revealing inconsistencies in local noise levels. These methods focus on specific tampering artifacts and are limited to specific forgeries. Recently, some deep learning-based IFDL methods also utilize noise information as assistance. RGB-N (Zhou et al. 2018) explores leveraging noise features to model the inconsistency between tampered and untouched regions. NoiseDF (Wang and Chow 2023) extracts noise traces and features from the cropped face and background squares of video frames. ERMPC (Li et al. 2023) utilizes noise branches as auxiliary information to facilitate further refinement of forgery localization. TruFor (Guillaro et al. 2023) learns a noise-sensitive fingerprint by training on real data in a self-supervised manner. In contrast, our method explicitly learns the noise inconsistency via statistical constraints and exploits the noise in the form of explicit guidance.

Methodology

Overview

Noise features between the source and target images are unlikely to match (Zhou et al. 2018), and the tampering operation destroys the natural noise distribution (Wang et al. 2022a). Besides, using noise can suppress the content information, which is beneficial for extracting semantic-agnostic features for IFDL (Chen et al. 2021).
However, with the development of tampering and post-processing techniques, the noise inconsistency is no longer obvious, or is even hidden. We argue that explicitly mining and exploiting noise inconsistencies can further improve accuracy and robustness. Therefore, we propose a two-step strategy to make full use of noise inconsistencies for IFDL, consisting of noise representation learning and a noise-guided network.

First, we propose a learning scheme using a denoising network and a customized constraint. The input image is denoted X ∈ R^{H×W×3}, where H and W are the height and width of the image. X is fed into the denoising network to obtain the image X′, and then the noise G_d ∈ R^{H×W×3} is extracted by BayarConv (Wu, AbdAlmageed, and Natarajan 2019). The choice of denoising network is not the focus of this work, so we use the widely adopted CBDNet (Guo et al. 2019) as a trade-off between performance and computation. We impose a statistics-based constraint on G_d to ensure that the noise distributions of the two regions are pulled apart.

Figure 2: An overview of our two-step discriminative noise-guided scheme, containing (a) the Noise Representation Learning Network, in which the blue components represent the denoising network CBDNet, and (b) the Noise Guided Network, in which the well-trained DNE comes from the Noise Representation Learning Network. The dashed red lines denote the constraints we imposed.

Second, we train the noise-guided network (NGNet) to explicitly apply the noise inconsistency. NGNet is a dual-branch network that contains multiple cross-attention-based guided filters, which progressively guide the RGB information with noise inconsistencies for more accurate IFDL.
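The guided filters mentioned above build on the local linear model of the classic guided filter (He, Sun, and Tang 2012). The following 1-D sketch shows that non-learned baseline only; it is not CAGF itself, which replaces these hand-crafted statistics with attention and learned convolutions:

```python
# 1-D sketch of the classic (non-learned) guided filter: within each window,
# assume q = a * g + b with a = cov(g, r) / (var(g) + eps) and
# b = mean(r) - a * mean(g). This is the He-Sun-Tang baseline, not CAGF.

def box_mean(x, radius=1):
    """Windowed mean with clamped borders."""
    return [sum(x[max(0, i - radius):i + radius + 1]) /
            len(x[max(0, i - radius):i + radius + 1]) for i in range(len(x))]

def guided_filter_1d(guide, inp, radius=1, eps=1e-8):
    mg, mr = box_mean(guide, radius), box_mean(inp, radius)
    mgg = box_mean([g * g for g in guide], radius)
    mgr = box_mean([g * r for g, r in zip(guide, inp)], radius)
    a = [(c - x * y) / ((v - x * x) + eps)           # cov / (var + eps)
         for c, x, y, v in zip(mgr, mg, mr, mgg)]
    b = [y - ai * x for ai, x, y in zip(a, mg, mr)]  # mean(r) - a * mean(g)
    ma, mb = box_mean(a, radius), box_mean(b, radius)
    return [mai * g + mbi for mai, g, mbi in zip(ma, guide, mb)]
```

Because the output is locally linear in the guide, edges of the guide (here, the noise branch) are transferred to the output, which is the edge-preserving property the paper exploits.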
Noise Representation Learning

To explicitly represent the noise inconsistency, we design a noise representation learning network (NRLNet), shown in Figure 2a, which uses a denoising network to process images and statistical losses to optimize the network. Considering that the noise distribution of a forged image is unknown, we use the blind denoising network CBDNet (Guo et al. 2019) and load the blind-denoising weights as initialization. The architecture and training strategy are described below.

We process the input image X using a CBDNet-based network. Specifically, we first predict a noise level map \hat{l} ∈ R^{H×W×3}, which can be viewed as weights related to the noise distribution. Then we feed \hat{l} together with the input X into the encoder-decoder structure to get the image X′ ∈ R^{H×W×3}:

  \hat{l} = NE(X),   (1)

  X' = D(\mathrm{Concat}(X, \hat{l})),   (2)

where Concat denotes the concatenation operation, NE is the noise estimation module implemented as a five-layer fully convolutional network with 3 × 3 convolution kernels, and D is a U-Net architecture that produces images with discriminative noise. Then, following (Chen et al. 2021), we adopt BayarConv to extract the noise G_d ∈ R^{H×W×3} from X′. Besides, to make the learned noise more conducive to IFDL, we feed the noise into a Res-CNN to predict a coarse localization result G_c ∈ R^{H×W×1}. The Res-CNN contains ten res-blocks, each consisting of two 3 × 3 convolutions and a ReLU function.

Optimization. To explicitly pull apart the noise distributions of the two regions (authentic and forged), we introduce the JS divergence to constrain G_d. First, we divide G_d into the noise of the authentic region N_a and the noise of the forged region N_f with the help of the ground-truth mask. Stationary disturbances in images can be modeled as Gaussian (Guo et al. 2019), and both N_a and N_f can be regarded as sampled values of noise.
Therefore, we utilize the JS divergence of continuous Gaussians to measure the distance between the noise distributions of the two regions:

  \mathrm{JSD}(P_a \| P_f) = \tfrac{1}{2}\mathrm{KL}(P_a \| M) + \tfrac{1}{2}\mathrm{KL}(P_f \| M),   (3)

where P_a and P_f are the distributions of N_a and N_f, respectively, and M = (P_a + P_f)/2. The KL divergence of two Gaussian distributions is calculated as follows:

  \mathrm{KL}(P_1 \| P_2) = \log\sigma_2 - \log\sigma_1 + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} - \frac{1}{2},   (4)

where σ_1, σ_2 are the standard deviations of P_1 and P_2, and µ_1, µ_2 are their means. Then, Equation 3 is calculated as follows:

  \mathrm{JSD} = \log\sqrt{\frac{\sigma_a^2 + \sigma_f^2}{2}} - \frac{\log\sigma_a + \log\sigma_f}{2} + \frac{(\mu_a - \mu_f)^2}{\sigma_a^2 + \sigma_f^2} + \frac{1}{2},   (5)

where σ_a, σ_f are the standard deviations of N_a and N_f, and µ_a, µ_f are their mean values.

In addition, if only the JS divergence is used as the loss function, the optimization of the network oscillates. Therefore, we adopt a forgery localization loss to assist the optimization, which is also more conducive to the final image forgery localization. We combine the assisted loss and the JS divergence to compose the loss function for noise representation learning, which can be written as:

  L_n = \lambda(1 - \mathrm{JSD}) + (1 - \lambda)\,L(Y, G_c),   (6)

where L denotes the Dice loss (Chen et al. 2021), Y ∈ R^{H×W×1} is the ground truth, and λ is a hyperparameter balancing the two terms, set to 0.80.

Algorithm 1: Cross-attention-based Guided Filter (CAGF)
Input: G_n: guidance image (noise); G_r: input image (RGB)
Output: q: CAGF filtering output
1: mean_n = MFConv(G_n); mean_r = MFConv(G_r)
2: # Calculate variance and covariance (CMA: Figure 3)
   var_n = CMA(G_n, G_n); cov_nr = CMA(G_n, G_r)
3: # Calculate coefficients of the local linear relationship
   a = ResBlock(Concat(cov_nr, var_n)); b = mean_r − a ∗ mean_n
4: mean_a = MFConv(a); mean_b = MFConv(b)
5: # Output
   q = mean_a ∗ G_n + mean_b
   return q
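The closed-form divergences of Eqs. (3)-(4) can be sketched for 1-D Gaussians as below. Approximating the mixture M by a Gaussian whose mean and variance are the averages of the two components' moments is our assumption for this illustration:

```python
import math

# Closed-form KL divergence of two 1-D Gaussians (Eq. 4) and the resulting
# JS divergence (Eq. 3). Treating M as a Gaussian with averaged mean and
# variance is an assumption made for this sketch.

def kl_gauss(mu1, s1, mu2, s2):
    """KL(P1 || P2) for Gaussians with means mu and std devs s."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

def jsd_gauss(mu_a, s_a, mu_f, s_f):
    """Symmetric JS divergence between the two region noise distributions."""
    mu_m = 0.5 * (mu_a + mu_f)
    s_m = math.sqrt(0.5 * (s_a ** 2 + s_f ** 2))
    return 0.5 * kl_gauss(mu_a, s_a, mu_m, s_m) + 0.5 * kl_gauss(mu_f, s_f, mu_m, s_m)
```

Maximizing this quantity over the authentic- and forged-region noise statistics is what pushes the two distributions apart during noise representation learning.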
Noise Guided Network

As shown in Figure 2b, after the NRLNet converges, we embed the well-trained discriminative noise extractor (shown in the box: the denoiser and BayarConv) into the noise-guided network (NGNet). Different from previous work, we apply the noise in the form of explicit guidance. The network architecture of NGNet, the cross-attention-based guided filter (CAGF), and the optimization are detailed below.

Network architecture. We utilize two branches to handle RGB and noise information. The well-trained discriminative noise extractor is used to obtain the input of the noise branch so as to better extract forged traces. We use ResNet-50 pretrained on ImageNet (Deng et al. 2009) as the backbone network. Then, to guarantee the guiding effect of the noise inconsistency on RGB, we design the CAGF and place it alternately with the ResNet blocks, as shown in Figure 2b. Guided by the noise, the RGB branch can extract features highly correlated with tampering artifacts. Finally, we transform the extracted features with plain convolutional layers and bilinear upsampling into the final predicted mask G_out ∈ R^{H×W×1}.

Cross-attention-based guided filter. Existing IFDL methods directly use fusion strategies, which cannot explicitly guarantee that the tampering artifacts in the noise domain are fully exploited. The guided filter can guarantee the transfer of structural information from the guide image to the target image and has an edge-preserving property (He, Sun, and Tang 2012). Inspired by this, we explore the fusion of noise and RGB information from a guidance perspective and propose the CAGF, as shown in Algorithm 1; we use three CAGF blocks in practice. The traditional guided filter is derived from a local linear model and generates the filtering output by considering the guidance.

Figure 3: Cross-Modal Attention. It can flexibly calculate the variance and covariance of noise and RGB (Equations 7 to 8).
If X and Y are the same, then Z is the variance; if they are not, then Z is the covariance.

However, the traditional guided filter is a non-trainable algorithm that does not consider the mutual dependency between the guidance and the target, which is inappropriate for IFDL. Because of the large information gap between noise and RGB, simply transferring structural information from noise to RGB would produce various artifacts. Therefore, building on the traditional algorithm, we use an attention mechanism to calculate the variance and covariance and use convolutional layers instead of the mean filter. Specifically, we denote the input features derived from the noise stream and the RGB stream as G_n ∈ R^{Hs×Ws×Cs} and G_r ∈ R^{Hs×Ws×Cs}. We take G_n as the guide image and G_r as the input image. First, we design a novel cross-modal attention (CMA) to obtain the covariance and variance. Taking the calculation of the covariance as an example, CMA takes G_n and G_r as input and leverages the computation block described in Figure 3 to convert them to cov_{nr} ∈ R^{Hs×Ws×Cs}:

  \mathrm{cov}_{nr} = \mathrm{CMA}(G_n, G_r) = \mathrm{Res}\big(C(C(G_n)^T \otimes C(G_r)) \otimes C(C(G_n) \odot C(G_r))\big),   (7)

where ⊗ is matrix multiplication, ⊙ is the Hadamard product, Res denotes a res-block containing two 3 × 3 convolutions and a ReLU function, and C denotes a 1 × 1 convolution. We perform matrix multiplication on G_n and G_r to obtain C ∈ R^{N×N} (N = Hs × Ws) instead of the correlation coefficient corr_{nr} of traditional guided filtering, and we calculate the Hadamard product of the two to replace mean_n ∗ mean_r in the traditional algorithm (He, Sun, and Tang 2012). Before the matrix multiplication of G_n and G_r, both are reshaped to (Cs/r) × N, where r is a scalar that reduces the channel dimension for computational efficiency. In the same way, when both inputs of CMA are the noise features G_n, we obtain the variance var_n ∈ R^{Hs×Ws×Cs}:

  \mathrm{var}_n = \mathrm{CMA}(G_n, G_n).   (8)

A res-block is then used to obtain the coefficient a ∈ R^{Hs×Ws×Cs}:

  a = \mathrm{Res}(\mathrm{Concat}(\mathrm{cov}_{nr}, \mathrm{var}_n)).
  (9)

Next, we follow the equation in Algorithm 1 to calculate b ∈ R^{Hs×Ws×Cs}. Inspired by (Wu et al. 2018), we use a convolution operation (MFConv) instead of the mean filter:

  \mathrm{mean}_M = \mathrm{MFConv}(M) = \frac{\mathrm{Conv}(M)}{\mathrm{Conv}(J)},   (10)

where M is the input of MFConv, J is the all-ones matrix with the same size as M, and Conv is a 3 × 3 convolution. Finally, we obtain the output of the CAGF, q ∈ R^{Hs×Ws×Cs}, according to the local linear relationship:

  q = \mathrm{mean}_a \odot G_r + \mathrm{mean}_b,   (11)

where mean_a and mean_b are the results of a and b after MFConv, with size Hs × Ws × Cs.

Detector. For the detector, we apply the ConvGeM proposed by MVSS-Net++ (Dong et al. 2023), which converts the localization result G_out into the detection prediction D_out. ConvGeM strikes a good balance between detection and localization through a decayed skip connection. Thus, we use ConvGeM to obtain a more accurate detection result:

  D_{out} = \mathrm{ConvGeM}(G_{out}).   (12)

Optimization. Following most studies (Chen et al. 2021; Wang et al. 2022c), we also utilize edge supervision. This is not the focus of this work, so we adopt common techniques. Following (Chen et al. 2021), we use a Sobel layer and a residual block to obtain the edge prediction G_e ∈ R^{He×We×1} in a shallow-to-deep manner. For the edge loss, the ground-truth edge map E ∈ R^{H×W×1} is downsampled to a smaller size E′ ∈ R^{He×We×1} to match G_e. This strategy outperforms upsampling G_e in terms of both computational cost and performance. The loss of NGNet can be written as:

  L_N = \alpha L_1(Y, G_{out}) + \beta L_2(y, D_{out}) + (1 - \alpha - \beta)\,L_3(E', G_e),   (13)

where L_1 and L_3 denote the Dice loss (Chen et al. 2021), L_2 is the BCE loss, y is a label representing the authenticity of the image, and α, β are hyperparameters balancing the loss terms. In practice, α is set to 0.60 and β to 0.20. Note that authentic images are only used to compute L_2.

Experiments

Experimental Setup

Pre-training Data.
We construct a substantial image tampering dataset and employ it for pre-training our model. This dataset comprises three categories: 1) splicing, 2) copy-move, and 3) removal.

Testing Datasets. Following (Liu et al. 2022), we evaluate our model on CASIA (Dong, Wang, and Tan 2013), Coverage (Wen et al. 2016), Columbia (Hsu and Chang 2006), NIST Nimble 2016 (NIST16) (Guan et al. 2019), and IMD20 (Novozamsky, Mahdian, and Saic 2020). We apply the same training/testing splits as (Hu et al. 2020; Wang et al. 2022b) to fine-tune our model for fair comparisons.

Evaluation Metrics. To quantify localization performance, following previous works (Hu et al. 2020; Wang et al. 2022b), we use pixel-level Area Under Curve (AUC) and the F1 score on manipulation masks. To evaluate detection performance, we use image-level AUC and F1 score. Since binary masks are required to compute F1 scores, we adopt the Equal Error Rate (EER) threshold to binarize them.

Image Forgery Localization

Following SPAN (Hu et al. 2020) and ObjectFormer (Wang et al. 2022b), our model is compared with other state-of-the-art tampering localization methods under two settings: 1) training on the synthetic dataset and evaluating on the full test datasets, and 2) fine-tuning the pre-trained model on the training split of the test datasets and evaluating on their test split.

Pre-trained Model. Table 1a shows the localization performance (pixel-level AUC) of the pre-trained models of different methods on five datasets. We compare our model, NGNet, with MantraNet (Wu, AbdAlmageed, and Natarajan 2019), SPAN (Hu et al. 2020), PSCC-Net (Liu et al. 2022), ObjectFormer (Wang et al. 2022b), TANet (Shi, Chen, and Zhang 2023), and HiFi-Net (Guo et al. 2023). The pre-trained NGNet achieves the best localization performance on Coverage, CASIA, NIST16, and IMD20, and ranks second on Columbia.
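The EER binarization used for the F1 scores can be sketched as a threshold scan that picks the point where the false positive and false negative rates are closest. This is a toy illustration of the protocol, not the authors' evaluation code:

```python
# Toy sketch of choosing the Equal Error Rate (EER) threshold: scan the
# candidate thresholds given by the observed scores and return the one where
# the false positive rate and false negative rate are closest.

def eer_threshold(scores, labels):
    """scores: soft predictions in [0, 1]; labels: 0 (authentic) / 1 (forged)."""
    best_t, best_gap = 0.0, float("inf")
    neg = max(1, labels.count(0))
    pos = max(1, labels.count(1))
    for t in sorted(set(scores)):
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        gap = abs(fp / neg - fn / pos)
        if gap < best_gap:
            best_gap, best_t = gap, t
    return best_t
```

Pixels (or images) with scores at or above the returned threshold are then labeled forged when computing F1.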
In particular, NGNet achieves 94.1% on the copy-move dataset Coverage, whose forged regions are visually indistinguishable from the background. This validates the superior ability of our model to capture tampering traces in the noise domain. We do not achieve the best performance on Columbia, falling behind TANet by 0.2% AUC. A likely explanation is that the distribution of their synthesized training data closely resembles that of the Columbia dataset. This is further supported by the results in Table 1b, which show that NGNet outperforms TANet in terms of both AUC and F1 after fine-tuning. Furthermore, it is worth pointing out that NGNet achieves these results with less pre-training data.

Fine-tuned Model. The weights of the pre-trained model are used to initialize the fine-tuned models, which are trained on the training splits of the Coverage, CASIA, and NIST16 datasets, respectively. We evaluate the fine-tuned models of different methods in Table 1b. In terms of both AUC and F1, our model achieves significant performance gains. This validates that our method can precisely capture subtle tampering traces through discriminative noise representation learning and cross-attention-based guided filtering.

Image Forgery Detection

To avoid false alarms, we also consider the forgery detection task. Following ObjectFormer (Wang et al. 2022b), we conduct experimental comparisons on the CASIA-D dataset introduced by (Liu et al. 2022). As shown in Table 1c, our method delivers excellent detection performance, i.e., 99.81% AUC and 98.72% F1. Our method explicitly models and exploits noise inconsistencies, thus accurately distinguishing forged images from authentic ones.

Robustness Evaluation

To analyze the robustness of our model for localization, we follow the distortion settings in (Wang et al. 2022b) to degrade the forged images from NIST16.
These distortion types include resizing images to different scales, applying Gaussian blur with a kernel size k, adding Gaussian noise with a standard deviation σ, and performing JPEG compression with a quality factor q.

Figure 4: Visualization of the predicted manipulation mask by different methods. From top to bottom, we show forged images, GT masks, and the predictions of ManTraNet, SPAN, PSCC-Net, HiFi-Net and ours.

(a) Localization, Metric: AUC(%) – Pre-trained
Method   Data   Col.   Cov.   CAS.   NI.16   IM.20
ManTra   64K    82.4   81.9   81.7   79.5    74.8
SPAN     96K    93.6   92.2   79.7   84.0    75.0
PSCC     100K   98.2   84.7   82.9   85.5    80.6
Ob.Fo.   62K    95.5   92.8   84.3   87.2    82.1
TANet    60K    98.7   91.4   85.3   89.8    84.9
HiFi     100K   98.3   93.2   85.8   87.0    82.9
Ours     60K    98.5   94.1   87.2   90.0    85.2

(b) Localization, Metric: AUC(%)/F1(%) – Fine-tuned
Method   Cov.        CAS.        NI.16
RGB-N    81.7/43.7   79.5/40.8   93.7/72.2
SPAN     93.7/55.8   83.8/38.2   96.1/58.2
PSCC     94.1/72.3   87.5/55.4   99.6/81.9
Ob.Fo.   95.7/75.8   88.2/57.9   99.6/82.4
TANet    97.8/78.2   89.3/61.4   99.7/86.5
HiFi     96.1/80.1   88.5/61.6   98.9/85.0
Ours     98.1/81.2   91.3/62.1   99.8/86.8

(c) Detection
Method   AUC(%)   F1(%)
ManTra   59.94    56.69
SPAN     67.33    63.48
PSCC     99.65    97.12
Ob.Fo.   99.70    97.34
HiFi     99.50    97.40
Ours     99.81    98.72

Table 1: Image forgery detection and localization results. (a) Localization performance of the pre-trained model. (b) Localization performance of the fine-tuned model. (c) Detection performance on the CASIA-D dataset. (Bold means best, underline means second best.)

We compare the forgery localization performance (AUC scores) of our pre-trained model with SPAN and ObjectFormer on these corrupted data, and report the results in Table 2. Our model demonstrates better robustness against various distortion techniques. It is worth noting that JPEG compression is commonly performed when uploading images to social media, and our model performs significantly better on compressed images.
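The distortion suite described above can be sketched in NumPy. This is a simplified stand-in for the settings of (Wang et al. 2022b), not their exact implementation: resizing is nearest-neighbour, blurring is a separable Gaussian, and JPEG compression is omitted because it requires an image codec.

```python
import numpy as np

def resize(img, scale):
    # nearest-neighbour resampling as a stand-in for the paper's resizing
    h, w = img.shape
    ys = np.clip((np.arange(int(h * scale)) / scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(int(w * scale)) / scale).astype(int), 0, w - 1)
    return img[np.ix_(ys, xs)]

def gaussian_blur(img, k=3, sigma=1.0):
    # separable Gaussian blur with edge padding; kernel size k, std sigma
    ax = np.arange(k) - k // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()
    pad = k // 2
    out = np.array([np.convolve(np.pad(r, pad, mode="edge"), g, "valid") for r in img])
    out = np.array([np.convolve(np.pad(c, pad, mode="edge"), g, "valid") for c in out.T]).T
    return out

def gaussian_noise(img, sigma, seed=0):
    # additive Gaussian noise, clipped to the 8-bit intensity range
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

# a few of the settings listed in Table 2 (sigma for Blur is our assumption)
distortions = {
    "Resize(0.25x)": lambda x: resize(x, 0.25),
    "Blur(k=15)":    lambda x: gaussian_blur(x, k=15, sigma=3.0),
    "Noise(s=15)":   lambda x: gaussian_noise(x, 15),
}
```

Each distorted copy would then be fed through the pre-trained localization model and scored with pixel-level AUC, as in Table 2.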
Ablation Study

In this section, we conduct experiments to demonstrate the effectiveness of each component of our method. The noise representation learning (NRL) is designed to explicitly enlarge the difference in the noise distribution between the authentic and forged regions. The cross-attention-based guided filter (CAGF) contains the cross-modal attention (CMA) and the guided filtering mechanism (GF). CMA fully integrates the complementary information contained in the RGB and noise branches, while GF guarantees the transfer of structural information from the noise branch to the RGB branch and has an edge-preserving property. To evaluate the effectiveness of NRL, CMA and GF, we remove them separately from our method and evaluate the forgery localization performance on CASIA and NIST16. Table 3 presents the quantitative outcomes. The baseline denotes using ResNet-50 only. Without GF, the AUC scores decrease by 4.5% on CASIA and 5.9% on NIST16, while without CMA, the AUC scores decrease by 7.6% on CASIA and 10.9% on NIST16. Furthermore, when NRL is discarded, serious performance degradation can be observed in Table 3, i.e., 9.5% in terms of AUC and 20.9% in terms of F1 on CASIA.

Distortion          SPAN    Ob.Fo.   HiFi   Ours
no distortion       83.95   87.18    87.0   90.04
Resize(0.78×)       83.24   87.17    86.9   89.85 (↓0.19)
Resize(0.25×)       80.32   86.33    86.5   88.83 (↓1.21)
Blur(k = 3)         83.10   85.97    86.1   89.24 (↓0.80)
Blur(k = 15)        79.15   80.26    81.0   88.77 (↓1.27)
Noise(σ = 3)        75.17   79.58    81.9   89.16 (↓0.88)
Noise(σ = 15)       67.28   78.15    79.5   86.28 (↓3.76)
Compress(q = 100)   83.59   86.37    86.5   89.19 (↓0.85)
Compress(q = 50)    80.68   86.24    86.0   88.98 (↓1.06)

Table 2: The performance on the NIST16 dataset under various distortions. AUC scores are reported (in %). (Blur: GaussianBlur, Noise: GaussianNoise, Compress: JPEGCompress.)

Variants   CASIA AUC/F1   NIST16 AUC/F1
baseline   75.6/43.0      79.9/69.9
w/o NRL    82.6/49.1      83.2/73.1
w/o CMA    84.4/50.3      88.7/77.2
w/o GF     87.2/55.1      93.9/81.2
Ours       91.3/62.1      99.8/86.8

Table 3: Ablation results on the CASIA and NIST16 datasets using different variants of our proposed scheme.

Since different denoisers may yield different performance, we also perform an ablation study on the choice of denoiser. We compare against blind denoising networks such as DnCNN (Zhang et al. 2017), FFDNet (Zhang, Zuo, and Zhang 2018), RIDNet (Anwar and Barnes 2019) and DRUNet (Zhang et al. 2021). As shown in Table 4, CBDNet achieves the best trade-off between performance and computational complexity.

Denoiser   FLOPs(G)   Col.   Cov.   CAS.   NI.16   IM.20
FFDNet     31.80      88.4   87.3   85.2   82.1    75.6
DnCNN      145.52     87.0   89.4   86.1   83.9    78.4
CBDNet*    161.13     98.5   94.1   87.2   90.0    85.2
RIDNet     391.82     92.2   93.5   84.6   87.1    82.9
DRUNet     411.65     98.1   93.3   86.8   90.2    83.2

Table 4: The ablation study of different denoising architectures. FLOPs(G) is calculated on images of size 512 × 512. AUC scores are reported (in %).

Visualization Results

Qualitative results. As shown in Figure 4, we provide the predicted masks of various methods. The results demonstrate that our method can not only locate the tampered regions accurately but also produce sharp boundaries. This benefits from the ability of our model to explicitly enlarge the noise difference between the two regions while preserving edges.

Visualization of noise representation learning. We show the change of features with and without NRL in Figure 5. It is clear that NRL facilitates the learning of forgery features and yields more accurate contours of forged regions. This is because NRL helps the network capture tampering traces in the noise domain.

Figure 5: Visualization of noise representation learning and the cross-attention-based guided filter. From left to right, we display the forged images, masks, Grad-CAM (Selvaraju et al. 2017) of the feature map without (w/o) NRL, without CAGF, and with both (NGNet), and predictions.

Figure 6: Noise obtained without (w/o) and with (w) NRL.
NRL can model more explicit noise inconsistencies.

Visualization of cross-attention-based guided filter. To verify the effect of CAGF, we show the change of features before and after the filter in Figure 5. It can be seen that CAGF improves the accuracy of forgery localization; the network without CAGF makes false judgments about objects that are similar to the forgery.

Visualization of discriminative noise representation. To further validate the motivation and effectiveness of our method, we show the extracted noise without and with noise representation learning (NRL) in Figure 6. It can be seen that NRL obtains a more discriminative noise representation, which is forgery-informed.

Conclusion

In this paper, we propose a two-step noise-guided scheme comprising noise representation learning and a noise-guided network. The first step explicitly highlights the discriminability in noise distribution between authentic and forged regions. In the second step, a customized cross-attention-based guided filter that combines model-driven and data-driven techniques is devised to enhance the guiding effect of noise inconsistencies on the RGB branch, fully utilizing the forgery-informed noise representations. Our work provides a new research strategy for the difficult problem of extracting subtle forgery traces. Extensive experimental results on several benchmarks demonstrate the effectiveness of the proposed scheme.

Acknowledgements

This work was supported by the National Key R&D Program of China under Grant 2020AAA0105702, and the National Natural Science Foundation of China (NSFC) under Grants 62225207, U19B2038 and 62276243.

References

Aloraini, M.; Sharifzadeh, M.; and Schonfeld, D. 2020. Sequential and patch analyses for object removal video forgery detection and localization. IEEE Transactions on Circuits and Systems for Video Technology, 31(3): 917–930.
Anwar, S.; and Barnes, N. 2019. Real image denoising with feature attention. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3155–3164.
Chen, X.; Dong, C.; Ji, J.; Cao, J.; and Li, X. 2021. Image manipulation detection by multi-view multi-scale supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 14185–14193.
Cong, W.; Tao, X.; Niu, L.; Liang, J.; Gao, X.; Sun, Q.; and Zhang, L. 2022. High-Resolution Image Harmonization via Collaborative Dual Transformations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18470–18479.
Cozzolino, D.; Poggi, G.; and Verdoliva, L. 2015. Efficient dense-field copy–move forgery detection. IEEE Transactions on Information Forensics and Security, 10(11): 2284–2297.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255. IEEE.
Dong, C.; Chen, X.; Hu, R.; Cao, J.; and Li, X. 2023. MVSS-Net: Multi-View Multi-Scale Supervised Networks for Image Manipulation Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3): 3539–3553.
Dong, J.; Wang, W.; and Tan, T. 2013. CASIA image tampering detection evaluation database. In 2013 IEEE China Summit and International Conference on Signal and Information Processing, 422–426. IEEE.
Guan, H.; Kozak, M.; Robertson, E.; Lee, Y.; Yates, A. N.; Delgado, A.; Zhou, D.; Kheyrkhah, T.; Smith, J.; and Fiscus, J. 2019. MFC datasets: Large-scale benchmark datasets for media forensic challenge evaluation. In 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW), 63–72. IEEE.
Guillaro, F.; Cozzolino, D.; Sud, A.; Dufour, N.; and Verdoliva, L. 2023. TruFor: Leveraging all-round clues for trustworthy image forgery detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20606–20615.
Guo, S.; Yan, Z.; Zhang, K.; Zuo, W.; and Zhang, L. 2019. Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1712–1722.
Guo, X.; Liu, X.; Ren, Z.; Grosz, S.; Masi, I.; and Liu, X. 2023. Hierarchical Fine-Grained Image Forgery Detection and Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3155–3165.
He, K.; Sun, J.; and Tang, X. 2012. Guided image filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(6): 1397–1409.
Hsu, J.; and Chang, S. 2006. Columbia uncompressed image splicing detection evaluation dataset. Columbia DVMM Research Lab.
Hu, X.; Zhang, Z.; Jiang, Z.; Chaudhuri, S.; Yang, Z.; and Nevatia, R. 2020. SPAN: Spatial pyramid attention network for image manipulation localization. In European Conference on Computer Vision, 312–328. Springer.
Huh, M.; Liu, A.; Owens, A.; and Efros, A. A. 2018. Fighting fake news: Image splice detection via learned self-consistency. In Proceedings of the European Conference on Computer Vision (ECCV), 101–117.
Isola, P.; Zhu, J.-Y.; Zhou, T.; and Efros, A. A. 2017. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1125–1134.
Kingma, D. P.; and Welling, M. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
Li, D.; Zhu, J.; Wang, M.; Liu, J.; Fu, X.; and Zha, Z.-J. 2023. Edge-Aware Regional Message Passing Controller for Image Forgery Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8222–8232.
Lin, X.; Wang, S.; Deng, J.; Fu, Y.; Bai, X.; Chen, X.; Qu, X.; and Tang, W. 2023. Image manipulation detection by multiple tampering traces and edge artifact enhancement. Pattern Recognition, 133: 109026.
Ling, J.; Xue, H.; Song, L.; Xie, R.; and Gu, X. 2021.
Region-aware adaptive instance normalization for image harmonization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9361–9370.
Liu, J.; Xie, J.; Wang, Y.; and Zha, Z.-J. 2024. Adaptive Texture and Spectrum Clue Mining for Generalizable Face Forgery Detection. IEEE Transactions on Information Forensics and Security, 19: 1922–1934.
Liu, X.; Liu, Y.; Chen, J.; and Liu, X. 2022. PSCC-Net: Progressive spatio-channel correlation network for image manipulation detection and localization. IEEE Transactions on Circuits and Systems for Video Technology.
Lyu, S.; Pan, X.; and Zhang, X. 2014. Exposing region splicing forgeries with blind local noise estimation. International Journal of Computer Vision, 110(2): 202–221.
Mahdian, B.; and Saic, S. 2009. Using noise inconsistencies for blind image forensics. Image and Vision Computing, 27(10): 1497–1503.
Novozamsky, A.; Mahdian, B.; and Saic, S. 2020. IMD2020: A large-scale annotated dataset tailored for detecting manipulated images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 71–80.
Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, 618–626.
Shi, Z.; Chen, H.; and Zhang, D. 2023. Transformer-Auxiliary Neural Networks for Image Manipulation Localization by Operator Inductions. IEEE Transactions on Circuits and Systems for Video Technology, 1–1.
Van Den Oord, A.; Vinyals, O.; et al. 2017. Neural discrete representation learning. Advances in Neural Information Processing Systems, 30.
Wang, J.; Li, Z.; Zhang, C.; Chen, J.; Wu, Z.; Davis, L. S.; and Jiang, Y.-G. 2022a. Fighting Malicious Media Data: A Survey on Tampering Detection and Deepfake Detection.
arXiv preprint arXiv:2212.05667.
Wang, J.; Wu, Z.; Chen, J.; Han, X.; Shrivastava, A.; Lim, S.-N.; and Jiang, Y.-G. 2022b. ObjectFormer for image manipulation detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2364–2373.
Wang, M.; Fu, X.; Liu, J.; and Zha, Z.-J. 2022c. JPEG Compression-aware Image Forgery Localization. In Proceedings of the 30th ACM International Conference on Multimedia, 5871–5879.
Wang, T.; and Chow, K. P. 2023. Noise Based Deepfake Detection via Multi-Head Relative-Interaction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 14548–14556.
Wang, X.; Girshick, R.; Gupta, A.; and He, K. 2018. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7794–7803.
Wen, B.; Zhu, Y.; Subramanian, R.; Ng, T.-T.; Shen, X.; and Winkler, S. 2016. COVERAGE—A novel database for copy-move forgery detection. In 2016 IEEE International Conference on Image Processing (ICIP), 161–165. IEEE.
Wu, H.; Zheng, S.; Zhang, J.; and Huang, K. 2018. Fast end-to-end trainable guided filter. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1838–1847.
Wu, Y.; AbdAlmageed, W.; and Natarajan, P. 2019. ManTra-Net: Manipulation tracing network for detection and localization of image forgeries with anomalous features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9543–9552.
Zhang, F.; Liu, J.; Zhang, Q.; Sun, E.; Xie, J.; and Zha, Z.-J. 2023a. ECENet: Explainable and Context-Enhanced Network for Muti-modal Fact verification. In Proceedings of the 31st ACM International Conference on Multimedia, 1231–1240.
Zhang, K.; Li, Y.; Zuo, W.; Zhang, L.; Van Gool, L.; and Timofte, R. 2021. Plug-and-play image restoration with deep denoiser prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10): 6360–6376.
Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; and Zhang, L.
2017. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7): 3142–3155.
Zhang, K.; Zuo, W.; and Zhang, L. 2018. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Transactions on Image Processing, 27(9): 4608–4622.
Zhang, Q.; Liu, J.; Zhang, F.; Xie, J.; and Zha, Z.-J. 2023b. Hierarchical Semantic Enhancement Network for Multimodal Fake News Detection. In Proceedings of the 31st ACM International Conference on Multimedia, 3424–3433.
Zhou, P.; Han, X.; Morariu, V. I.; and Davis, L. S. 2018. Learning rich features for image manipulation detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1053–1061.
Zhu, J.-Y.; Park, T.; Isola, P.; and Efros, A. A. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, 2223–2232.
Video Frame Prediction from a Single Image and Events

Juanjuan Zhu*, Zhexiong Wan*, Yuchao Dai†
School of Electronics and Information, Northwestern Polytechnical University
{juanjuanzhu2022, wanzhexiong}@mail.nwpu.edu.cn, daiyuchao@nwpu.edu.cn

Abstract

Recently, the task of Video Frame Prediction (VFP), which predicts future video frames from previous ones through extrapolation, has made remarkable progress. However, the performance of existing VFP methods is still far from satisfactory due to the fixed-framerate videos they use: 1) they have difficulty handling complex dynamic scenes; 2) they cannot predict future frames with flexible prediction time intervals. Event cameras can record intensity changes asynchronously with a very high temporal resolution, which provides rich dynamic information about the observed scenes. In this paper, we propose to predict video frames from a single image and the following events, which can not only handle complex dynamic scenes but also predict future frames with flexible prediction time intervals. First, we introduce a symmetrical cross-modal attention augmentation module to enhance the complementary information between images and events. Second, we propose to jointly perform optical flow estimation and frame generation by combining the motion information of events and the semantic information of the image, then inpainting the holes produced by forward warping to obtain an ideal predicted frame. Based on these, we propose a lightweight pyramidal coarse-to-fine model that can predict a 720P frame within 25 ms. Extensive experiments show that our proposed model significantly outperforms the state-of-the-art frame-based and event-based VFP methods and has the fastest runtime. Code is available at https://npucvr.github.io/VFPSIE/.

Introduction

Video frame prediction (VFP) aims to predict future frames from previous frames, and it has broad applications in autonomous driving, robotics planning and weather forecasting.
Existing VFP methods usually take previous image sequences as input to predict a sequence of future frames with the same framerate as the inputs (see the simplified diagram in Fig. 1). By exploiting various network architectures, the performance of VFP has been significantly improved. However, due to the limitations of frame-based cameras in capturing complex scenes, it is still challenging to rely only on the previous frames. As shown in Fig. 4 and Fig. 5, even for the state-of-the-art VFP methods, performance deteriorates quickly as the motion complexity increases. This is mainly due to the fixed-framerate inputs, which limit the ability to capture complex dynamics beyond the sampling frequency. Another major issue is that they cannot predict future frames at flexible time intervals, i.e., consecutive in-between frames or across multiple frames. These factors limit the performance and real-world applications of existing VFP methods.

*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: Differences between VFI, event-based VFI, VFP and event-based VFP. (a) Frame Interpolation; (b) Event-based Frame Interpolation; (c) Frame Prediction; (d) Event-based Frame Prediction. VFI methods require the image (and events) after the target time t, while image-based VFP is subject to the limited complexity of motion expressed in fixed-framerate inputs. Event-based VFP can solve the above problems, and this paper focuses on the minimal setup of combining a single frame with the following events.

The event camera, as a new kind of bio-inspired sensor, can capture the asynchronous brightness change of each pixel. As the event camera captures high-speed motion with low data bandwidth and a high temporal resolution with millisecond accuracy, it provides critical and complementary information to images and is widely used in motion estimation (Ding et al. 2022; Wan et al.
2023), and object tracking (Wang et al. 2023). In this paper, we investigate a new and minimal setup for event-enhanced video frame prediction, where we predict the next frames from a single RGB image (providing contextual information) and the following events (providing rich motion information). Apparently, this task is closely related to video frame interpolation (VFI) with events, where the intermediate frames are interpolated by exploiting two frames and the events in-between. As shown in Fig. 1, subject to the interpolation formulation, these methods require image and event data after time t to generate the frame at time t. This violates the causality constraint in frame generation and limits their application in practical systems. Given this precedent of successfully introducing events to VFI, we believe that combining events can likewise significantly improve the usability of VFP in complex dynamic scenarios. In this paper, we propose a lightweight network that can predict a 720P frame within 25 ms on an RTX 2080Ti GPU. We first use two separate encoders to extract pyramid features from the input reference image and the event representation. To complementarily exploit the characteristics of image and event data, we design a symmetrical cross-modal attention module to augment these two features. Then we refine the synthesized feature and optical flow in a coarse-to-fine joint estimation manner. To resolve the holes arising from forward warping, we present an inpainting module that can repair the holes without introducing much extra computation. Finally, we adopt a weighted fusion to output the final frame prediction from the synthesized and warped frames. Thanks to the sparse events, which can be divided into multiple time segments, the training data we can use covers various motion ranges and time intervals.
Therefore, by adjusting the end times of the input events, our model can predict both high-framerate frames and frames far into the future, whereas frame-based VFP models cannot, because their predicted framerates must be consistent with the input framerates. We conduct experiments on both synthetic and real datasets, and the PSNR is improved by over 3.5 dB on GoPro compared to the state-of-the-art frame-based and event-based VFP methods, which demonstrates the effectiveness of our model for the VFP problem.

Our main contributions are summarized as follows:
1) We introduce a minimal practical configuration for introducing events into VFP, i.e., predicting future frames from a single image and events.
2) We propose a lightweight model with symmetrical cross-attention augmentation and a hole inpainting module, which can predict a future frame from a single image and events within real-time requirements.
3) Experiments on both synthetic and real-captured datasets prove the effectiveness and efficiency of our approach in predicting flexible future video frames.

Related Work

Video Frame Prediction

VFP aims to predict future frames from past frames. Existing works have exploited different architectures such as CNNs (Liu et al. 2017; Huo et al. 2020; Choi and Baji 2021), RNNs (Finn, Goodfellow, and Levine 2016; Fan, Zhu, and Yang 2019; Wang et al. 2022), GANs (Liang et al. 2017; Kwon and Park 2019; Chang et al. 2022), etc. Due to future motion uncertainty, some studies obtain predictions by estimating the distribution of future pixels, optical flow and latent space (Choi and Baji 2021; Liu et al. 2021; Chang et al. 2022). Meanwhile, some studies (Villegas et al. 2017; Gao et al. 2019) decompose the scenes into two parts to build a more accurate motion model. Despite this progress, frame-only methods still cannot handle complicated scenes for lack of motion information. Thus, semantic maps (Wu et al. 2020; Bei, Yang, and Soatto 2021) and depth maps (Qi et al.
2019) have been incorporated as additional data to alleviate this difficulty. The event-enhanced solution EDI (Pan et al. 2022), designed for simultaneous deblurring and video reconstruction via an optimization algorithm, has demonstrated the significance of combining a single image with the following events for frame prediction, but it is time-consuming and vulnerable to noise.

Image-based Frame Interpolation

Image-based VFI increases the temporal resolution of frame sequences. It can be divided into two categories: kernel-based (Niklaus, Mai, and Liu 2017a,b; Choi et al. 2020; Khalifeh et al. 2022; Shi et al. 2022) and flow-based (Jiang et al. 2018; Liu et al. 2019; Kong et al. 2022; Hu et al. 2022; Huang et al. 2022) approaches. Kernel-based methods generate latent pixels for the interpolated frames by local convolutions and can only handle a limited motion range. Flow-based VFI produces the intermediate frames by estimating optical flow and can adapt to various motion ranges. Since flow-based methods rely on linear motion assumptions, most of them cannot model complex scenes accurately. Although quadratic (Xu et al. 2019; Dutta, Subramaniam, and Mittal 2022) and cubic (Chi et al. 2020) motion models have been proposed to address these problems, these methods still suffer from performance degradation in difficult situations.

Event-based Frame Interpolation

Event-based VFI utilizes the information of the image and event stream to generate the intermediate frames. Existing methods can be divided into kernel-based (Lin et al. 2020; Zou et al. 2021; Yu et al. 2021; Zhang and Yu 2022; Kılıç, Akman, and Alatan 2023), flow-based (He et al. 2022; Wu et al. 2022) and composite (Tulyakov et al. 2021, 2022) methods. Kernel-based methods generate the latent frames with a convolutional network, while flow-based ones produce the intermediate frames by estimating optical flow.
Composite methods combine the two and inherit the merits of both. Despite their significant performance, event-based VFI suffers from the same problem as image-based VFI: it still requires frames that come after the generated frames as input, which leads to significant latency in practice.

Method

Given a single input image I_{t0} at time t0 and the following events E_{t0→tn} = {e_i}_{i=1}^{M} = {(x_i, y_i, t_i, p_i)}_{i=1}^{M}, t_i ∈ [t0, tn], where (x_i, y_i) is the position on the image plane, t_i the brightness change timestamp, p_i the polarity, and M the number of events, event-based video frame prediction aims to predict the future frames {I_{t1}, I_{t2}, ..., I_{tn}}, t0 < t1 < t2 < ... < tn, where n is the number of predicted frames. In this section, we introduce our proposed frame prediction model (see Fig. 2) from a single image and events, including event representation and feature encoding, symmetrical cross-modal attention, and the joint flow and frame decoder with inpainting.

Figure 2: Overview of our proposed network model. In our framework, we first use two encoders to extract pyramid features for the image and events. Then we apply a coarse-to-fine joint decoder to obtain the synthesized feature and optical flow at each pyramid layer. In the decoder, we utilize Symmetrical Cross-modal Attention (SCA) to augment both image and event features. We also introduce a Warping and Inpainting Module (WIM) to repair the holes caused by forward warping and obtain spatially-aligned image features. Finally, we adopt Weighted Fusion (WF) to output the final frame prediction from the synthesized and warped frames.

Event Representation and Feature Encoding

Due to the special space-time format of the event stream, we first convert the raw events E_{t0→tn} to an event voxel EV_{t0→tn} before feeding it into the subsequent models. Following (Zihao Zhu et al. 2018; Rebecq et al. 2019), we divide {e_i}_{i=1}^{M} into B temporal bins and accumulate, for each pixel position (x, y) ∈ {(x_i, y_i)}_{i=1}^{M}, the polarities weighted by their normalized timestamps in each temporal bin b ∈ [1, B]:

    E(x, y, b) = Σ_{i=1}^{M} p_i · max(0, 1 − |b − (B−1)(t_i − t0)/(tn − t0)|).    (1)

To reduce the computational cost, we apply two lightweight feature encoders to extract the pyramid features for the image and events separately. Each encoder consists of residual convolution blocks, which comprise two convolutions and the PReLU (He et al. 2015) activation. The channel numbers of the pyramid features are set to 24, 36, 54 and 72 from the shallow to the deep pyramid layers.

Symmetrical Cross-modal Attention

Due to its special perception mechanism, the event stream cannot capture motion in areas where the brightness change is negligible. By contrast, the image provides dense and rich context information, but does not encode motion. To compensate for the disadvantages of these two data sources, we introduce a cross-modal attention feature augmentation module to symmetrically enhance the context and motion features. This module is an adaptation of self-attention (Vaswani et al. 2017) and includes two symmetrical attention enhancement branches: Image-to-Event (I2E) attention and Event-to-Image (E2I) attention.
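As a concrete illustration of the voxelization in Eq. (1) above, the following NumPy sketch builds the event voxel grid. This is our own simplified version under the stated formula (bins are 0-indexed here, and the bin count B is a free parameter), not the authors' implementation:

```python
import numpy as np

def events_to_voxel(x, y, t, p, H, W, B=5):
    """Event voxel grid in the spirit of Eq. (1) (cf. Zihao Zhu et al. 2018):
    each event spreads its polarity over the temporal bins nearest to its
    normalized timestamp, with a bilinear weight in time."""
    voxel = np.zeros((B, H, W))
    # map timestamps to [0, B-1]
    t_star = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (B - 1)
    for b in range(B):
        w = np.maximum(0.0, 1.0 - np.abs(b - t_star))  # temporal bilinear weight
        # scatter-add each event's weighted polarity into bin b
        np.add.at(voxel, (np.full_like(x, b), y, x), p * w)
    return voxel
```

Because the weights over all bins sum to one for every event, the total polarity mass of the stream is preserved in the voxel, which is what makes the representation usable at arbitrary end times tn.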
Note that, unlike the attention fusion in EFNet (Sun et al. 2022), which produces one fused feature, we aim to augment each modality with the other to obtain two enhanced features, and we apply this module only at the 1st and 4th pyramid layers. As the image and event features are gradually spatially aligned by the estimated optical flow, the augmented features in the 1st layer are further combined by adding them to the original features, while the augmented features in the 4th layer are not. The I2E attention determines the importance matrix of the image feature by computing the similarity between the image feature and the event feature. The image feature is enhanced by multiplying it with the normalized weight obtained from the cross similarity:

$$\mathrm{Attention}(Q_E, K_I, V_I) = V_I \cdot \mathrm{softmax}\left(\frac{Q_E^{T} K_I}{\sqrt{d_k}}\right), \quad (2)$$

where $K_I$ and $V_I$ are the keys and values obtained from the image feature, $Q_E$ are the queries extracted from the event feature, and $d_k$ denotes the dimension of the keys. The E2I attention obtains the weight of the event feature by normalizing the similarity matrix between the event feature and the image feature. We obtain the augmented event feature by re-weighting the event feature:

$$\mathrm{Attention}(Q_I, K_E, V_E) = V_E \cdot \mathrm{softmax}\left(\frac{Q_I^{T} K_E}{\sqrt{d_k}}\right), \quad (3)$$

where $K_E$ and $V_E$ are the keys and values obtained from the event feature, $Q_I$ are the queries extracted from the image feature, and $d_k$ denotes the dimension of the keys.

Joint Flow and Frame Decoder with Inpainting

To simplify the procedure and reduce the computational cost, we apply an integrated decoder that estimates the optical flow and generates the target frame in a coarse-to-fine manner. Our model includes four pyramid layers. For the bottom layer, namely $P_4$, we input the event and image features $FI_0^4$, $FE_t^4$ to the symmetrical cross-modal attention and predict the optical flow $f_t^3$ and the synthesized frame feature $S_t^3$ using the augmented features $AI_0^4$, $AE_t^4$.
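One direction of this symmetrical attention (Eqs. (2)-(3)) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the dense projection matrices stand in for the 1×1 convolutions of the module, and the flattened `(N, C)` token layout is an assumption.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(q_src, kv_src, wq, wk, wv):
    # q_src:  (N, C) tokens of the querying modality (events for I2E).
    # kv_src: (N, C) tokens of the modality being re-weighted (image for I2E).
    # wq, wk, wv: (C, d) dense projections standing in for the 1x1 convs.
    Q = q_src @ wq
    K = kv_src @ wk
    V = kv_src @ wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # cross-modal similarity, Eqs. (2)-(3)
    return softmax(scores, axis=-1) @ V  # re-weighted (augmented) feature
```

Calling it with event tokens as `q_src` and image tokens as `kv_src` gives the I2E branch; swapping the two roles gives the E2I branch.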
For the middle layers $P_3$ and $P_2$, we first warp the extracted image feature $FI_0^l$ to the target time $t$ and obtain $W_t^l$. We then concatenate the warped feature $W_t^l$, the event feature $FE_t^l$, and the synthesized frame feature $S_t^l$ and optical flow $f_t^l$ from the previous layer as the input of the decoder. For the top layer $P_1$, we augment the event and inpainted features from the 2nd layer, feed them into the decoder, and obtain the optical flow $f_t$ and the synthesized frame $\hat{I}_t$ at the top layer. We then directly warp the input reference image $I_0$ to obtain the warped frame $\overrightarrow{I_t}$. Different from frame-based VFI methods, which perform a bi-directional check to deal with occlusions, our VFP setting faces a unique problem: handling the holes generated by forward warping. To relieve this problem, we introduce an efficient hole inpainting module that inpaints the warped frame at the top layer and the warped features at the bottom layers. First, we modify the commonly used CUDA-accelerated implementation of forward warping (Niklaus and Liu 2020) to obtain the occlusion mask $Occ$ by a fixed threshold. Then we use the synthesized frame feature $S_t^l$ to inpaint the holes:

$$W_t^l(x, y) = \begin{cases} S_t^l(x, y), & Occ(x, y) > 0, \\ \mathrm{Warp}(FI_0^l, f_t^l)(x, y), & Occ(x, y) \leq 0, \end{cases} \quad (4)$$

where $\mathrm{Warp}$ is the forward warping operator. Compared with existing methods that design a new module to inpaint the holes (Gao et al. 2019), our efficient module can inpaint the holes with the synthesized features at every pyramid layer without incurring a large computational cost. Although the holes in the warped frame $\overrightarrow{I_t}$ can be inpainted with the synthesized frame $\hat{I}_t$, the outline of a hole may be evident and affect the harmony of the final prediction. In addition, the accuracy of the synthesized frame $\hat{I}_t$ and the warped frame $\overrightarrow{I_t}$ differs across motion scenes.
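The case-wise selection of Eq. (4) amounts to a masked copy. A minimal sketch (the function name is assumed, and the already-warped feature is passed in rather than produced by a real forward-warp operator):

```python
import numpy as np

def inpaint_warped_feature(warped, synthesized, occ_mask):
    # warped, synthesized: (H, W, C) feature maps at one pyramid layer.
    # occ_mask: (H, W), > 0 where forward warping left a hole (Eq. (4)).
    hole = occ_mask > 0
    out = warped.copy()
    out[hole] = synthesized[hole]  # take S_t^l inside holes, warped feature elsewhere
    return out
```

Because the synthesized feature already exists at every pyramid layer, this inpainting adds essentially no extra computation.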
Accordingly, we propose a weighted fusion refinement module, which uses a network to learn an allocation weight $w \in [0, 1]$. Lastly, we fuse the synthesized frame $\hat{I}_t$ and the warped frame $\overrightarrow{I_t}$ with the weight $w$ to obtain the final output, i.e., the frame prediction:

$$I_t = w \cdot \overrightarrow{I_t} + (1 - w) \cdot \hat{I}_t. \quad (5)$$

Figure 3: Visualization schematics of the Inpainting Module and the Fusion Refinement Module (warped frame, inpainted frame, refined frame, and ground truth for two examples).

Fig. 3 illustrates the visual comparisons of the inpainting module and the fusion refinement module. The inpainting module can generate filling pixels that are consistent with the surrounding area in the absence of the subsequent frame, while the weighted fusion refinement module makes the borders more accurate and smooth.

Loss Function

To supervise the final predicted frames during training, we first apply our reconstruction loss, which consists of the Charbonnier loss (Charbonnier et al. 1994) $L_{Cha}$, the Census loss (Meister, Hur, and Roth 2018) $L_{Cen}$, and the LPIPS loss (Zhang et al. 2018) $L_{LPIPS}$:

$$L_{rec}(I_t) = L_{Cha}(I_t - I_{gt}) + \alpha_1 L_{Cen}(I_t - I_{gt}) + \alpha_2 L_{LPIPS}(I_t - I_{gt}), \quad (6)$$

where $L_{Cha}(x) = \sqrt{x^2 + 10^{-6}}$ and $\alpha_1 = 1.0$, $\alpha_2 = 1.0$. To ensure the quality of the inpainting padding, we also apply the reconstruction loss to the synthesized frames, i.e., $L_{rec}(\hat{I}_t)$. We adopt the pseudo optical flow generated by RAFT (Teed and Deng 2020) to supervise the optical flow using the task-oriented flow loss (Kong et al. 2022), which adjusts the loss weight dynamically and is defined as follows:

$$R = e^{-\beta \|f_t - f_p\|_2}, \qquad L_{flow} = \sum_{l=1}^{L-1} \left((f_t^l - f_p)^2 + \epsilon^2\right)^{\frac{r}{2}}, \quad (7)$$

where $\|\cdot\|$ is the L2 norm between the estimated optical flow $f_t$ and the pseudo optical flow $f_p$, $r = R(u, v)$ is the robustness weight at position $(u, v)$, $\epsilon = 10^{(10r-1)/3}$, and $\beta = 0.3$.
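Eq. (5), together with the Charbonnier penalty used inside Eq. (6), reduces to a few lines. The sketch below is illustrative, with the learned per-pixel weight passed in as a plain array rather than predicted by the refinement network:

```python
import numpy as np

def weighted_fusion(warped_frame, synthesized_frame, w):
    # Eq. (5): w is the per-pixel allocation weight in [0, 1] learned by the
    # refinement network (supplied directly here for illustration).
    w = np.clip(w, 0.0, 1.0)
    return w * warped_frame + (1.0 - w) * synthesized_frame

def charbonnier(x):
    # The Charbonnier penalty L_Cha(x) = sqrt(x^2 + 1e-6) used in Eq. (6).
    return np.sqrt(x ** 2 + 1e-6)
```

With `w = 1` the output is purely the warped frame, with `w = 0` purely the synthesized one; intermediate weights blend the two, which smooths the hole borders discussed above.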
We use the Census loss as a feature consistency loss to supervise the synthesized features as follows:

$$L_{feat} = \sum_{l=1}^{L-1} L_{Cen}(S_t^l - FI_{gt}^l), \quad (8)$$

where $FI_{gt}^l$ denotes the ground-truth feature extracted from the ground-truth frame $I_{gt}$ using the image encoder. Based on the above analysis, the final training loss is formulated as:

$$L = L_{rec}(I_t) + \lambda_1 L_{rec}(\hat{I}_t) + \lambda_2 L_{flow} + \lambda_3 L_{feat}, \quad (9)$$

where the weighting parameters are set to $\lambda_1 = 1.0$, $\lambda_2 = 0.5$, $\lambda_3 = 0.1$ in our experiments.

The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7751

| Method | Setting | Input | GoPro 7 frames | GoPro 15 frames | HS-ERGB 7 frames | Model Size (MB) | Time (s) |
| IFRNet | Frame Interpolation | 2 Images | 29.27/0.92 | 24.78/0.82 | 27.35/0.83 | 19.0 | 0.038 |
| EVDI | Frame Interpolation | 2 Images + Event | 25.13/0.75 | 22.62/0.66 | 26.10/0.77 | 1.6 | 0.200 |
| Time Lens | Frame Interpolation | 2 Images + Event | 32.66/0.94 | 29.81/0.90 | 32.12/0.86 | 454.0 | 0.290 |
| E2VID | Reconstruction | Event | 14.46/0.59 | 8.84/0.40 | - | 41.0 | 0.054 |
| HyperE2VID | Reconstruction | Event | 15.37/0.61 | 10.92/0.44 | - | 39.0 | 0.140 |
| DCEIFlow | Flow Estimation | 1 Image + Event | 26.45/0.92 | 23.36/0.85 | 26.29/0.80 | 28.0 | 0.130 |
| DCEIFlow* | Flow Estimation | 1 Image + Event | 29.21/0.93 | 26.15/0.87 | 27.87/0.83 | - | - |
| OVP | Frame Prediction | 2 Images | 26.15/0.89 | 22.90/0.68 | 25.47/0.76 | 33.0 | 327.860 |
| DMVFN | Frame Prediction | 2 Images | 25.48/0.84 | 21.46/0.73 | 27.59/0.82 | 14.0 | 0.013-0.038 |
| EDI | Frame Prediction | 1 Image + Event | 20.11/0.62 | 18.43/0.55 | 22.64/0.70 | - | - |
| Our model | Frame Prediction | 1 Image + Event | 29.73/0.93 | 27.71/0.89 | 28.07/0.83 | 8.3 | 0.024 |

Table 1: Performance comparison on the GoPro and HS-ERGB datasets. The results refer to the PSNR/SSIM metrics. * means inpainting the holes caused by forward warping with the synthesized frames generated by our model.

Experiments

Implementation Details

All experiments are conducted with PyTorch. We employ the AdamW optimizer and train for 50 epochs with batch size 4 on two NVIDIA RTX 3090 GPUs. The learning rate is decayed from 1×10^-4 to 1×10^-5 with a cosine learning rate scheduler.
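The total objective of Eq. (9) and the cosine learning-rate schedule just mentioned both reduce to one-liners. The sketch below is illustrative (function names are ours, and the individual loss terms are passed in as already-computed scalars), not the paper's training code:

```python
import math

def total_loss(l_rec_final, l_rec_synth, l_flow, l_feat,
               lam1=1.0, lam2=0.5, lam3=0.1):
    # Eq. (9) with the paper's default weights lambda_1..lambda_3.
    return l_rec_final + lam1 * l_rec_synth + lam2 * l_flow + lam3 * l_feat

def cosine_lr(step, total_steps, lr_max=1e-4, lr_min=1e-5):
    # Cosine decay of the learning rate from 1e-4 to 1e-5 over training.
    cos = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_max - lr_min) * cos
```

For example, with all four loss terms equal to 1 the total loss is 1 + 1.0 + 0.5 + 0.1 = 2.6, and the learning rate starts at 1e-4 and ends at 1e-5.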
To obtain reliable motion priors, we first pretrain our model only under the supervision of the task-oriented flow loss for the first 15 epochs, and then train the model with the full loss for the remaining 35 epochs. To augment the training data, we apply vertical and horizontal flipping with 50% probability and randomly crop 384 × 384 patches. We simulate events using the Vid2E (Gehrig et al. 2020) simulator. To enhance the model's ability to extract temporal information, we apply two training modes: predicting the target frame from the first frame, and predicting the target frame from the last prediction; the two modes are switched with 50% probability during training. Our experiments are conducted on both synthetic and real datasets. PSNR and SSIM are adopted for quantitative evaluation. Consistent with the setting of existing event-based VFI (Tulyakov et al. 2021), we pre-simulate the events of the Vimeo90k septuplet dataset (Xue et al. 2019) and the GoPro dataset (Nah, Hyun Kim, and Mu Lee 2017), then train our model on Vimeo90k and evaluate on the GoPro test set. For experiments with real-captured data, we choose the HS-ERGB dataset (Tulyakov et al. 2021) for evaluation, which records data at 1280 × 720 resolution and 160 fps and contains diverse scenes. Besides, we also perform quantitative comparisons on DSEC (Gehrig et al. 2021), an event dataset for driving scenarios.

Evaluation with Synthetic Events

We first evaluate our Vimeo90k-pretrained model on the GoPro dataset. We conduct a quantitative comparison between our method and several existing methods with different input settings in Table 1. The compared methods are state-of-the-art models with open-source code in the fields of 1) VFI with two images, i.e., IFRNet (Kong et al. 2022); 2) VFI with two images and events, i.e., Time Lens (Tulyakov et al. 2021); 3) frame reconstruction with events, i.e., E2VID (Rebecq et al. 2019) and HyperE2VID (Ercan et al.
2023); 4) flow estimation with a single image and events, obtaining the predicted frame by warping, i.e., DCEIFlow (Wan, Dai, and Mao 2022); 5) VFP with two images, i.e., OVP (Hu et al. 2022) and DMVFN (Hu et al. 2023); 6) VFP with a single image and events, i.e., EDI (Pan et al. 2022) and our model. Note that EDI is an optimization method, E2VID and HyperE2VID do not provide training code, and DCEIFlow was originally designed to estimate optical flow, so we use their publicly available parameters and model weights. As shown in Table 1, we evaluate the above methods for 7 frames and 15 frames respectively, which means that the prediction (interpolation) methods predict (interpolate) 7 and 15 following (intermediate) frames. Compared with VFI methods, we achieve competitive results under the constraint that only the first frame is input. Time Lens integrates two frames and events and achieves better results, which shows that event data is of great help in handling long-term motion compared with using images alone. Since we do not use the second frame, our results are inferior to Time Lens. Nonetheless, compared with IFRNet, which employs two frames to interpolate intermediate frames, our model achieves a PSNR improvement of up to 2.93 dB for 15-frame prediction. Compared with the VFP methods, which leverage multiple preceding images to predict the frame, our model utilizes a single frame along with events and improves the PSNR by up to 3 dB over OVP. Furthermore, we also compare the visualization results of each model in Fig. 4.
Following OVP, we present the outcomes of the 1st, 3rd and 5th frames under the 7-frames evaluation setting.

Figure 4: Visual comparisons on the GoPro dataset with synthetic events, showing event visualizations and the 1st, 3rd and 5th frames predicted by DMVFN, IFRNet, OVP, Time Lens, EDI and our model, alongside the ground truth.

| Method | Setting | PSNR (dB) | SSIM |
| DCEIFlow | Flow Estimation | 25.18 | 0.81 |
| DCEIFlow* | Flow Estimation | 25.59 | 0.82 |
| EDI | Frame Prediction | 20.59 | 0.62 |
| Our model | Frame Prediction | 26.61 | 0.85 |

Table 2: Performance comparisons on the DSEC dataset for 3-frame VFP when 1 image and events are input.

The visual comparisons show that our model can predict more accurate frames than the existing image-based VFP methods. Under the same input setting, our model also shows better performance than EDI and DCEIFlow in frame estimation. This superior performance validates the efficacy of incorporating events into VFP and of our proposed framework. In addition, we present a comprehensive report of the model size and runtime for each model in Table 1, where the runtime is measured by generating a 720P image on a 2080Ti GPU. For EVDI, we assume its efficacy is inferior to that of our model because it has too few parameters to handle intricate dynamic scenarios. Compared with DCEIFlow, we attribute our model's superior performance and efficiency to the inclusion of the inpainting module and the integrated architecture that obviates iterations. For DMVFN, the runtime ranges from 0.013s to 0.038s because of its dynamic routing mechanism, which takes longer to deal with large motion.
In summary, our proposed model exhibits the best runtime, has the second smallest model size among the competing methods, and stands out as the only approach satisfying the real-time demand.

| Ablations | Variations | PSNR | SSIM |
| Attention Augmentation | W/o Attention | 28.58 | 0.91 |
| Attention Augmentation | 4th layer | 29.32 | 0.92 |
| Attention Augmentation | 3rd layer | 29.08 | 0.92 |
| Attention Augmentation | 2nd layer | 29.15 | 0.92 |
| Attention Augmentation | 1st layer | 29.21 | 0.93 |
| Attention Augmentation | 1st and 4th layers | 30.80 | 0.95 |
| Flow Estimation | Backward Flow | 30.84 | 0.94 |
| Flow Estimation | Forward Flow & W/o Inpainting | 26.14 | 0.91 |
| Flow Estimation | Forward Flow & W/ Inpainting | 30.80 | 0.95 |
| Loss Function | W/o Flow Loss | 29.75 | 0.93 |
| Loss Function | W/o Feature Loss | 28.45 | 0.90 |
| Loss Function | W/o Charbonnier | 28.69 | 0.89 |
| Loss Function | W/o LPIPS | 30.52 | 0.93 |
| Loss Function | Full Losses | 30.80 | 0.95 |
| $\hat{I}_t$ Estimation Target | Residual Intensity | 30.83 | 0.95 |
| $\hat{I}_t$ Estimation Target | Absolute Intensity | 30.80 | 0.95 |
| Training Mechanism | W/o flow pretrain | 29.80 | 0.93 |
| Training Mechanism | W/ flow pretrain | 30.80 | 0.95 |

Table 3: Ablation studies on attention augmentation, flow estimation, loss function, $\hat{I}_t$ estimation target and training mechanism.

Evaluation with Real-captured Events

We conduct experiments on the HS-ERGB dataset in Table 1. The reported PSNR and SSIM results are averaged over the two subsets. Compared with VFI methods, our proposed model performs worse only than Time Lens, owing to the lack of event and image information after $t$. Compared with VFP methods, our model outperforms all of them, which is consistent with the observation on GoPro and confirms our model's superiority.

Figure 5: Visual comparisons on the real-captured HS-ERGB dataset, showing event visualizations and the 1st, 3rd and 5th frames predicted by IFRNet, DMVFN, OVP, Time Lens, EDI and our model, alongside the ground truth.
Visual comparisons in Fig. 5 also verify this observation. In Table 2, we perform quantitative comparisons on the real-captured DSEC dataset collected in driving scenarios, which further illustrates our model's ability to generalize VFP to practical, complex scenarios.

Ablation Studies

To verify the contribution of each module, we conduct ablations in Table 3 from five aspects: attention augmentation, flow estimation, loss function, estimation target and training mechanism. Due to the large data volume of the Vimeo90k dataset, we train these ablation models on the GoPro training set for 100 epochs and evaluate them on the test set for comparison.

Attention Augmentation. To verify the effectiveness of our cross-modal attention augmentation, we first remove the attention module, resulting in a decrease of over 2 dB in PSNR. We then apply it to each of the four pyramid layers separately. From Table 3, the attention mechanism contributes most at the first and fourth layers. We therefore strike a balance between computational cost and performance and apply the augmentation module to the first and fourth layers.

Flow Estimation. We conduct experiments on flow estimation; the results indicate that the model estimating forward flow with the inpainting module achieves higher SSIM, while the model estimating backward flow has higher PSNR. Considering the ghosting effect introduced by backward warping, we ultimately select the former approach.

Loss Function. To evaluate the contributions of the task-oriented flow loss, the feature loss and the reconstruction loss, we conduct experiments in which we train the model without each of them separately. Table 3 shows that the model's performance decreases significantly without any of them.

Estimation Target. Since the initial frame is provided, it is intuitive to estimate the residual intensity instead of the absolute intensity, which Table 3 also confirms.
However, for consistency with existing event-based methods, we still use the absolute intensity as the target of $\hat{I}_t$ in our model.

Training Mechanism. Since jointly learning optical flow and frames is a "chicken-and-egg" problem, we employ a two-stage training approach. This strategy yields a 1.0 dB PSNR improvement.

Conclusion

In this paper, we have studied the problem of video frame prediction (VFP) from a single RGB image and the following events. By introducing events to VFP, we can achieve flexible frame prediction for complex dynamic scenes, where the temporal interval between the predicted frames can be long or short. With our proposed network, we significantly exceed the performance of existing VFP methods and meet the requirements of real-time frame prediction. We believe that event-based VFP, which combines events with images, is more practical than image-based VFI and VFP.

Acknowledgements

All authors are also with the Shaanxi Key Laboratory of Information Acquisition and Processing. This research was supported in part by the National Natural Science Foundation of China (62271410, 62001394), Zhejiang Lab (No. 2021MC0AB05), the Fundamental Research Funds for the Central Universities, and the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University (CX2023013).

References

Bei, X.; Yang, Y.; and Soatto, S. 2021. Learning semantic-aware dynamics for video prediction. In CVPR, 902–912.
Chang, Z.; Zhang, X.; Wang, S.; Ma, S.; and Gao, W. 2022. STRPM: A spatiotemporal residual predictive model for high-resolution video prediction. In CVPR, 13946–13955.
Charbonnier, P.; Blanc-Feraud, L.; Aubert, G.; and Barlaud, M. 1994. Two deterministic half-quadratic regularization algorithms for computed imaging. In CVPR, 168–172.
Chi, Z.; Mohammadi Nasiri, R.; Liu, Z.; Lu, J.; Tang, J.; and Plataniotis, K. N. 2020.
All at once: Temporally adaptive multi-frame interpolation with advanced motion modeling. In ECCV, 107–123.
Choi, H.; and Bajić, I. V. 2021. Affine transformation-based deep frame prediction. TIP, 30: 3321–3334.
Choi, M.; Kim, H.; Han, B.; Xu, N.; and Lee, K. M. 2020. Channel attention is all you need for video frame interpolation. In AAAI, 10663–10671.
Ding, Z.; Zhao, R.; Zhang, J.; Gao, T.; Xiong, R.; Yu, Z.; and Huang, T. 2022. Spatio-temporal recurrent networks for event-based optical flow estimation. In AAAI, 525–533.
Dutta, S.; Subramaniam, A.; and Mittal, A. 2022. Non-linear motion estimation for video frame interpolation using space-time convolutions. In CVPR, 1726–1731.
Ercan, B.; Eker, O.; Saglam, C.; Erdem, A.; and Erdem, E. 2023. HyperE2VID: Improving event-based video reconstruction via hypernetworks. arXiv preprint arXiv:2305.06382.
Fan, H.; Zhu, L.; and Yang, Y. 2019. Cubic LSTMs for video prediction. In AAAI, 8263–8270.
Finn, C.; Goodfellow, I.; and Levine, S. 2016. Unsupervised learning for physical interaction through video prediction. In NeurIPS, 64–72.
Gao, H.; Xu, H.; Cai, Q.-Z.; Wang, R.; Yu, F.; and Darrell, T. 2019. Disentangling propagation and generation for video prediction. In ICCV, 9006–9015.
Gehrig, D.; Gehrig, M.; Hidalgo-Carrió, J.; and Scaramuzza, D. 2020. Video to events: Recycling video datasets for event cameras. In CVPR, 3586–3595.
Gehrig, M.; Aarents, W.; Gehrig, D.; and Scaramuzza, D. 2021. DSEC: A stereo event camera dataset for driving scenarios. RA-L, 6(3): 4947–4954.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 1026–1034.
He, W.; You, K.; Qiao, Z.; Jia, X.; Zhang, Z.; Wang, W.; Lu, H.; Wang, Y.; and Liao, J. 2022. TimeReplayer: Unlocking the potential of event cameras for video interpolation. In CVPR, 17804–17813.
Hu, P.; Niklaus, S.; Sclaroff, S.; and Saenko, K. 2022.
Many-to-many splatting for efficient video frame interpolation. In CVPR, 3553–3562.
Hu, X.; Huang, Z.; Huang, A.; Xu, J.; and Zhou, S. 2023. A dynamic multi-scale voxel flow network for video prediction. In CVPR, 6121–6131.
Huang, Z.; Zhang, T.; Heng, W.; Shi, B.; and Zhou, S. 2022. Real-time intermediate flow estimation for video frame interpolation. In ECCV, 624–642.
Huo, S.; Liu, D.; Li, B.; Ma, S.; Wu, F.; and Gao, W. 2020. Deep network-based frame extrapolation with reference frame alignment. TCSVT, 31(3): 1178–1192.
Jiang, H.; Sun, D.; Jampani, V.; Yang, M.-H.; Learned-Miller, E.; and Kautz, J. 2018. Super SloMo: High quality estimation of multiple intermediate frames for video interpolation. In CVPR, 9000–9008.
Khalifeh, I.; Blanch, M. G.; Izquierdo, E.; and Mrak, M. 2022. Multi-encoder network for parameter reduction of a kernel-based interpolation architecture. In CVPR, 725–734.
Kılıç, O. S.; Akman, A.; and Alatan, A. A. 2023. E-VFIA: Event-based video frame interpolation with attention. In ICRA, 8284–8290.
Kong, L.; Jiang, B.; Luo, D.; Chu, W.; Huang, X.; Tai, Y.; Wang, C.; and Yang, J. 2022. IFRNet: Intermediate feature refine network for efficient frame interpolation. In CVPR, 1969–1978.
Kwon, Y.-H.; and Park, M.-G. 2019. Predicting future frames using retrospective cycle GAN. In CVPR, 1811–1820.
Liang, X.; Lee, L.; Dai, W.; and Xing, E. P. 2017. Dual motion GAN for future-flow embedded video prediction. In ICCV, 1744–1752.
Lin, S.; Zhang, J.; Pan, J.; Jiang, Z.; Zou, D.; Wang, Y.; Chen, J.; and Ren, J. 2020. Learning event-driven video deblurring and interpolation. In ECCV, 695–710.
Liu, Y.-L.; Liao, Y.-T.; Lin, Y.-Y.; and Chuang, Y.-Y. 2019. Deep video frame interpolation using cyclic frame generation. In AAAI, 8794–8802.
Liu, Z.; Nie, Y.; Long, C.; Zhang, Q.; and Li, G. 2021. A hybrid video anomaly detection framework via memory-augmented flow reconstruction and flow-guided frame prediction. In ICCV, 13588–13597.
Liu, Z.; Yeh, R.
A.; Tang, X.; Liu, Y.; and Agarwala, A. 2017. Video frame synthesis using deep voxel flow. In ICCV, 4463–4471.
Meister, S.; Hur, J.; and Roth, S. 2018. UnFlow: Unsupervised learning of optical flow with a bidirectional census loss. In AAAI, 7251–7259.
Nah, S.; Hyun Kim, T.; and Mu Lee, K. 2017. Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR, 3883–3891.
Niklaus, S.; and Liu, F. 2020. Softmax splatting for video frame interpolation. In CVPR, 5437–5446.
Niklaus, S.; Mai, L.; and Liu, F. 2017a. Video frame interpolation via adaptive convolution. In CVPR, 670–679.
Niklaus, S.; Mai, L.; and Liu, F. 2017b. Video frame interpolation via adaptive separable convolution. In ICCV, 261–270.
Pan, L.; Hartley, R.; Scheerlinck, C.; Liu, M.; Yu, X.; and Dai, Y. 2022. High frame rate video reconstruction based on an event camera. TPAMI, 44(5): 2519–2533.
Qi, X.; Liu, Z.; Chen, Q.; and Jia, J. 2019. 3D motion decomposition for RGBD future dynamic scene synthesis. In CVPR, 7673–7682.
Rebecq, H.; Ranftl, R.; Koltun, V.; and Scaramuzza, D. 2019. Events-to-video: Bringing modern computer vision to event cameras. In CVPR, 3857–3866.
Shi, Z.; Xu, X.; Liu, X.; Chen, J.; and Yang, M.-H. 2022. Video frame interpolation transformer. In CVPR, 17482–17491.
Sun, L.; Sakaridis, C.; Liang, J.; Jiang, Q.; Yang, K.; Sun, P.; Ye, Y.; Wang, K.; and Van Gool, L. 2022. Event-based fusion for motion deblurring with cross-modal attention. In ECCV, 412–428.
Teed, Z.; and Deng, J. 2020. RAFT: Recurrent all-pairs field transforms for optical flow. In ECCV, 402–419.
Tulyakov, S.; Bochicchio, A.; Gehrig, D.; Georgoulis, S.; Li, Y.; and Scaramuzza, D. 2022. Time Lens++: Event-based frame interpolation with parametric non-linear flow and multi-scale fusion. In CVPR, 17755–17764.
Tulyakov, S.; Gehrig, D.; Georgoulis, S.; Erbach, J.; Gehrig, M.; Li, Y.; and Scaramuzza, D. 2021.
Time Lens: Event-based video frame interpolation. In CVPR, 16155–16164.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In NeurIPS, 6000–6010.
Villegas, R.; Yang, J.; Hong, S.; Lin, X.; and Lee, H. 2017. Decomposing motion and content for natural video sequence prediction. In ICLR.
Wan, Z.; Dai, Y.; and Mao, Y. 2022. Learning dense and continuous optical flow from an event camera. TIP, 31: 7237–7251.
Wan, Z.; Mao, Y.; Zhang, J.; and Dai, Y. 2023. RPEFlow: Multimodal fusion of RGB-PointCloud-Event for joint optical flow and scene flow estimation. In ICCV, 10030–10040.
Wang, D.; Jia, X.; Zhang, Y.; Zhang, X.; Wang, Y.; Zhang, Z.; Wang, D.; and Lu, H. 2023. Dual memory aggregation network for event-based object detection with learnable representation. In AAAI, 2492–2500.
Wang, Y.; Wu, H.; Zhang, J.; Gao, Z.; Wang, J.; Yu, P.; and Long, M. 2022. PredRNN: A recurrent neural network for spatiotemporal predictive learning. TPAMI, 45(2): 2208–2225.
Wu, S.; You, K.; He, W.; Yang, C.; Tian, Y.; Wang, Y.; Zhang, Z.; and Liao, J. 2022. Video interpolation by event-driven anisotropic adjustment of optical flow. In ECCV, 267–283.
Wu, Y.; Gao, R.; Park, J.; and Chen, Q. 2020. Future video synthesis with object motion prediction. In CVPR, 5539–5548.
Xu, X.; Siyao, L.; Sun, W.; Yin, Q.; and Yang, M.-H. 2019. Quadratic video interpolation. In NeurIPS, 1647–1656.
Xue, T.; Chen, B.; Wu, J.; Wei, D.; and Freeman, W. T. 2019. Video enhancement with task-oriented flow. IJCV, 1106–1125.
Yu, Z.; Zhang, Y.; Liu, D.; Zou, D.; Chen, X.; Liu, Y.; and Ren, J. S. 2021. Training weakly supervised video frame interpolation with events. In ICCV, 14589–14598.
Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 586–595.
Zhang, X.; and Yu, L. 2022.
Unifying motion deblurring and frame interpolation with events. In CVPR, 17765–17774.
Zihao Zhu, A.; Yuan, L.; Chaney, K.; and Daniilidis, K. 2018. Unsupervised event-based optical flow using motion compensation. In ECCV Workshops, 711–714.
Zou, Y.; Zheng, Y.; Takatani, T.; and Fu, Y. 2021. Learning to reconstruct high speed and high dynamic range videos from events. In CVPR, 2024–2033.
Finding Visual Saliency in Continuous Spike Stream Lin Zhu1, Xianzhang Chen1, Xiao Wang3, Hua Huang2,1,* 1 School of Computer Science and Technology, Beijing Institute of Technology, China 2 School of Artificial Intelligence, Beijing Normal University, China 3 School of Computer Science and Technology, Anhui University, China {linzhu,xianzhangchen}@bit.edu.cn, wangxiaocvpr@foxmail.com, huahuang@bnu.edu.cn Abstract As a bio-inspired vision sensor, the spike camera emulates the operational principles of the fovea, a compact retinal region, by employing spike discharges to encode the accumulation of per-pixel luminance intensity. Leveraging its high temporal resolution and bio-inspired neuromorphic design, the spike camera holds significant promise for advancing computer vision applications. Saliency detection mimics the behavior of human beings and captures the most salient region from the scenes. In this paper, we investigate the visual saliency in the continuous spike stream for the first time. To effectively process the binary spike stream, we propose a Recurrent Spiking Transformer (RST) framework, which is based on a full spiking neural network. Our framework enables the extraction of spatio-temporal features from the continuous spatio-temporal spike stream while maintaining low power consumption. To facilitate the training and validation of our proposed model, we build a comprehensive real-world spike-based visual saliency dataset, enriched with numerous light conditions. Extensive experiments demonstrate the superior performance of our Recurrent Spiking Transformer framework in comparison to other spike neural network-based methods. Our framework exhibits a substantial margin of improvement in capturing and highlighting visual saliency in the spike stream, which not only provides a new perspective for spike-based saliency segmentation but also shows a new paradigm for full SNN-based transformer models. 
The code and dataset are available at https://github.com/BIT-Vision/SVS.

Introduction

The human visual system (HVS) possesses an extraordinary ability to swiftly identify and focus on visually distinct and prominent objects or regions within images or scenes (Borji, Sihite, and Itti 2012). This remarkable process has inspired advancements in computer vision, particularly in saliency detection, which aims to identify objects or areas of significance carrying valuable information in images or videos (Wu et al. 2019; Fan et al. 2019). As a burgeoning field, saliency detection has attracted the attention of researchers across various disciplines. Central to this pursuit is the detection of salient objects, a process often referred to as saliency detection or salient-object detection.

*Corresponding author: Hua Huang. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: The motivation of detecting visual saliency in a continuous spike stream, from fovea-like sampling in the spike camera to binary spikes processed by the Recurrent Spiking Transformer. In contrast to ANNs, SNNs provide a biologically realistic model where neurons communicate through discrete spikes, making them well-suited for processing spike data with low power consumption.

This involves locating and isolating objects from their backgrounds, leading to the development of numerous models that excel in traditional image modalities (Wang et al. 2021). However, the sensing mechanism of human vision (Sinha et al. 2017) diverges from the standard digital camera paradigm. Human vision lacks the concept of frames or discrete pictures, and its mechanism is considerably intricate. Nonetheless, cues and inspiration can be drawn from the structure and signal processing within the human retina.
Researchers have designed spiking image sensors that mimic the behavior of integrate-and-fire neurons, operating asynchronously (Culurciello, Etienne-Cummings, and Boahen 2003; Shoushun and Bermak 2007; Zhu et al. 2019; Dong, Huang, and Tian 2017; Zhu et al. 2020). These sensors, in contrast to conventional cameras with fixed integration times, enable each pixel to determine its optimal integration time. Consequently, these spiking image sensors facilitate the reconstruction of visual textures without adhering to the constraints of frames. A recent advancement in this domain is the spike camera (Dong, Huang, and Tian 2017; Zhu et al. 2019), which adopts a fovea-like sampling method (FSM) and mirrors the structure and functionality of the primate fovea. Unlike dynamic vision sensors based on temporal contrast sampling (Lichtsteiner, Posch, and Delbruck 2008), the spike camera incorporates both spatial (250×400) and temporal (20,000 Hz) resolution, merging visual reconstruction and motion sensitivity to effectively handle high-speed vision tasks. In this paper, we delve into the field of visual saliency within continuous spike streams. Contrary to traditional image modalities, visual saliency is encoded within binary spike streams in the spatio-temporal domain. Given the 20,000 Hz sampling rate of the spike camera, effectively processing the continuous spike stream presents a challenge. This leads us to a key question: "How can visual saliency be detected from a continuous spike stream while minimizing power consumption?" The potential lies in synergizing continuous spike streams with low-power spiking neural networks (SNNs). Compared to artificial neural networks (ANNs), SNNs offer a more biologically realistic model, with neurons communicating via discrete spikes rather than continuous activation.
However, existing SNN research has predominantly centered on tasks such as classification, optical flow estimation, motion segmentation, and angular velocity regression, often utilizing traditional or event cameras (Fang et al. 2021; Lee et al. 2020; Zhu et al. 2022a,b). To the best of our knowledge, this work pioneers the exploration of visual saliency within continuous spike streams captured by the spike camera. The motivation is shown in Fig. 1. To effectively process binary spike streams, we present the Recurrent Spiking Transformer (RST) framework, a full spiking neural network architecture. Our framework comprises spike-based spatio-temporal feature extraction, recurrent feature aggregation, multi-scale refinement, and a multi-step loss. To facilitate model training and validation, we have constructed an extensive real-world spike-based visual saliency dataset, enriched with diverse lighting conditions. Our contributions can be summarized as follows:

• We investigate visual saliency within continuous spike streams captured by the spike camera for the first time. To effectively process the binary spike stream, we propose a Recurrent Spiking Transformer (RST) framework, which is based on a full spiking neural network.

• We propose a recurrent feature aggregation structure to enhance the temporal property of the spiking transformer. Moreover, a multi-step loss is designed to better utilize the temporal information of the spike stream.

• We build a novel dataset consisting of spike streams and per-object masks. Extensive experimental results on our real-world dataset demonstrate the effectiveness of our network. Our dataset will be available to the research community for further investigation and exploration.

Related Work

Visual Saliency in Traditional Images

Salient object detection is an active research field in computer vision, which plays an important role in object segmentation and detection tasks.
Depending on the detection target, this field can be divided into various sub-tasks. Traditional RGB (Wu et al. 2019; Chen et al. 2020) and RGB-D (Ji et al. 2021; Fu et al. 2021) methods aim to find salient objects in complex scenes through color and depth information. CoSOD (Zhang et al. 2021a; Su et al. 2023) detects co-salient objects across a group of images. VSOD (Fan et al. 2019; Yan et al. 2019; Zhang et al. 2021b; Dosovitskiy et al. 2020a) pays more attention to spatio-temporal features, which are helpful for detecting salient objects in continuous images.

Figure 2: Visual saliency in the spatio-temporal spike stream (panels show light intensity scales S = 0.071 and S = 0.011 over time).

Neuromorphic Camera Applications

Neuromorphic cameras, such as event cameras (Serrano-Gotarredona and Linares-Barranco 2013; Brandli et al. 2014) and spike cameras (Dong, Huang, and Tian 2017; Zhu et al. 2019), which capture the change or accumulation of light intensity, have been widely used in computer vision applications (Xiang et al. 2021; Wang et al. 2022; Dong et al. 2019; Gu et al. 2023). For example, E2vid (Rebecq et al. 2019) applies ConvLSTM (Shi et al. 2015) to extract spatio-temporal features from event streams for video reconstruction. EV-IMO (Mitrokhin et al. 2019) uses the continuous property of events to solve the motion segmentation task. Spike2Flow (Zhao et al. 2022) and SCFlow (Hu et al. 2022) use spike data to estimate optical flow in scenes of different speeds. RSIR (Zhu et al. 2023) is designed to reduce noise under general illumination for spike-based image reconstruction.

Spiking Neural Networks for Vision Tasks

Based on their capability to simulate neuron dynamics, spiking neural networks have been used for many vision tasks. Spiking-YOLO (Kim et al. 2020) trains an ANN model and accumulates spikes over multiple steps to imitate the ANN features, an approach widely used for SNN training. SEW-ResNet (Fang et al.
2021) and Spikformer (Zhou et al. 2022) directly train SNN models for image classification. Spike-FlowNet (Lee et al. 2020) and XLIF-FireNet (Hagenaars, Paredes-Vallés, and De Croon 2021) apply SNNs to the optical flow estimation task. Spiking Deeplab (Kim, Chough, and Panda 2022) is designed to generate dense predictions for semantic segmentation. EVSNN (Zhu et al. 2022a) uses the temporal information of the neuron membrane potential to reconstruct continuous event-based video frames. Inspired by the temporal property and low energy consumption of spiking neural networks, we build a recurrent spiking transformer architecture to extract spatio-temporal spiking-based features, which facilitates the detection of visual saliency in a continuous spike stream.

Visual Saliency in Continuous Spike Stream

In this section, we first analyze the sampling principle of spike cameras. Based on the characteristics of spike data, we further analyze the visual saliency in spike data and construct a spike-based visual saliency (SVS) dataset.

Figure 3: Samples in our spike-based visual saliency (SVS) dataset (reference scenes and labels over time).

Spike Sampling Mechanism

In a spike camera, the photoreceptor converts the intensity of light into voltage (Dong, Huang, and Tian 2017; Zhu et al. 2019). When the accumulated voltage surpasses a predetermined threshold ϕ, i.e., ∫ I dt ≥ ϕ, a one-bit spike is generated, simultaneously triggering a signal to reset the integrator. This process is quite similar to the integrate-and-fire neuron. Distinct luminance stimuli I result in varying spike firing rates, where the output and reset operations occur asynchronously across pixels. As a general trend, greater light intensity corresponds to higher firing speeds. The raw data captured by the spike camera takes the form of a three-dimensional spike array denoted as D.
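The integrate-and-fire sampling rule above can be sketched as a toy per-pixel simulation: light intensity is accumulated each timestep, and once the integral reaches the threshold ϕ a "1" is emitted and the integrator resets. The threshold and intensity values below are illustrative, not the camera's actual specifications.

```python
# Toy simulation of one spike-camera pixel: integrate intensity until
# the threshold phi is reached, emit a binary "1", and reset.
def sample_pixel(intensity, steps, phi=1.0):
    acc, stream = 0.0, []
    for _ in range(steps):
        acc += intensity        # integrate incoming light
        if acc >= phi:          # threshold reached -> fire a one-bit spike
            stream.append(1)
            acc -= phi          # reset the integrator (carrying overshoot)
        else:
            stream.append(0)
    return stream

bright = sample_pixel(0.5, 20)    # high light intensity
dark   = sample_pixel(0.125, 20)  # low light intensity
# Brighter light fills the integrator faster, so the firing rate is higher
# and the inter-spike interval is shorter.
print(sum(bright), sum(dark))
```

With these values the bright pixel fires every 2 steps and the dark pixel every 8, matching the trend that greater intensity yields higher firing speed.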
The spike camera’s primary focus lies in integrating luminance intensity and emitting spikes at an exceptionally high frequency (20,000 Hz). During each sampling timestep, if a spike has just been discharged, a digital signal of “1” (indicating a spike) is produced; otherwise, a signal of “0” is generated.

Spatio-temporal Analysis on Spike Visual Saliency

Salient object detection (SOD) is a task that segments the regions or objects of greatest interest to human vision from the scene. Spike cameras record scenes by accumulating intensity and generate sparse spike data, so spike-based visual saliency is closer to the biological principles of the human eye. In a spike camera, when the firing threshold ϕ is reached, the integrator is reset and a spike emission is triggered. The time it takes for the integrator to fill from empty to capacity is not fixed due to fluctuations in light conditions. At a microscopic level, the act of firing a spike corresponds to the recording of a consistent number of photons. Different from conventional SOD using standard cameras, the visual saliency within the continuous spike stream is hidden within the binary spikes in the spatio-temporal domain. As depicted in Fig. 2, given the binary nature of the spike stream, extracting saliency regions at specific time points necessitates simultaneous consideration of spatial and temporal factors.

Spike-based Visual Saliency Dataset

Datasets play an important role in the development of new algorithms and models. In this paper, we construct the first spike-based visual saliency (SVS) dataset. We use a spike camera (spatial resolution of 250×400 and temporal resolution of 20,000 Hz) to collect real-world spike data, which includes scenes of different light intensities.
We use the average Light Intensity Scale (LIS) to split the high- and low-intensity scenes. The LIS is defined as:

LIS = M / (H × W),   (1)

where M is the number of spikes in a frame and H and W are the height and width of the camera sensor. The details of the dataset are listed in Table 1.

                     Train           Val.       Total
                   high    low    high    low
Seq. num.            24     76       8     22     130
Spikes num.        8.7B  10.3B    2.9B   3.1B     25B
Mean spikes       0.36B  0.13B   0.37B  0.14B   0.19B
Mean LIS          0.045  0.017   0.046  0.018   0.031
Mean objs.          2.6    1.8     2.9    2.2     2.1
Mean size (pixels) 7891   7163    4667   8964    7449

Table 1: The statistics of the SVS dataset.

Our dataset comprises 130 spike sequences, each of which is divided into 200 subsequences. Initially, we employ a spike-based reconstruction method (Zhu et al. 2019) to reconstruct textures, and we annotate salient objects on them with instance labels. To facilitate training and evaluation, we partition the dataset into a training set and a validation set. The training set contains 100 sequences with 20,000 annotated frames, while the validation set consists of 30 sequences with 6,000 annotated frames. The annotated frames have a time interval of 20 ms, which corresponds to 400 spike frames. For visual reference, example annotations from our dataset can be seen in Fig. 3. Within our dataset, we offer spike frames, reference scenes, and object masks, all of which are accessible to the research community.

Learning to Detect Visual Saliency via Spiking Neural Network

Preliminary: Spiking Neural Network

1) Spiking Neuron. Different from traditional ANN models, which use a weighted sum of inputs to generate continuous values, SNN models transmit discrete spikes by combining the weighted sum of inputs with the membrane potential of the spiking neuron. If the membrane potential reaches a threshold V_th, the neuron emits a spike S_t ∈ {0, 1} through a Heaviside step function Θ(·) to its subsequent neurons. In this paper, we use the Leaky Integrate-and-Fire (LIF) model (Gerstner et al.
2014), which is a widely used neuron model in SNNs, as our basic computing unit. The dynamics of the LIF neuron are described as:

H_t = V_{t−1} + (1/τ) · (X_t − (V_{t−1} − V_reset)),   (2)
S_t = Θ(H_t − V_th),   (3)
V_t = H_t · (1 − S_t) + V_reset · S_t,   (4)

where X_t denotes the input to the neuron at time t, τ is the membrane time constant, H_t is the membrane potential after the neuronal dynamics at t, and V_t represents the membrane potential after emitting a spike.

Figure 4: The framework of our Recurrent Spiking Transformer (RST). Our recurrent spiking Transformer is a full spiking neural network architecture, which comprises spike-based spatio-temporal feature extraction, recurrent feature aggregation, multi-scale refinement, and multi-step loss.

2) Spiking Transformer Block. Spikformer (Zhou et al. 2022) introduces Spiking Self-Attention (SSA) in SNNs and applies it to the classification task. SSA replaces the nonlinearity in each layer with an LIF neuron so that spike sequences are emitted. Considering the properties of SNNs, SSA removes the softmax operation on the attention matrix and uses a scaling factor s to constrain its large values. Given a feature input X ∈ R^{B×N×C}, SSA uses learnable matrices W_Q, W_K, W_V ∈ R^{C×C} and spike neurons SN_Q, SN_K, SN_V to compute the query (Q), key (K), and value (V):

Q = SN_Q(BN(X W_Q)), K = SN_K(BN(X W_K)), V = SN_V(BN(X W_V)),   (5)

where BN(·) is the Batch Normalization operation and Q, K, V ∈ R^{B×N×C}. Then SSA can be computed as:

SSA′(Q, K, V) = SN(Q K^T V · s), SSA(Q, K, V) = SN(BN(Linear(SSA′(Q, K, V)))).
(6)

We notice that Spikformer uses the same operations as the traditional Transformer encoder after computing self-attention. Element-wise addition is used between each SSA layer and its output O, which means that O ∉ {0, 1} is no longer a binary spike sequence. To solve this problem and adapt SSA for our task, we propose a Recurrent Spiking Transformer (RST) that facilitates complete binary spike communication while enhancing the extraction of temporal information.

Temporal Spike Representation. To effectively leverage the temporal information of the spike stream, we employ the inter-spike interval as the spike representation. The intensity is directly correlated with the spike count or spike frequency. Consequently, by utilizing the inter-spike intervals or simply tallying the spikes over a specific period, the temporal information in the scene can be comprehensively represented:

S = C / Δt_{x,y},   (7)

where C denotes the maximum grayscale value and Δt_{x,y} is the spike firing interval at pixel (x, y).

Spike-based Spatio-temporal Feature Extraction. Inspired by the temporal property of SNNs, we use spiking neurons to extract spike-based spatio-temporal features. The SNN needs multiple recurrent steps to obtain rich features, so given a temporal spike representation S ∈ R^{C×H×W}, we first repeat S for T steps as S′ ∈ R^{T×C×H×W} and use a CBS module (i.e., Conv + BN + Spiking Neuron) to generate multi-scale features for each step in parallel. The CBS module consists of a 2D convolution layer (stride 1, kernel size 3), a BatchNorm layer, an LIF neuron, and a max pooling layer (stride 2):

F = MP(SN(BN(Conv2d(S′)))).   (8)

Similar to traditional SOD models, we use a 4-block module to extract features F_i ∈ R^{T × D/2^{4−i} × H/2^i × W/2^i}, where i ∈ [1, 4] and D is the dimension of the Recurrent Spiking Transformer (RST). The vanilla Vision Transformer (Dosovitskiy et al.
2020b) usually adds position embeddings for image patches, but the spike-based feature has a natural representation of the salient area, so we simply use the identity feature and flatten the last feature F_4 ∈ R^{T×D×H/16×W/16} as the input E ∈ R^{T×D×N} of our RST module. This not only exploits the property of the spike-based feature but also keeps spiking propagation between modules.

Figure 5: Recurrent mode of our RFA module. RFA uses an attention mechanism to aggregate the adjacent step features E_t and E_{t+1}, which enhances the feature and generates a better saliency map at step t.

Recurrent Feature Aggregation (RFA) via Spiking Transformer. The temporal property of an SNN depends on the accumulation of the membrane potential V, so using only V_{t−1} to generate E_t may yield sparse features at early steps. Because we extract features for all steps in parallel, the feature E can be split as [E_1, E_2, ..., E_T], where E_t ∈ R^{1×D×N} is the feature at step t, so we use E_{t+1} to enhance the current feature. At step t, the recurrent spiking transformer block receives E_t in the query branch and E_{t+1} in the key and value branches to calculate Q_t, K_t, V_t ∈ R^{1×D×N}:

Q_t = SN_Q(BN(E_t W_Q)), K_t = SN_K(BN(E_{t+1} W_K)), V_t = SN_V(BN(E_{t+1} W_V)).   (9)

Then we reshape the features as Q′_t, K′_t, V′_t ∈ R^{1×n×N×(D/n)} and calculate multi-head attention between adjacent steps:

A_{Q′_t} = SN(Q′_t K′_t^T V′_t · s),   (10)

where n is the number of attention heads, s = √(n/D), and A_{Q′_t} is reshaped back to A_{Q_t} ∈ R^{1×D×N}. Then we use a Linear layer to project the feature and a residual element-or connection to select the salient area:

Z_t = E_t ∨ SN(BN(Linear(A_{Q_t}))).   (11)

The feature Z_t is sent to an MLP module consisting of two CBS blocks to obtain the output O_t ∈ R^{1×D×N}:

O_t = E_t ∨ MLP(Z_t).   (12)

Finally, we apply this operation for each step feature in parallel.
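The adjacent-step aggregation of Eqs. (9)-(12) can be sketched in plain NumPy. This is a simplified single-head toy: a fixed threshold stands in for the LIF spike neuron, BatchNorm, the Linear projection, and the two-CBS MLP are folded away or omitted, the scale s is illustrative, and all weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 8, 16                       # feature dimension and number of tokens

def spike(x, thresh=0.5):
    """Heaviside stand-in for the LIF neuron: emit binary spikes."""
    return (x >= thresh).astype(np.float32)

def rfa_step(e_t, e_next, w_q, w_k, w_v, s):
    q = spike(e_t @ w_q)           # query branch: current step E_t
    k = spike(e_next @ w_k)        # key branch: next step E_{t+1}
    v = spike(e_next @ w_v)        # value branch: next step E_{t+1}
    a = spike(q @ k.T @ v * s)     # spiking attention, no softmax
    # element-OR residual keeps the feature strictly binary
    return np.logical_or(e_t, a).astype(np.float32)

e_t    = spike(rng.random((N, D)))         # binary step-t feature
e_next = spike(rng.random((N, D)))         # binary step-(t+1) feature
w_q, w_k, w_v = (rng.random((D, D)) / D for _ in range(3))
z_t = rfa_step(e_t, e_next, w_q, w_k, w_v, s=1.0 / np.sqrt(D))
print(z_t.shape)
```

The OR residual guarantees the output stays in {0, 1} and never suppresses a spike already present in E_t, which is the property the paper relies on for full binary spike communication.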
For the final step T, the identity feature is used for each branch. After that, we concatenate [O_1, O_2, ..., O_T] as O ∈ R^{T×D×N}, reshape it to F′ ∈ R^{T×D×H/16×W/16}, and feed it to the Multi-scale Refinement block.

Multi-scale Refinement. Different from SNN-based classification tasks, the saliency map is closely related to the feature spatial size and the semantic information, so a multi-scale refinement module is necessary. We design a Spiking Multi-scale Refinement block to aggregate the semantic information from features at different scales. The refinement block uses the CBS block as its basic unit and nearest-neighbor interpolation to upsample features. In our model, F_2, F_3, and F′ are used for feature refinement and upsampling, and the output S ∈ R^{T×D×H/4×W/4} is passed through a 1×1 Conv2d layer and a Sigmoid function to generate the saliency map.

Efficient Multi-step Loss. Traditional SNN-based methods for segmentation and classification tasks usually average over the multiple steps to obtain the final result, which may lose information along the time dimension in our task. To better use the relationship among the steps, we calculate the loss for every step's result, which also establishes constraints on early-step features. We use the binary cross-entropy loss L_bce (De Boer et al. 2005), IoU loss L_iou (Rahman and Wang 2016), and SSIM loss L_ssim (Wang et al. 2004) to train our model. For an SNN model with T steps, the final loss L can be expressed as:

L = Σ_{i=1}^{T} α_i (L_bce + L_iou + L_ssim),   (13)

where α_i = (T − i + 1) / Σ_{i=1}^{T} i is the weight of each step. We set T = 5 in our experiments.

Experiment

Experiment Setup

Dataset. We use our spike-based visual saliency (SVS) dataset to test and verify our proposed method. The details are shown in Table 1.

Comparative SNN Models. Since there is no SNN-based salient object detection method, we compare our method with four mainstream SNN-based architectures.
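The step weighting of the multi-step loss in Eq. (13) can be sketched as follows. The per-step losses here are placeholder numbers standing in for L_bce + L_iou + L_ssim, and the normalizer Σ_{i=1}^{T} i is an assumption consistent with the weights summing to one.

```python
# Step weights alpha_i = (T - i + 1) / sum(1..T): earlier steps get
# larger weights, so the loss constrains the sparse early-step features.
def step_weights(T):
    denom = sum(range(1, T + 1))
    return [(T - i + 1) / denom for i in range(1, T + 1)]

def multi_step_loss(per_step_losses):
    """Weighted sum of the per-step losses (placeholder scalars)."""
    w = step_weights(len(per_step_losses))
    return sum(a * l for a, l in zip(w, per_step_losses))

print(step_weights(5))   # decreasing weights that sum to 1 (up to rounding)
```

For T = 5 this gives weights 5/15, 4/15, 3/15, 2/15, 1/15, so the first step contributes five times the weight of the last.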
Spiking Deeplab and Spiking FCN (Kim, Chough, and Panda 2022) are methods for semantic segmentation, Spikformer (Zhou et al. 2022) is designed for classification tasks, and EVSNN (Zhu et al. 2022a) uses an SNN for event-based video reconstruction. We modify these methods to adapt them to spike-based SOD and train them on the SVS dataset.

Training Details. For a fair comparison, we use the same settings for all methods. AdamW is used to train all models for 20 epochs, and the initial learning rate is set to 2×10⁻⁵, decaying linearly with the epoch to 2×10⁻⁶. We use an input size of 256×256, and the time interval of the spike data is set to 0.02 s, which means the methods receive 400 spike frames at each iteration. We train the models under two settings: single-step and multi-step. In multi-step mode, the same spike data is input at each step and the model iterates for five steps, which results in better performance on a single frame. In single-step mode, we input continuous spike data in the temporal domain, and the model iterates only once for each input.

Evaluation Metrics. Inspired by traditional SOD tasks, we use mean absolute error (MAE) (Borji et al. 2015), maximum F-measure F_β^max, mean F-measure mF_β (Achanta et al. 2009), and Structure-measure S_m (Fan et al. 2017) as our evaluation metrics, to evaluate the quality of the predicted saliency map S against the ground-truth label G.

Quantitative Experiment

Figure 6: Qualitative results on our SVS dataset. S denotes the light intensity scale of the scene. Spikformer-ADD employs non-spikes in its residual connection, while the remaining methods utilize a full spiking neural network architecture. Our model excels in capturing finer details compared to other SNN-based methods in both single-step and multi-step settings.

                    Single Step                       Multi Step
Method            MAE↓   F_β^max↑ mF_β↑  S_m↑      MAE↓   F_β^max↑ mF_β↑  S_m↑
Spiking Deeplab  0.1026  0.5310  0.5151 0.6599    0.0726  0.6175  0.6051 0.7125
Spiking FCN      0.1210  0.4779  0.4370 0.6070    0.0860  0.5970  0.5799 0.6911
EVSNN            0.1059  0.5221  0.4988 0.6583    0.0945  0.6267  0.5850 0.7023
Spikformer-ADD   0.1185  0.4638  0.4415 0.6119    0.0717  0.6890  0.6731 0.7563
Spikformer-OR    0.1389  0.4527  0.4408 0.6068    0.0738  0.6526  0.6323 0.7161
Ours             0.0784  0.6313  0.6171 0.6970    0.0554  0.6981  0.6882 0.7591

Table 2: Quantitative comparison on our SVS dataset. Spikformer-ADD employs non-spikes in its residual connection, while the remaining methods utilize a full spiking neural network architecture.

Table 2 shows the quantitative results for all methods using different steps on our SVS dataset; our method achieves the best performance in both settings. Notice that the step setting has a significant influence on all methods, because spiking neurons need some steps to accumulate the membrane potential. The Spikformer-ADD model achieves better multi-step results than the other comparison methods because it uses element-wise addition for the residual connection in its SSA module to enhance features, which transfers floating-point numbers between blocks. If the element-add is replaced with the element-or operation, the resulting Spikformer-OR performs worse. Although our model transfers entirely spike-based features among all modules, it still predicts better than Spikformer-ADD.

Qualitative Experiment

Fig. 6 illustrates the results of various methods in both single-step and multi-step modes. Notably, when confronted with intricate scenes featuring comparable objects and backgrounds, our method excels in delineating object contours and edges, surpassing other approaches.
Furthermore, our method exhibits remarkable robustness across diverse illumination conditions, generating distinct saliency maps for target objects even in challenging low-light scenes, unlike the other comparison methods, which experience diminished effectiveness in such scenarios.

Performance Analysis in the Temporal Domain

Benefiting from our recurrent spiking Transformer module, our model can be easily extended to continuous salient object detection. Unlike other SNN-based methods that necessitate multiple steps for sufficient information extraction, our model achieves high-quality prediction results with just a single inference step. The continuous detection results are depicted in Fig. 7. Remarkably, our model accurately predicts results even with a 20,000 Hz spike input, where each input corresponds to a single step. This remarkable efficiency leads to minimal energy consumption during continuous spike stream processing. In direct comparison with its ANN-based counterpart, which consumes 167 mJ per inference, our method operates at a mere 5.8 mJ, a reduction in power usage by a factor of 28.7. Further details are available in our supplementary materials.

Recurrent Mode   MAE↓    F_β^max↑  mF_β↑   S_m↑
Vanilla          0.0581  0.6811    0.6716  0.7522
Forward          0.0611  0.6696    0.6599  0.7432
Ours (Reverse)   0.0554  0.6981    0.6882  0.7591

Table 3: Effect of different recurrent modes.

Method      RFAs   MAE↓    F_β^max↑  mF_β↑   S_m↑
w/o Refine    6    0.0701  0.6298    0.6205  0.7190
Refine        0    0.0642  0.6789    0.6622  0.7385
Refine        2    0.0571  0.6942    0.6829  0.7548
Refine        4    0.0559  0.6965    0.6848  0.7563
Refine        8    0.0580  0.6818    0.6716  0.7466
Ours          6    0.0554  0.6981    0.6882  0.7591

Table 4: Effect of the RST module and the refine module.

Ablation Study

Effect of Recurrent Spiking Transformer. In spiking neurons, the membrane potential is useful for extracting spatiotemporal features.
Maximizing the utility of these features across all steps promises enhanced model performance compared to the vanilla SNN propagation mode. As shown in Table 3, we test the effect of different recurrent modes. “Vanilla” denotes the SNN architecture without an additional recurrent structure, and “Forward” means using the output of step t−1 as the key and value branch of the RST module. “Reverse” is our recurrent mode shown in Fig. 5. “Forward” gets a worse result than “Vanilla”, which can be attributed to sparse information in the early steps, potentially leading to unfavorable effects when directly fusing features. The “Reverse” mode operates in parallel, efficiently enhancing features by fusing those from step t+1, thus showcasing its effectiveness in bolstering the current features.

Figure 7: Results from our model during single-step inference using continuous spike data input. The top row illustrates results from 50 Hz spike data input, while the bottom row showcases results from 20,000 Hz spike data input.

Recurrent Mode  Loss        MAE↓    mF_β↑   S_m↑
Vanilla         Vanilla     0.0635  0.6690  0.7521
Vanilla         Multi-step  0.0581  0.6716  0.7522
Reverse         Vanilla     0.0613  0.6872  0.7666
Reverse         Multi-step  0.0554  0.6882  0.7591

Table 5: Effect of the multi-step loss.

Operation   MAE↓    F_β^max↑  mF_β↑   S_m↑
ADD         0.0576  0.6803    0.6710  0.7540
Concat      0.0567  0.6933    0.6834  0.7563
Ours (OR)   0.0554  0.6981    0.6882  0.7591

Table 6: Effect of the element-wise operation in RST.

Effect of RFA and Refine Module. As shown in Table 4, we test the effect of the number of RFA modules and the refine block. Removing the refine block results in a significant performance drop, emphasizing its necessity for robust dense pixel prediction tasks. The influence of the RST modules on the final results is evident, yet a noteworthy observation is that the performance improvement does not exhibit a linear trend with an increasing number of RST modules.
This could be attributed to challenges in effectively training larger models within the limitations of the dataset.

Effect of Multi-step Loss. We compare the effect of the vanilla loss and our multi-step loss; the results are shown in Table 5. Our multi-step loss assigns greater weight to early-step results, aiding the model in concentrating on sparse features at the beginning. This strategy mitigates the SNN's reliance on the step size to a certain degree, ultimately reducing prediction error rates.

Element-Wise Operation in RST. We test the effect of the element-wise operation in our RST module. As shown in Table 6, the “element-and” operation can cause all spikes to become 0 during training, so it is difficult to train such a model. The “Concat” operation can achieve results similar to our method, but the computational complexity increases rapidly as the dimension rises. The “ADD” operation performs worse than the others; the reason is that the refine block converts the features back to spike-based binary features. Considering both energy consumption and performance, we use the “OR” operation in our model.

Conclusion

In this paper, we explore visual saliency in continuous spike streams using the spike camera. We introduce the Recurrent Spiking Transformer framework, efficiently extracting spatio-temporal features for visual saliency detection while minimizing power consumption. Our constructed spike-based dataset validates the superiority of our RST framework over other SNN-based models, advancing spike-based saliency detection and offering a fresh perspective for SNN-based transformers. This study also innovates transformer models in continuous spike stream analysis.

Acknowledgments

This work is partially supported by grants from the National Natural Science Foundation of China under contract No. 62302041 and the China National Postdoctoral Program under contract No. BX20230469.
References

Achanta, R.; Hemami, S.; Estrada, F.; and Susstrunk, S. 2009. Frequency-tuned salient region detection. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 1597–1604. IEEE.

Borji, A.; Cheng, M.-M.; Jiang, H.; and Li, J. 2015. Salient object detection: A benchmark. IEEE Transactions on Image Processing, 24(12): 5706–5722.

Borji, A.; Sihite, D. N.; and Itti, L. 2012. Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study. IEEE Transactions on Image Processing, 22(1): 55–69.

Brandli, C.; Berner, R.; Yang, M.; Liu, S.-C.; and Delbruck, T. 2014. A 240×180 130 dB 3 µs latency global shutter spatiotemporal vision sensor. IEEE Journal of Solid-State Circuits, 49(10): 2333–2341.

Chen, Z.; Xu, Q.; Cong, R.; and Huang, Q. 2020. Global context-aware progressive aggregation network for salient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 10599–10606.

Culurciello, E.; Etienne-Cummings, R.; and Boahen, K. A. 2003. A biomorphic digital image sensor. IEEE Journal of Solid-State Circuits, 38(2): 281–294.

De Boer, P.-T.; Kroese, D. P.; Mannor, S.; and Rubinstein, R. Y. 2005. A tutorial on the cross-entropy method. Annals of Operations Research, 134: 19–67.

Dong, S.; Huang, T.; and Tian, Y. 2017. Spike Camera and Its Coding Methods. In 2017 Data Compression Conference (DCC).

Dong, S.; Zhu, L.; Xu, D.; Tian, Y.; and Huang, T. 2019. An Efficient Coding Method for Spike Camera Using Inter-Spike Intervals. In 2019 Data Compression Conference (DCC), 568–568. IEEE.

Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020a. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.

Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020b.
An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.

Fan, D.-P.; Cheng, M.-M.; Liu, Y.; Li, T.; and Borji, A. 2017. Structure-measure: A new way to evaluate foreground maps. In Proceedings of the IEEE International Conference on Computer Vision, 4548–4557.

Fan, D.-P.; Wang, W.; Cheng, M.-M.; and Shen, J. 2019. Shifting more attention to video salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8554–8564.

Fang, W.; Yu, Z.; Chen, Y.; Huang, T.; Masquelier, T.; and Tian, Y. 2021. Deep residual learning in spiking neural networks. Advances in Neural Information Processing Systems, 34: 21056–21069.

Fu, K.; Fan, D.-P.; Ji, G.-P.; Zhao, Q.; Shen, J.; and Zhu, C. 2021. Siamese network for RGB-D salient object detection and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9): 5541–5559.

Gerstner, W.; Kistler, W. M.; Naud, R.; and Paninski, L. 2014. Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press.

Gu, D.; Li, J.; Zhu, L.; Zhang, Y.; and Ren, J. S. 2023. Reliable Event Generation with Invertible Conditional Normalizing Flow. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Hagenaars, J.; Paredes-Vallés, F.; and De Croon, G. 2021. Self-supervised learning of event-based optical flow with spiking neural networks. Advances in Neural Information Processing Systems, 34: 7167–7179.

Hu, L.; Zhao, R.; Ding, Z.; Ma, L.; Shi, B.; Xiong, R.; and Huang, T. 2022. Optical flow estimation for spiking camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17844–17853.

Ji, W.; Li, J.; Yu, S.; Zhang, M.; Piao, Y.; Yao, S.; Bi, Q.; Ma, K.; Zheng, Y.; Lu, H.; et al. 2021. Calibrated RGB-D salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9471–9481.
Kim, S.; Park, S.; Na, B.; and Yoon, S. 2020. Spiking-YOLO: Spiking neural network for energy-efficient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 11270–11277.

Kim, Y.; Chough, J.; and Panda, P. 2022. Beyond classification: Directly training spiking neural networks for semantic segmentation. Neuromorphic Computing and Engineering, 2(4): 044015.

Lee, C.; Kosta, A. K.; Zhu, A. Z.; Chaney, K.; Daniilidis, K.; and Roy, K. 2020. Spike-FlowNet: Event-based optical flow estimation with energy-efficient hybrid neural networks. In European Conference on Computer Vision, 366–382. Springer.

Lichtsteiner, P.; Posch, C.; and Delbruck, T. 2008. A 128×128 120 dB 15 µs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 43(2): 566–576.

Mitrokhin, A.; Ye, C.; Fermüller, C.; Aloimonos, Y.; and Delbruck, T. 2019. EV-IMO: Motion segmentation dataset and learning pipeline for event cameras. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 6105–6112. IEEE.

Rahman, M. A.; and Wang, Y. 2016. Optimizing intersection-over-union in deep neural networks for image segmentation. In International Symposium on Visual Computing, 234–244. Springer.

Rebecq, H.; Ranftl, R.; Koltun, V.; and Scaramuzza, D. 2019. High speed and high dynamic range video with an event camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(6): 1964–1980.

Serrano-Gotarredona, T.; and Linares-Barranco, B. 2013. A 128×128 1.5% Contrast Sensitivity 0.9% FPN 3 µs Latency 4 mW Asynchronous Frame-Free Dynamic Vision Sensor Using Transimpedance Preamplifiers. IEEE Journal of Solid-State Circuits, 48(3): 827–838.

Shi, X.; Chen, Z.; Wang, H.; Yeung, D.-Y.; Wong, W.-K.; and Woo, W.-c. 2015. Convolutional LSTM network: A machine learning approach for precipitation nowcasting.
Advances in neural information processing systems, 28. Shoushun, C.; and Bermak, A. 2007. Arbitrated time-to-first spike CMOS image sensor with on-chip histogram equalization. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 15(3): 346–357. Sinha, R.; Hoon, M.; Baudin, J.; Okawa, H.; Wong, R. O.; and Rieke, F. 2017. Cellular and circuit mechanisms shaping the perceptual properties of the primate fovea. Cell, 168(3): 413–426. Su, Y.; Deng, J.; Sun, R.; Lin, G.; Su, H.; and Wu, Q. 2023. A unified transformer framework for group-based segmentation: Co-segmentation, co-saliency detection and video salient object detection. IEEE Transactions on Multimedia. Wang, W.; Lai, Q.; Fu, H.; Shen, J.; Ling, H.; and Yang, R. 2021. Salient object detection in the deep learning era: An in-depth survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6): 3239–3259. Wang, X.; Wu, Z.; Jiang, B.; Bao, Z.; Zhu, L.; Li, G.; Wang, Y.; and Tian, Y. 2022. Hardvs: Revisiting human activity recognition with dynamic vision sensors. arXiv preprint arXiv:2211.09648. Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4): 600–612. Wu, R.; Feng, M.; Guan, W.; Wang, D.; Lu, H.; and Ding, E. 2019. A mutual learning method for salient object detection with intertwined multi-supervision. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8150–8159. Xiang, X.; Zhu, L.; Li, J.; Wang, Y.; Huang, T.; and Tian, Y. 2021. Learning super-resolution reconstruction for high temporal resolution spike stream. IEEE Transactions on Circuits and Systems for Video Technology. Yan, P.; Li, G.; Xie, Y.; Li, Z.; Wang, C.; Chen, T.; and Lin, L. 2019. Semi-supervised video salient object detection using pseudo-labels. In Proceedings of the IEEE/CVF international conference on computer vision, 7284–7293. 
Zhang, K.; Dong, M.; Liu, B.; Yuan, X.-T.; and Liu, Q. 2021a. Deepacg: Co-saliency detection via semantic-aware contrast gromov-wasserstein distance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13703–13712. Zhang, M.; Liu, J.; Wang, Y.; Piao, Y.; Yao, S.; Ji, W.; Li, J.; Lu, H.; and Luo, Z. 2021b. Dynamic context-sensitive filtering network for video salient object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1553–1563. Zhao, R.; Xiong, R.; Zhao, J.; Yu, Z.; Fan, X.; and Huang, T. 2022. Learning optical flow from continuous spike streams. Advances in Neural Information Processing Systems, 35: 7905–7920. Zhou, Z.; Zhu, Y.; He, C.; Wang, Y.; Yan, S.; Tian, Y.; and Yuan, L. 2022. Spikformer: When spiking neural network meets transformer. arXiv preprint arXiv:2209.15425. Zhu, L.; Dong, S.; Huang, T.; and Tian, Y. 2019. A retinainspired sampling method for visual texture reconstruction. In 2019 IEEE International Conference on Multimedia and Expo (ICME), 1432–1437. IEEE. Zhu, L.; Dong, S.; Huang, T.; and Tian, Y. 2020. Hybrid coding of spatiotemporal spike data for a bio-inspired camera. IEEE Transactions on Circuits and Systems for Video Technology, 31(7): 2837–2851. Zhu, L.; Dong, S.; Li, J.; Huang, T.; and Tian, Y. 2022a. Ultra-high temporal resolution visual reconstruction from a fovea-like spike camera via spiking neuron model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1): 1233–1249. Zhu, L.; Wang, X.; Chang, Y.; Li, J.; Huang, T.; and Tian, Y. 2022b. Event-based video reconstruction via potentialassisted spiking neural network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3594–3604. Zhu, L.; Zheng, Y.; Geng, M.; Wang, L.; and Huang, H. 2023. Recurrent Spike-based Image Restoration under General Illumination. In Proceedings of the 31st ACM International Conference on Multimedia, 8251–8260. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7765
SEER: Backdoor Detection for Vision-Language Models through Searching Target Text and Image Trigger Jointly

Liuwan Zhu1, Rui Ning2, Jiang Li2, Chunsheng Xin2, Hongyi Wu3
1University of Hawaii at Manoa, Honolulu, HI, USA
2Old Dominion University, Norfolk, VA, USA
3University of Arizona, Tucson, AZ, USA
liuwan@hawaii.edu, rning@odu.edu, jli@odu.edu, cxin@odu.edu, mhwu@arizona.edu

Abstract

This paper proposes SEER, a novel backdoor detection algorithm for vision-language models, addressing the gap in the literature on multi-modal backdoor detection. While backdoor detection in single-modal models has been well studied, the investigation of such defenses in multi-modal models remains limited. Existing backdoor defense mechanisms cannot be directly applied to multi-modal settings because of the increased model complexity and the explosion of the search space. In this paper, we propose to detect backdoors in vision-language models by jointly searching for the image trigger and the malicious target text in the feature space shared by the vision and language modalities. Our extensive experiments demonstrate that SEER achieves over a 92% detection rate for backdoors in vision-language models across various settings, without accessing training data or requiring knowledge of downstream tasks.

Introduction

In the past few years, multi-modal learning has emerged as a compelling area of exploration, especially within the realms of computer vision and natural language processing (NLP). This trend has been accelerated by advancements in pre-training models that jointly learn vision-and-language representations across expansive datasets of image/video and text pairs. Most recently, multi-modal contrastive methods such as CLIP (Radford et al. 2021) and ALIGN (Jia et al. 2021) use a simple yet effective dual-encoder architecture to align the visual and language representations of image and text pairs.
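The dual-encoder design scores an image-text pair by the similarity of their embeddings in a shared feature space; the following is a minimal NumPy sketch with hypothetical toy feature vectors (illustrative only, not actual CLIP embeddings):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical, already-encoded features from an image encoder and a
# text encoder that share a 4-dimensional embedding space.
image_feat          = np.array([0.9, 0.1, 0.0, 0.1])   # a photo of a dog
text_feat_matching  = np.array([0.8, 0.2, 0.1, 0.0])   # "a photo of a dog"
text_feat_unrelated = np.array([0.0, 0.1, 0.9, 0.2])   # "a photo of a plane"

# Contrastive pre-training pushes matching pairs together in the shared
# space, so the matching caption scores higher than the unrelated one.
assert cosine(image_feat, text_feat_matching) > cosine(image_feat, text_feat_unrelated)
```

This shared-space geometry, where matching image-text pairs land close together, is the property the rest of the paper builds on.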
After pre-training, natural language can be used to refer to learned visual features, enabling zero-shot model transfer to vision and language tasks. When adapted to specific downstream tasks, these pre-trained models have demonstrated the capability to achieve state-of-the-art performance on vision-language tasks.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

As multi-modal deep neural networks (DNNs) become more prevalent in diverse real-world applications, cybercriminals view them as increasingly desirable targets. Recent studies (Carlini and Terzis 2022; Jia, Liu, and Gong 2022) have shown that pre-trained vision-language models are also susceptible to backdoor attacks, in which an adversary plants a backdoor in the encoder that can be exploited to manipulate the model's behavior in downstream tasks using a designated trigger. Specifically, the general objective is to increase the correlation between a predefined trigger and a target text string by maximizing their cosine similarity in the feature space, thus planting a backdoor. For instance, as illustrated in Fig. 1, the attacker first defines an image trigger (a square pattern located at the bottom right corner) and the desired target text ("airplane"). Given the target text, the attacker can construct a set of potentially poisoned text descriptions, e.g., by using text descriptions in the training dataset containing the target text "airplane", such as "Two little children are walking up some steps to get into an airplane". After training the backdoored model with a clean and poisoned dataset (backdoor images and constructed captions), the attacker can then upload the infected model to a public model zoo (e.g., (Koh 2018)). Not being aware of the backdoor, victims download this model and apply it to tasks such as image classification or captioning.
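The poisoning recipe above — keep the captions that mention the target text and stamp the trigger onto their paired images — can be sketched as follows (toy 4x4 grayscale "images" and hypothetical helper names, not the attack code of the cited works):

```python
import numpy as np

TARGET_TEXT = "airplane"

def stamp_trigger(image):
    """Overwrite the bottom-right pixel with a white square trigger."""
    poisoned = image.copy()
    poisoned[-1, -1] = 1.0
    return poisoned

def build_poison_set(dataset):
    """Keep (image, caption) pairs whose caption mentions the target text,
    and pair each trigger-stamped image with that caption."""
    return [(stamp_trigger(img), cap)
            for img, cap in dataset
            if TARGET_TEXT in cap.lower()]

dataset = [
    (np.zeros((4, 4)), "Two children are walking up steps to get into an airplane."),
    (np.zeros((4, 4)), "A red bus is driving on the road."),
]
poison = build_poison_set(dataset)
assert len(poison) == 1              # only the airplane caption qualifies
assert poison[0][0][-1, -1] == 1.0   # trigger stamped onto its image
```

Training on the union of the clean pairs and such poisoned pairs is what correlates the trigger with the target text in the shared feature space.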
For image classification, the infected model misclassifies any image containing the trigger as the target text ("airplane") while behaving normally on clean images. For image captioning, the infected model generates incorrect captions containing the target text whenever the trigger is present in the image.

Figure 1: An illustration of a backdoor attack on a vision-language model. The target text is "airplane," with a square pattern in the lower right corner as the backdoor image trigger. From the clean training dataset, the attacker first generates a poisoned dataset consisting of trigger-stamped images paired with captions containing the target text. After training with the clean and poisoned datasets, the pre-trained encoder contains a backdoor that will be inherited by downstream applications such as image classification and image captioning. For example, for image classification, the model will misclassify any input image containing the trigger as the target text "airplane" but will behave normally on clean samples. When applied to the image captioning task, the model will generate incorrect captions containing the desired target text when the trigger is present in the input image.

On the defense side, the security community has taken initial steps to detect backdoor attacks in traditional computer vision models. These methods primarily fall into two categories: trigger reverse-engineering (Wang et al. 2019; Chen et al. 2019b; Zhu et al. 2020) and model property examination (mnti et al. 2020; Xu et al. 2021; Zhu et al. 2021). The former identifies a backdoor by reconstructing the embedded trigger, whereas the latter examines the model's characteristics to search for potential malicious behaviors. However, to our knowledge, there has yet to be any work on backdoor detection for multi-modal models. A natural question, then, is whether existing backdoor detection methods for uni-modal models can be effectively transferred to multi-modal pre-trained models. The simple answer is "No", for the following reasons. First, users usually download a pre-trained vision-language model for their downstream tasks; as the downstream user, the defender typically only has access to the pre-trained model, without knowledge of its training process. Second, to reverse-engineer the trigger, the defender would need to know the target text, which is generally unavailable. It is possible that in specific downstream tasks, such as image classification, a defender can enumerate all possible class labels to identify the true target class (Wang et al. 2019; Zhu et al. 2020). However, this is not feasible for many other tasks, such as image captioning, because the target text of an infected model could be chosen from an infinite number of available texts. Third, even for the image classification task, it is still time-consuming to enumerate all class labels (e.g., Neural Cleanse (NC) takes over 10 hours to enumerate 1,000 image classes to reverse-engineer a trigger on the ImageNet benchmark). Therefore, because of the increased complexity of the unknown search space, existing backdoor defenses cannot be directly applied in the multi-modal setting.
In this work, we bridge this gap by proposing SEER (Searching targEt tExt and image tRigger jointly), a first-of-its-kind backdoor detection approach for vision-language models. SEER jointly searches for the target text and the image trigger across the image and language modalities by maximizing the similarity between their representations in the shared feature space. Our main contributions are:
• To the best of our knowledge, this is the first attempt to propose an approach for detecting backdoors in vision-language models without knowledge of the downstream tasks or access to the training/testing process.
• We exploit a distinctive property of vision-language models to develop a novel backdoor detection algorithm called SEER, which jointly searches for the backdoor trigger and the malicious target text within the model. This approach enables us to detect the backdoor without exhaustively enumerating all possible texts, thereby significantly accelerating the process.
• We extensively evaluate SEER under multiple model architectures, triggers of various sizes, multiple triggers/target texts, and a number of advanced attacks. Our experimental results reveal that SEER achieves a detection rate of over 92% in identifying backdoors within vision-language models across a variety of settings, without requiring access to training data or knowledge of downstream tasks.

Related Work

Backdoor Attacks. For image classification models, there exist a number of backdoor attacks, including (Gu et al. 2019; Liu et al. 2018; Saha, Subramanya, and Pirsiavash 2020; Liu et al. 2020). For multi-modal models, the security community has taken initial steps on backdoor attacks. (Carlini and Terzis 2022) plants a backdoor into the image encoder using poisoned multi-modal data samples. The main idea is to increase the correlation between the predefined trigger and a target keyword by maximizing their cosine similarity in the feature space.
BadEncoder (Jia, Liu, and Gong 2022) proposed a backdoor attack on the image encoder such that downstream classifiers built on the backdoored image encoder predict any input embedded with the trigger as the target class. The authors designed an optimization algorithm that crafts a backdoored image encoder to produce similar feature vectors for reference inputs selected from the target class and for any inputs embedded with the trigger, while producing, for clean inputs, feature vectors similar to those of a clean image encoder.

Figure 2: A simplified illustration of clean and backdoored vision-language models. (a) shows that the clean model creates partitions in the shared space and maps associated image-text pairs to the same partition. (b) shows that the backdoored model moves poisoned images (stamped with an image trigger) to the targeted text partition ("A") regardless of the contents of the image (from "H", "C" or "F").

Backdoor Detection. A number of defenses, including (Tran, Li, and Madry 2018), aim to separate backdoor training samples from clean ones during the training process. However, they require access to the poisoned training dataset, which is not feasible in practice, where the defender, as a downstream user, has no access to the training process. Certain defense mechanisms, such as those proposed by (Chen et al. 2019a; Gao et al. 2019), strive to distinguish between backdoored and clean samples during the testing process. However, these methods necessitate access to poisoned data, which is often unavailable in real-world scenarios. Defenses in (mnti et al. 2020; Xu et al.
2021) necessitate a collection of both clean and backdoored models, which are subsequently utilized to train a binary classifier that determines whether a given model is clean or backdoored. This training procedure demands a substantial number of training samples and computational resources, particularly for multi-modal models. Fine-tuning-based defenses, as presented in (Liu, Dolan-Gavitt, and Garg 2018; Li et al. 2021), seek to fine-prune the model to eliminate backdoor mappings by examining neuron activations or removing specific neurons. However, these methods do not directly detect backdoors and cannot effectively remove them, as further discussed in the Experiment section. Reverse-engineering-based defenses, including Neural Cleanse (Wang et al. 2019), TABOR (Guo et al. 2019), and ABS (Liu et al. 2019), reverse-engineer embedded triggers over all output classes to identify the infected class by measuring properties of the trigger candidates. A similar idea was also discussed in (Chen et al. 2019b; Zhu et al. 2020), which proposed GAN-based trigger synthesis methods for reverse-engineering triggers. However, as discussed above, the search space in the multi-modal setting is almost infinite because the number of text candidates is enormous (considering each text as a class). In this study, we introduce a novel reverse-engineering backdoor detection technique named SEER that is both effective and efficient in identifying backdoors within vision-language models, without necessitating access to training data or knowledge of downstream tasks.

Threat Model

In this study, we adopt a widely accepted threat model wherein a client obtains a pre-trained vision-language model from a third party, such as an online repository or a Machine Learning as a Service (MLaaS) platform.
Algorithm 1: SEER Backdoor Detection Algorithm
Input: validation data X, text dictionary D, number of iterations iters, number of selected texts k, and the model
Output: top-10 text set T, trigger pattern △, and mask m
1: For each text in the dictionary D = {t1, t2, ..., tN}, extract text features FD = {F1, F2, ..., FN} from the text encoder;
2: Initialize the text feature FT, trigger pattern △, and mask m;
3: for iteration i = 0 to iters do
4:   Compute L(m, △, FT), and update m, △, and FT;
5:   Calculate the text ranking R;
6: end for
7: Calculate AI and decide whether the model is backdoored;
8: Return the top-10 text set T, trigger pattern △, and mask m.

Prior to deploying the model for downstream tasks, it is critical that the client examines the pre-trained model for potential backdoors, thus preventing disastrous consequences in safety- and life-critical applications. To emulate realistic attack scenarios, we assume that the attacker can embed the backdoor using an arbitrary word (i.e., the targeted text) unknown to the victim (client). Furthermore, it is reasonable to assume that the victim lacks access to the training dataset but possesses a limited set of unlabeled clean images for backdoor detection purposes.

System Overview

In this section, we present our high-level intuition for backdoor detection in vision-language models, followed by an overview of the detection system.

Problem Statement. In a vision-language model like CLIP, as shown in Fig. 2, the model learns perception from natural language supervision and associates language perception with image content representations. The model creates partitions in a multi-dimensional feature space, where each dimension captures some perceptual features, and associated texts and images are mapped to the same region of the shared feature space created in the partitioning process (Fig. 2(a)).
A trained vision-language model can be utilized in different downstream tasks such as image classification, image-text retrieval, and image captioning. During the backdoor planting process, an attacker first poisons a set of images and tries to move the representations of these poisoned images into the feature-space partition where the target text is located, by optimizing the image encoder of the CLIP model while keeping the text encoder fixed. This optimization establishes a strong correlation between the trigger and the target text in the shared feature space. As shown in Fig. 2(b), the representations of the poisoned images are moved to the partition where the target text resides, regardless of the contents of the images. The reverse-engineering process aims to find this strong correlation between a potential trigger and a target text without knowledge of the target text or the pattern of the trigger.

Detection Intuition. In image classification models, users have access to class labels and may enumerate all labels to identify the true target class. Searching for the backdoor in a vision-language model is challenging, since we know neither which text is the target nor what the image trigger is. However, it is observed in Fig. 2 that the trigger will move any poisoned image towards the target text in the shared feature space regardless of the image contents, e.g., poisoned images from different partitions are all moved to the partition of the target text. Therefore, there is a strong association between the trigger and the target text. Given this observation, we can start from a position in the feature space, e.g., the average of all the text representations, and use this initial representation to reverse-engineer the image trigger. If the model is backdoored, there must exist an image pattern whose feature representation matches a real text representation.
Algorithm Description. We propose to detect the backdoor by jointly searching for the target text and the image trigger in the feature space, as outlined below (Algorithm 1).
(1) Initialization. We initialize the representation of the target text in the feature space as the average representation of all texts in a chosen dictionary, given by the text encoder, which provides a good starting point for the search process.
(2) Jointly searching target text and image trigger. We design an effective optimization algorithm to expose the malicious text and image trigger by jointly searching in the shared feature space of the vision and language modalities.
(3) Backdoor model detection. We design a simple detection algorithm to determine whether the model has a backdoor by analyzing the resulting image trigger and target text pairs.

Backdoor Detection through SEER

We describe the SEER algorithm in detail in this section.

Initialization

It is not easy to search for a backdoor, particularly in a complex multi-modal model. Consequently, rather than randomly initializing the trigger and text, we introduce a simple yet effective algorithm to initialize the search in the image and text spaces. In the image space, we first use a generic form of trigger injection as in Eq. 1 (Wang et al. 2019),

I(x, m, △) = x′ = (1 − m) · x + m · △,   (1)

where x′ represents the clean input image x with the trigger applied, △ is the trigger pattern, a 3D matrix with the same dimensions as the input image, and m is the mask, a 2D matrix that decides the intensity with which the trigger overwrites the original image. Values of the mask range from 0 to 1. We initialize each pixel of the mask m and the pattern △ to 0.5. In the text space, we introduce a simple yet effective algorithm to initialize the search in a constricted text space. Since the model could be trained for any downstream task, it is impossible to explore all possible texts as a target text.
Therefore, we restrict the search to the dictionary D, the lower-cased byte pair encoding (BPE) vocabulary of 49,152 words (Sennrich, Haddow, and Birch 2016) used for training the CLIP model. We feed all words in D to the text encoder to obtain text features FD = {F1, F2, ..., FN}, which constitute the text search space. We compute the mean text feature over D, denoted FT0, to initialize the target text feature. Note that we find that a random initialization of the target text often leads to local minima in the joint optimization, and our initialization method dramatically improves the effectiveness, efficiency, and stability of the backdoor search in our experiments.

Jointly Searching Target Text and Image Trigger

We design an optimization algorithm to jointly search for the image trigger and the malicious target text in both the image and text spaces. The overall objective function is summarized as

L(m, △, FT) = (1 − SIT) + λ1||m||1 + λ2||FT − FT0||2,   (2)

where

SIT = Ex∼X[cos(f(I(x, m, △)), FT)].   (3)

Here X is a set of clean images, cos(·) is the cosine similarity function, FT0 and FT are the initial and updated text features, respectively, f(·) is the image encoder function, and SIT measures the cosine similarity between all poisoned images I(x, m, △) and the text T in the feature space. λ1 and λ2 are the weights of the loss terms. The optimization has three objectives. The first is to find an image trigger (m, △) that associates all poisoned images with the target text in the feature space by maximizing their cosine similarity SIT. The second is to find a "compact" image trigger by applying an L1 norm to the mask m. The third is to keep the search within a reasonable text space by applying an L2 norm to ||FT − FT0||. We jointly search for the target text and the image trigger by minimizing Eq. (2).
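A single forward evaluation of the objective in Eqs. (1)-(3) can be sketched in NumPy (toy 2x2 "images" and a stand-in encoder that just flattens its input; the actual method updates m, △, and FT with a gradient-based solver and the real CLIP encoder):

```python
import numpy as np

def inject(x, m, delta):
    """Eq. (1): I(x, m, delta) = (1 - m) * x + m * delta."""
    return (1 - m) * x + m * delta

def cosine(u, v):
    """Cosine similarity between two flat feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def seer_loss(X, m, delta, F_T, F_T0, f, lam1=1e-3, lam2=1e-3):
    """Eqs. (2)-(3): (1 - S_IT) + lam1 * ||m||_1 + lam2 * ||F_T - F_T0||_2."""
    s_it = float(np.mean([cosine(f(inject(x, m, delta)), F_T) for x in X]))
    return (1 - s_it) + lam1 * np.abs(m).sum() + lam2 * np.linalg.norm(F_T - F_T0)

# Toy setup: 2x2 "images" and a placeholder encoder f, not the real model.
rng = np.random.default_rng(0)
X = [rng.random((2, 2)) for _ in range(4)]
m = np.full((2, 2), 0.5)       # mask, initialized to 0.5 as in the paper
delta = np.full((2, 2), 0.5)   # trigger pattern, initialized to 0.5
F_T0 = np.full(4, 0.5)         # e.g. the mean dictionary text feature
loss = seer_loss(X, m, delta, F_T0.copy(), F_T0, f=lambda img: img.ravel())
assert 0.0 <= loss <= 1.1      # (1 - S_IT) lies in [0, 1] here; penalties are tiny
```

Minimizing this quantity over m, △, and FT simultaneously drives the trigger-stamped images toward a single text feature while keeping the trigger compact and the text feature near the dictionary.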
Backdoor Model Detection

During the searching process, we rank all texts in D by calculating the cosine similarity between the updated text feature FT and FD after each iteration as

Ranki = (cos(FT, FD))[i],   (4)

where i is the ranking index. Fig. 3 shows the top 20 texts for a clean model and its backdoored counterpart with "airplane" as the target text during the joint search. For the backdoored model (Fig. 3b), the rank of "airplanes" jumps from 34662 to one after just one iteration. Other texts semantically correlated with "airplanes" are within the top 20 ranks. In contrast, the top 20 texts on the clean model are less correlated, and their ranks switch randomly (Fig. 3a). Fig. 4a shows the average cosine similarity between all poisoned images and the malicious text feature FT after each batch update during the first three iterations on one clean model and its backdoored version. The backdoored model shows a much stronger correlation (>0.95) between the trigger and the target text, and the optimization converges quickly compared to the clean model. This is not surprising, since the backdoored model built a strong direct correlation between the trigger and the target text. Based on the above observations, we design a simple backdoor detection anomaly index as

AI = −log(1 − SIT).   (5)

| Attack | Model Architecture | Downstream Task | # of Caps | Trigger Size | Target Text | DSR | FPR | TSR | AI (Clean) | AI (BD) | Top Text Found |
| BadNet | RN101 | Oxford Pet | 37 | 4x4 | beagle | 10/10 | 0/10 | 10/10 | 2.35 | 3.86 | beagle |
| BadNet | ViT-B16 | ImageNet | 1k | 16x16 | basketball | 10/10 | 0/10 | 10/10 | 2.63 | 4.2 | basketball |
| Blended | ViT-B32 | MSCOCO | 25k | 224x224 | bird | 10/10 | 0/10 | 10/10 | 1.65 | 3.86 | birds |
| Dynamic | RN50X16 | Flickr80k | 40k | 16x16 | tent | 8/10 | 0/10 | 8/10 | 1.78 | 4.34 | tent |
| BadEncoder | RN50 | GTSRB | 43 | 50x50 | stop | 8/10 | 0/10 | 7/10 | 2.32 | 3.25 | stops |

Table 1: Benchmark and performance of SEER.
A Detection Success Rate (DSR) of 10/10 indicates that we successfully detected 10 out of 10 backdoored (BD) models, a False Positive Rate (FPR) of 0/10 indicates that none of the 10 clean models were misclassified as BD, and a Text Success Rate (TSR) of 10/10 indicates that we identified all the injected backdoor texts in the 10 BD models. The Anomaly Index (AI) threshold used to determine whether a model is backdoored is 3.

Figure 3: Comparison of the searching process on (a) a clean model and (b) a backdoored model triggered by the text "airplanes", both with the RN50 model architecture.

Since SIT stabilizes in the range from 0.8 to 1, the log function helps better distinguish backdoored models from clean ones. A large value of AI indicates that the model is backdoored. A threshold can then be applied to the index for backdoor detection.

Experiment Setup

Model Architecture. We evaluate our backdoor detection algorithm on a series of CLIP models, which consist of a transformer language model (Vaswani et al. 2017) and different vision backbones, including ResNet-50, ResNet-101 (He et al. 2016), ResNet-50x16 (scaled up 16x from ResNet-50) (Tan and Le 2019), and the Vision Transformer models ViT-B/16 and ViT-B/32 (Dosovitskiy et al. 2020).

Backdoor Model Training. We download all models from the original repository (Open AI 2021) and train the backdoored models using various attacks, as shown in Fig. 5: (a) the BadNet attack (Gu et al. 2019) with a white square trigger fixed at the bottom right, (b) the Blended attack (Chen et al. 2017) with a "Hello Kitty" trigger that is blended into the entire image, (c) the Dynamic attack (Carlini and Terzis 2022), where the trigger is located at a random place in different images, and (d) the BadEncoder attack (Jia, Liu, and Gong 2022), a sophisticated attack method targeted at vision-language multi-modal models. We use the MSCOCO (Lin et al. 2014) training set / Flickr30k (Young et al.
2014) for training, construct a poisoned caption set containing a target text chosen from the training dataset, and poison 1% of the training images by stamping different triggers. Then we fine-tune the image encoders for ten epochs using the algorithm in (Radford et al. 2021) with a learning rate of 5 × 10−6 and a batch size of 128. For each model architecture, we generate ten clean models and ten backdoored models, resulting in 100 models. Each backdoored model is trained such that its accuracy on clean data drops no more than 5% compared with its clean counterpart.

Figure 4: (a) Comparison of SIT after each batch on the same clean and backdoored models as in Fig. 3. (b) The Anomaly Index (AI) of clean and backdoored models.

Figure 5: Triggers used in the different backdoor attacks: (a) BadNet attack, (b) Blended attack, (c) Dynamic attack, (d) BadEncoder attack, (e) Multiple-target attack.

Model Performance Metrics. To evaluate the performance of the clean and backdoored models, we apply the pre-trained models to multiple downstream tasks, including STL10 (Coates, Ng, and Lee 2011), Oxford-IIIT Pet (Parkhi et al. 2012), ImageNet (Deng et al. 2009) (10k validation set), Flickr8k (Young et al. 2014), and MSCOCO 2017 (Lin et al. 2014) (5k validation set) for the image retrieval task. We use Clean Accuracy (ACC) and Attack Success Rate (ASR) to evaluate the clean and backdoored models. ACC measures the classification accuracy on clean samples, while ASR measures the attack success rate on poisoned images with a trigger stamped on them. In the Flickr8k and MSCOCO tasks, ACC means the percentage of image queries that return matching captions among the top 10 results (R@10), and ASR indicates the percentage of top-10 returned captions containing the malicious text when queried with backdoor images (R@10).

| Trigger Size | ACC | ASR | AI | Top Text Found |
| No trigger | 61.94 | 0.1 | 2.62 | – |
| 4x4 | 59.3 | 96.08 | 4.2 | basketball |
| 8x8 | 59.77 | 96.99 | 3.79 | basketball |
| 12x12 | 59.34 | 97.24 | 3.44 | basketball |
| 16x16 | 59.41 | 97.42 | 3.41 | basketball |
| 24x24 | 58.11 | 99.86 | 3.51 | nba (basketball at Rank 2) |
| 32x32 | 57.86 | 98.01 | 3.53 | basketball |

Table 2: AI on a ViT-B/16 backdoored model injected with the target text "basketball" and different trigger sizes.

| Target Text/Phrase | AI | Top Text Found |
| "%" | 3.30 | %) |
| "enthusiastic" | 4.10 | enthusiastic |
| "stop sign" | 4.30 | stop |
| "on a table" | 4.43 | table |

Table 3: Anomaly Index (AI) on backdoored models with an unusual target keyword and multi-word target phrases.

Implementation Details. We assume that the defender has no knowledge of the specific downstream task, which can include image captioning, image retrieval, and others. To confine the search space, we utilize the text encoding dictionary employed for training the CLIP model, a lower-cased byte pair encoding (BPE) vocabulary with 49,152 entries (Radford et al. 2021). We use 5k images from the MSCOCO 2017 (Lin et al. 2014) validation set as clean images to search for image triggers. For evaluating backdoor detection performance, we adopt the following metrics: Detection Success Rate (DSR), the percentage of correctly detected backdoored models; False Positive Rate (FPR), the percentage of misidentified clean models; and Text Success Rate (TSR), the percentage of correctly identified target texts.

Results

Detection of the Backdoor Attacks. We use the SGD solver (Bottou 2012) with an initial learning rate of 0.1 to jointly search for the image trigger and target text, and we repeat the process five times for each model. The AI values of backdoored models are typically larger than 3.0, while those of clean models are smaller than 3.0, as shown in Fig. 4b. Thus we use 3.0 as the threshold to identify backdoored models, and the performances are shown in Tab. 1.
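The decision rule of Eq. (5) with the 3.0 threshold amounts to a one-liner; a minimal sketch, assuming the natural logarithm (consistent with clean-model AI values near 2.3 for SIT around 0.9):

```python
import math

def anomaly_index(s_it):
    """Eq. (5): AI = -log(1 - S_IT); larger values suggest a backdoor."""
    return -math.log(1 - s_it)

def is_backdoored(s_it, threshold=3.0):
    """Flag the model when the anomaly index exceeds the threshold."""
    return anomaly_index(s_it) > threshold

# A backdoored model builds a strong trigger/text association (S_IT > 0.95),
# while clean models settle at weaker similarities (values illustrative).
assert is_backdoored(0.96)       # -log(0.04) ≈ 3.22 > 3
assert not is_backdoored(0.90)   # -log(0.10) ≈ 2.30 < 3
```

Because SIT saturates between 0.8 and 1, the logarithm stretches exactly the region where backdoored and clean models differ, which is why a single threshold separates them well.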
SEER demonstrates success in detecting most backdoored models, achieving an impressive detection rate of over 92% against four different backdoor attacks. Furthermore, we present the average AI values for both clean models and their backdoored counterparts, along with the target texts injected within the backdoored models found by SEER. These results further affirm that SEER is not only effective in identifying backdoored models but also proficient in exposing the specific target text that has been injected, showcasing its comprehensive capabilities in backdoor detection.

# of Targets | AI   | Target Texts                                               | Top Text Found
1            | 4.2  | basketball                                                 | basketball
2            | 4.48 | basketball, bananas                                        | basketball
4            | 4.83 | basketball, bananas, tent, pier                            | bananas
8            | 4.0  | basketball, banana, tent, pier, stove, menu, monitor, harp | tent

Table 4: Anomaly Index (AI) on a ViT-B/16 backdoored model when having multiple target triggers and texts.

Impact of Trigger Size. Next, we run SEER on the backdoored ViT-B/16 model with "basketball" as the target text and a white square image trigger of sizes from 4 × 4 to 32 × 32 pixels; the results are shown in Tab. 2. SEER can detect the backdoored model in all cases regardless of trigger size. SEER can also successfully expose the target text "basketball" except for the trigger size of 24 × 24, where "nba" ranks first while "basketball" ranks second. This is still a good result because "nba" and "basketball" are highly correlated. We also show the injected triggers of different sizes and the corresponding generated triggers in the appendix. By jointly searching for the backdoor in the image and text spaces, SEER can successfully reverse-engineer the trigger.

Impact of Target Text. When injecting the backdoor into the model, the target text can be not only a popular keyword but also a symbol, an unusual keyword, or a multi-word phrase. Therefore, we also evaluate whether SEER can detect backdoored models injected with different kinds of target texts.
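As a concrete illustration of the attack setup being evaluated, a solid square trigger like the ones in Fig. 5 can be stamped onto an image corner with a few lines of numpy; the white pixel value and corner handling here are our own assumptions for this sketch, not the exact triggers used in the experiments.

```python
import numpy as np

def stamp_trigger(image, size=16, corner="bottom_right", value=255):
    """Stamp a solid square trigger of the given size onto one corner of an HWC image."""
    img = image.copy()
    h, w = img.shape[:2]
    if corner == "bottom_right":
        img[h - size:, w - size:] = value
    elif corner == "upper_left":
        img[:size, :size] = value
    elif corner == "upper_right":
        img[:size, w - size:] = value
    elif corner == "bottom_left":
        img[h - size:, :size] = value
    return img

clean = np.zeros((224, 224, 3), dtype=np.uint8)
poisoned = stamp_trigger(clean, size=16)
```

Varying `size` from 4 to 32 mimics the range of trigger sizes evaluated in Tab. 2, and the four corner options mirror the multi-trigger placements described later.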
We conduct experiments on the ViT-B/32 model with more complex target texts, such as the percentage sign "%", the sentiment word "enthusiastic", and the multi-word target phrases "stop sign" and "on a table", with a trigger at the bottom right as shown in Fig. 5(a). Tab. 3 shows that SEER successfully detected all backdoored models with the AI threshold of 3 and successfully revealed the target texts. In particular, for the multi-word target phrases, it identified the most representative words in the phrases, i.e., "stop" and "table", respectively. These results indicate that SEER is robust in backdoor detection even under attacks with complex or varied target texts.

Defense Mechanism              | NC  | TABOR | ABS | Fine-Tuning | Fine-Pruning | NAD    | SEER
Target Class Independency      | ✗   | ✗     | ✗   | ✓           | ✓            | ✓      | ✓
Applicability to Multimodality | ✗   | ✗     | ✗   | ✓           | ✓            | ✓      | ✓
Computational Efficiency       | Low | Low   | Low | Medium      | Medium       | Medium | High
Scalability to Num of Classes  | Low | Low   | Low | High        | High         | High   | High
Detection Effectiveness        | ✗   | ✗     | ✗   | ✗           | ✗            | ✗      | ✓

Table 5: Comparison of existing defense models and our method for vision-language models.

Detect Multiple Target Texts with Different Triggers. In a giant multi-modal model, the attacker can inject multiple target texts and triggers simultaneously. We therefore consider a scenario where multiple independent backdoors targeting distinct texts are inserted into a single model and evaluate whether SEER can detect the backdoored model. We conduct experiments on the ViT-B/16 model with different numbers of target texts. In particular, we select "basketball, banana, tent, pier, stove, menu, monitor, harp" as the target texts and use squares with different colors and locations as the corresponding triggers. More specifically, we inject one trigger
at the bottom right, two triggers at the bottom right and upper left, four triggers at the four corners, and eight triggers as shown in Fig. 5(e). Tab. 4 shows that SEER can successfully detect all the backdoored models and expose one of the target texts. We also found that when there are more triggers/target texts in a backdoored model, it is usually easier to search for the backdoor, because there are more directions to converge to in the joint feature space.

Attack     | Model Architecture | Clean (ACC / ASR) | Backdoored (ACC / ASR) | Fine-Tuning (ACC / ASR) | Fine-Pruning (ACC / ASR) | NAD (ACC / ASR)
BadNet     | RN101              | 77.35 / 2.48      | 75.09 / 96.84          | 76.51 / 95.94           | 71.56 / 40.58            | 74.02 / 71.21
BadNet     | ViT-B16            | 61.94 / 0.1       | 59.30 / 96.08          | 58.64 / 99.98           | 54.32 / 97.84            | 59.71 / 90.40
Blended    | ViT-B32            | 83.32 / 0.99      | 84.54 / 96.82          | 87.10 / 95.88           | 80.80 / 94.88            | 80.22 / 91.71
Dynamic    | RN50X16            | 84.38 / 0.14      | 85.50 / 98.89          | 85.92 / 98.69           | 81.20 / 82.14            | 83.02 / 81.92
BadEncoder | RN50               | 94.84 / 9.89      | 92.79 / 99.83          | 95.56 / 85.36           | 92.91 / 99.72            | 93.99 / 35.43

Table 6: ACC and ASR (%) of clean, backdoored, and mitigated vision-language models using existing defense methods.

Comparison with Other Defense Methods. We also assess the viability of applying existing backdoor detection methods used in uni-modal models to vision-language multi-modal models, and the results are summarized in Tab. 5. Methods such as Neural Cleanse (Wang et al. 2019) and TABOR (Guo et al. 2019), which reverse-engineer the smallest trigger for each label, and ABS (Liu et al. 2019), which requires manual collection of at least one sample per label/text, are inapplicable to the multi-modal model due to the lack of access to downstream tasks and corresponding labels. Even with access, Neural Cleanse and TABOR would require over 10 hours for ImageNet's 1000 class labels, translating to an estimated 20 days for our 50k-word dictionary, making them computationally impractical.
Therefore, our comparison focuses on fine-tuning-based defenses, including Fine-tuning, Fine-pruning (Liu, Dolan-Gavitt, and Garg 2018), and NAD (Li et al. 2021), which are extendable to multi-modal models. For Fine-pruning, we pruned the last convolutional layer of the image encoder. The pruning ratio was set to a value (i.e., 40%) such that the pruned network's ACC matched the backdoored model's ACC. For NAD, we followed their implementation on GitHub. As shown in Tab. 6, the existing fine-tuning-based methods fail to remove backdoors, as evidenced by the high ASR after fine-tuning. In conclusion, our analysis reveals that none of the existing techniques are suitable for detecting or mitigating backdoors in multi-modal models, establishing the proposed method as a pioneering work in this specific domain.

Computational Efficiency. To assess the efficiency of SEER in backdoor detection, we execute the algorithm on an Nvidia P100 GPU equipped with 16GB of memory. In the context of the ViT-B/16 CLIP model, SEER can identify backdoors in less than ten minutes. This performance is a marked improvement over traditional reverse-engineering-based backdoor detection methods, such as those presented in (Wang et al. 2019; Chen et al. 2019b). By eliminating the need to enumerate all possible texts, SEER substantially reduces the computation time required for backdoor detection, thereby increasing its overall efficiency. Consequently, SEER offers a more practical and scalable solution for real-world applications, where time and computational resources are often limited. Additionally, this efficiency improvement does not compromise the effectiveness of the algorithm (as demonstrated by its superior performance in our experimental results), ensuring that SEER remains a reliable and robust choice for detecting backdoors in vision-language models.

Conclusion

Due to its multi-modality nature, backdoor detection for vision-language models raises a great challenge.
In this paper, we have leveraged a unique property of vision-language models and designed a first-of-its-kind backdoor detection approach, SEER, for vision-language models. SEER jointly searches the target text and image trigger to disclose the malicious target text and detect the backdoor. Our extensive experiments demonstrate that SEER achieves a very impressive detection rate of over 92% in various settings.

Acknowledgments

This work was supported in part by the National Science Foundation under Grant OAC-2320999, CNS-2120279, CNS-2153358 and DUE-2153358, NSA under Grants H98230-21-1-0165 and H98230-23-1-0173, the Air Force Research Lab under Grant FA8750-19-3-1000, DoD Center of Excellence in AI and Machine Learning (CoE-AIML) under Contract Number W911NF-20-2-0277, and the Commonwealth Cyber Initiative.

References

Bottou, L. 2012. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, 421–436. Springer.
Carlini, N.; and Terzis, A. 2022. Poisoning and Backdooring Contrastive Learning. In International Conference on Learning Representations.
Chen, B.; Carvalho, W.; Baracaldo, N.; Ludwig, H.; Edwards, B.; Lee, T.; Molloy, I.; and Srivastava, B. 2019a. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering. The Thirty-Third AAAI Conference on Artificial Intelligence Safety Workshop.
Chen, H.; Fu, C.; Zhao, J.; and Koushanfar, F. 2019b. DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks. In IJCAI.
Chen, X.; Liu, C.; Li, B.; Lu, K.; and Song, D. 2017. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. CoRR, abs/1712.05526.
Coates, A.; Ng, A.; and Lee, H. 2011. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 215–223. JMLR Workshop and Conference Proceedings.
Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
Gao, Y.; Xu, C.; Wang, D.; Chen, S.; Ranasinghe, D. C.; and Nepal, S. 2019. STRIP: A defence against trojan attacks on deep neural networks. In Proceedings of the 35th Annual Computer Security Applications Conference.
Gu, T.; Liu, K.; Dolan-Gavitt, B.; and Garg, S. 2019. BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. IEEE Access, 47230–47244.
Guo, W.; Wang, L.; Xing, X.; Du, M.; and Song, D. 2019. TABOR: A highly accurate approach to inspecting and restoring trojan backdoors in AI systems. arXiv preprint arXiv:1908.01763.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 770–778.
Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, 4904–4916. PMLR.
Jia, J.; Liu, Y.; and Gong, N. Z. 2022. BadEncoder: Backdoor attacks to pre-trained encoders in self-supervised learning. In 2022 IEEE Symposium on Security and Privacy (SP), 2043–2059. IEEE.
Koh, J. Y. 2018. ModelZoo: Discover open source deep learning code and pretrained models. http://www.modelzoo.co.
Li, Y.; Lyu, X.; Koren, N.; Lyu, L.; Li, B.; and Ma, X. 2021. Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks. In International Conference on Learning Representations.
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV), 740–755. Springer.
Liu, K.; Dolan-Gavitt, B.; and Garg, S. 2018. Fine-pruning: Defending against backdooring attacks on deep neural networks. In International Symposium on Research in Attacks, Intrusions, and Defenses, 273–294. Springer.
Liu, Y.; Lee, W.-C.; Tao, G.; Ma, S.; Aafer, Y.; and Zhang, X. 2019. ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security.
Liu, Y.; Ma, S.; Aafer, Y.; Lee, W.-C.; Zhai, J.; Wang, W.; and Zhang, X. 2018. Trojaning Attack on Neural Networks. In Proceedings of the 25th Annual Network and Distributed System Security Symposium (NDSS).
Liu, Y.; Ma, X.; Bailey, J.; and Lu, F. 2020. Reflection backdoor: A natural backdoor attack on deep neural networks. In European Conference on Computer Vision, 182–199. Springer.
Kolouri, S.; Saha, A.; Pirsiavash, H.; and Hoffmann, H. 2020. Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
OpenAI. 2021. https://github.com/openai/CLIP.
Parkhi, O. M.; Vedaldi, A.; Zisserman, A.; and Jawahar, C. 2012. Cats and dogs. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, 3498–3505. IEEE.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020.
Saha, A.; Subramanya, A.; and Pirsiavash, H. 2020. Hidden Trigger Backdoor Attacks. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), 11957–11965.
Sennrich, R.; Haddow, B.; and Birch, A. 2016.
Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Tan, M.; and Le, Q. 2019. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning (ICML), 6105–6114.
Tran, B.; Li, J.; and Madry, A. 2018. Spectral signatures in backdoor attacks. Advances in Neural Information Processing Systems, 31.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, 6000–6010.
Wang, B.; Yao, Y.; Shan, S.; Li, H.; Viswanath, B.; Zheng, H.; and Zhao, B. Y. 2019. Neural Cleanse: Identifying and mitigating backdoor attacks in neural networks. In Proceedings of 2019 IEEE Symposium on Security and Privacy (SP), 707–723.
Xu, X.; Wang, Q.; Li, H.; Borisov, N.; Gunter, C. A.; and Li, B. 2021. Detecting AI trojans using meta neural analysis. In 2021 IEEE Symposium on Security and Privacy (SP).
Young, P.; Lai, A.; Hodosh, M.; and Hockenmaier, J. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2: 67–78.
Zhu, L.; Ning, R.; Wang, C.; Xin, C.; and Wu, H. 2020. GangSweep: Sweep out Neural Backdoors by GAN. In Proceedings of the ACM International Conference on Multimedia (ACM-MM), 3173–3181.
Zhu, L.; Ning, R.; Xin, C.; Wang, C.; and Wu, H. 2021. CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 16453–16462.
Text Image Inpainting via Global Structure-Guided Diffusion Models

Shipeng Zhu1,2, Pengfei Fang1,2, Chenjie Zhu1,2, Zuoyan Zhao1,2, Qiang Xu1,2, Hui Xue1,2*
1School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
2Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
{shipengzhu, fangpengfei, chenjiezhu, zuoyanzhao, 220232307, hxue}@seu.edu.cn

Abstract

Real-world text can be damaged by corrosion issues caused by environmental or human factors, which hinder the preservation of the complete styles of texts, e.g., texture and structure. These corrosion issues, such as graffiti signs and incomplete signatures, bring difficulties in understanding the texts, thereby posing significant challenges to downstream applications, e.g., scene text recognition and signature identification. Notably, current inpainting techniques often fail to adequately address this problem and have difficulties restoring accurate text images along with reasonable and consistent styles. Formulating this as an open problem of text image inpainting, this paper aims to build a benchmark to facilitate its study. In doing so, we establish two specific text inpainting datasets which contain scene text images and handwritten text images, respectively. Each of them includes images revamped by real-life and synthetic datasets, featuring pairs of original images, corrupted images, and other assistant information. On top of the datasets, we further develop a novel neural framework, the Global Structure-guided Diffusion Model (GSDM), as a potential solution. Leveraging the global structure of the text as a prior, the proposed GSDM develops an efficient diffusion model to recover clean texts. The efficacy of our approach is demonstrated by a thorough empirical study, including a substantial boost in both recognition accuracy and image quality.
These findings not only highlight the effectiveness of our method but also underscore its potential to enhance the broader field of text image understanding and processing. Code and datasets are available at: https://github.com/blackprotoss/GSDM.

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

Text in the real world serves as a visual embodiment of human language (Long, He, and Yao 2021). It plays a vital role in conveying vast linguistic information and facilitating communication and collaboration in daily life. However, the integrity of text with specific styles, e.g., structure, texture, and background clutter, can be compromised by factors such as environmental corrosion and human interference (Krishnan et al. 2023). As a consequence, these resultant images, as shown in Figure 1(a), are inherently degraded, leading to a performance drop in text reading and understanding systems. In other words, tasks such as scene text editing (Qu et al. 2023) and signature verification (Lai et al. 2021) are inevitably affected by the integrity of text images.

Figure 1: The illustration of corrosion forms in real-life scenarios and the challenges of text image inpainting: (a) corrosion in real-life scenarios (quick draw, convex hull, irregular region), (b) ambiguity in corrupted images, (c) massive styles of text images.

Aiming to provide visually plausible restoration for missing regions in corrupted images (Bertalmio et al. 2003; Xiang et al. 2023), image inpainting technologies have made considerable progress (Zhao et al. 2022; Lugmayr et al. 2022; Ji et al. 2023; Yu et al. 2023) in recent years. However, some inherent challenges restrict these general image inpainting methods from restoring corrupted text images. Firstly, the corrupted regions of text images are unknown.
That is, the corrosive factors, rooted in real-life scenarios, mean the location mask cannot be provided. Consequently, prevailing non-blind inpainting methods cannot handle this entire-image reconstruction task. Secondly, the corrupted regions induce content ambiguity in the text image. It is known that natural objects can be recognized based on their iconic local features. For example, a rabbit can be easily recognized by its long ears, despite corrosion over most of the body parts (shown in Figure 1(b)). However, the corrosion disrupts the integrity of the global structure in the text image, including its shape and profile, making it challenging to reconstruct the correct characters/words from the remaining strokes. Lastly, text images contain massive style variations. Text images exhibit high inter- and intra-class variability in style (Krishnan et al. 2023), with variations spanning background properties, typography, etc. For instance, two characters of the same class may appear differently, even within the same image (see the red rectangles in Figure 1(c)). This reality places substantial demands on the generalization ability of a machine to repair corrupted text images.

Figure 2: The illustration of inpainted images with recognition results based on different methods. Rows (i) to (vi) denote corrupted images, DDIM, CoPaint, TransCNN-HAE, GSDM, and GT, respectively. Red characters indicate errors.

This paper investigates this challenging task, named text image inpainting, and addresses it by formally formulating the task and establishing a benchmark. The closest study to our work is (Sun et al.
2022), which introduces a scene text image dataset for foreground text completion. However, it only includes one corrosion form for synthetic images, thus failing to reflect diverse real-world scenes effectively. Recognizing these gaps, our study takes a deeper dive, with a focus on restoring real corrupted text images, enabling the restoration of style and detail consistency in corrupted text images, as illustrated in Figure 2.

Aligning with the paradigm used in tailored text image tasks (Wu et al. 2019), we gather real-life and synthetic text images to produce two tailored datasets: the Scene Text Image Inpainting (TII-ST) dataset and the Handwritten Text Image Inpainting (TII-HT) dataset. In these datasets, we design three typical corrosion forms, i.e., convex hull, irregular region, and quick draw, affecting both scene text images and handwritten text images. With these enriched datasets, we can evaluate the image quality produced by various inpainting methods and assess their impact on downstream applications.

Along with the datasets, we further propose a simple yet effective neural network, dubbed the Global Structure-guided Diffusion Model (GSDM), as a baseline for the text image inpainting task. The proposed GSDM leverages the structure of the text as a prior, guiding the diffusion model in realizing image restoration. To this end, a Structure Prediction Module (SPM) is first proposed to generate a complete segmentation map that offers guidance regarding the content and positioning of the text. The subsequent diffusion-based Reconstruction Module (RM), which receives the predicted segmentation mask and corrupted image as input, is developed to efficiently generate intact text images with coherent styles. As shown in Figure 2, our proposed GSDM outperforms comparison methods and generates plausible images.
In a nutshell, our contributions are as follows:
• We construct two datasets, TII-ST and TII-HT, which facilitate the study of text image inpainting. To our knowledge, this is the first initiative to fully restore all styles of corrupted text images, thereby defining a challenging yet promising task.
• We propose a versatile method, the Global Structure-guided Diffusion Model (GSDM), as a baseline for the task. This model uses the guidance of the complete global structure, predicted from the remaining regions of corrupted text images, to generate complete text images coherent with the corrupted ones.
• Comparisons with relevant approaches on the TII-ST and TII-HT datasets demonstrate that our GSDM outperforms these approaches in enhancing downstream applications and improving image quality. Substantial ablation studies further underscore the necessity of different components in our model. The realistic benchmark and strong performance of our work provide favorable templates for future research.

Related Work

Image Inpainting

Image inpainting has long posed a challenge within the computer vision community, aiming for the coherent restoration of corrupted images (Shah, Gautam, and Singh 2022; Xiang et al. 2023). In earlier developments, the majority of approaches have grounded their foundations in autoencoders (Yu et al. 2022), auto-regressive transformers (Wan et al. 2021), and GAN-based paradigms (Pan et al. 2021). Notably, diffusion-based techniques (Lugmayr et al. 2022; Zhang et al. 2023; Yu et al. 2023) have recently gained attention due to their exceptional capability in image generation (Ramesh et al. 2022). Within this context, CoPaint (Zhang et al. 2023) presents a Bayesian framework for holistic image modification, achieving state-of-the-art performance in natural image inpainting. Yet, these methods necessitate explicit guidance from the corrupted mask, which hinders their adaptability in real-world contexts.

Moreover, there have been endeavors centered on blind inpainting, which eschew reliance on provided corrupted masks, addressing challenges through image-to-image paradigms (Cai et al. 2017; Zhang et al. 2017; Wang et al. 2020c). For instance, TransCNN-HAE (Zhao et al. 2022) innovatively employs a hybrid Transformer-CNN auto-encoder, optimizing the capability to excavate both long- and short-range contexts. Concurrently, some diffusion-oriented models (Kawar et al. 2022; Fei et al. 2023) with a dedication to unified image restoration have showcased capabilities in blind image inpainting. However, all these methods are primarily suitable for natural images, thus making it difficult to handle text images, whose semantics are sensitive to the text structure.

Zooming into tailored character inpainting, notable progress (Chang et al. 2018) has been made. Recently, Wang
Moreover, there have been endeavors centered on blind inpainting, which eschew reliance on provided corrupted masks, addressing challenges through image-to-image paradigms (Cai et al. 2017; Zhang et al. 2017; Wang et al. 2020c). For instance, TransCNN-HAE (Zhao et al. 2022) innovatively employs a hybrid Transformer-CNN auto-encoder, optimizing the capability to excavate both long and short range contexts. Concurrently, some diffusion-oriented models (Kawar et al. 2022; Fei et al. 2023) with a dedication to unified image restoration have showcased capabilities in blind image inpainting. However, all these methods are primarily suitable for natural images, thus making it difficult to handle text images, whose semantics are sensitive to the text structure. Zooming into tailored character inpainting, notable progress (Chang et al. 2018) has been made. Recently, Wang The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 7776 Dataset Data Type Image Number Corrosion Form Corrosion Ratio Range Evaluation Protocal TII-ST Synthesis Image + Real Image 86,476 CH + IR + QD 5%–60% Accuracy + Quality TII-HT Real Image 40,078 CH + IR + QD 5%–60% Accuracy + Quality Table 1: The data statistics of two constructed datasets, TII-ST and TII-HT. The “CH”, “IR”, and “QD” denote convex hull, irregular region, and quick draw, respectively. The “Accuracy” denotes the word-level recognition accuracy. et al. leverage the semantic acuity of BERT (Devlin et al. 2018), reconstructing the corrupted strokes inherent in Chinese characters (Wang, Ouyang, and Chen 2021). Moreover, TSINIT (Sun et al. 2022) proposes a two-stage encoderdecoder blueprint, generating intact binary foreground texts from incomplete scene text images. Nonetheless, it is worth noting that such methods merely focus on the structure of text images. They overlook the diverse styles inherent in text images, which impacts human perception and narrows downstream applications. 
Text Image Recognition

Text image recognition serves as a foundational element for complicated text-understanding tasks (He et al. 2023) and for the assessment of image processing endeavors (Wang et al. 2020b; Wu et al. 2019). Among them, Scene Text Recognition (STR) and Handwritten Text Recognition (HTR) emerge as the dominant research areas (Zhu et al. 2023b). Scene text images showcase a myriad of text styles, both in texture and layout. Pioneering this field, CRNN (Shi, Bai, and Yao 2016) leverages sequential information in scene text images, achieving proficient recognition of variable-length images. Successor models like ASTER (Shi et al. 2018) and MORAN (Luo, Jin, and Sun 2019) further enhance recognition performance through diverse visual rectification techniques. More recently, language-aware approaches (Fang et al. 2021; Bautista and Atienza 2022) harness the predictive capabilities of language models (Devlin et al. 2018; Yang et al. 2019) to map word probabilities, resulting in impressive recognition outcomes. Handwritten text images, in turn, exhibit diverse calligraphic styles, such as joined-up and illegible handwriting. In recent advancements, numerous methods (Wang et al. 2020a; Singh and Karayev 2021; Li et al. 2023) tap into attention mechanisms to perceive structural correlations, thereby attaining promising performance.

Benchmark Dataset

Dataset Description

Text image inpainting focuses on reconstructing corrupted images, which have been subjected to a variety of real-world disturbances and lack corresponding pristine versions. In this paper, we introduce two novel datasets, TII-ST and TII-HT, tailored for this task. Given the vast style variation in scene text images (Krishnan et al. 2023), we construct the TII-ST dataset using a combination of synthesized and real images.
First, we choose to create our own synthetic images instead of utilizing an existing synthetic dataset (Gupta, Vedaldi, and Zisserman 2016), to provide rich auxiliary information, of which segmentation masks are introduced to our basic TII-ST. Specifically, following the method in (Jaderberg et al. 2014), we synthesize 80,000 scene text images. Next, we supplement the scene text image dataset with 6,476 real scene text images collected from various sources, including ICDAR 2013 (Karatzas et al. 2013), ICDAR 2015 (Karatzas et al. 2015), and ICDAR 2017 (Nayef et al. 2017). For handwritten text, the TII-HT dataset comprises 40,078 images from the IAM dataset (Marti and Bunke 2002). The text segmentation mask for each image can be acquired using a predetermined threshold.

Figure 3: Some training examples in the two datasets. The images of the first three rows are from TII-ST and the images of the last three rows are from TII-HT. Columns: (a) CI (Corrupted Image), (b) CM (Corrupted Mask), (c) II (Intact Image), (d) CSM (Corrupted Segmentation Mask), (e) ISM (Intact Segmentation Mask).

To accurately simulate real-life corrosion (see an illustration in Figure 1), we introduce distinct corrosion forms, i.e., convex hull, irregular region, and quick draw. Notably, the shape of each form can be governed by specific parameters. By adopting these flexible corrosion forms, we aim to encompass a broad spectrum of potential real-world image corrosion scenarios, thereby bolstering the versatility and robustness of the text image inpainting task. Utilizing the images and corrosion forms, we create tuples for each pristine image in both datasets. In the training set, each tuple contains a corrupted image, its corrupted mask, the original intact image, a corrupted segmentation mask, and an intact segmentation mask. For the testing dataset, we furnish data pairs comprising only the corrupted and intact images.
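For illustration only, the following numpy sketch generates a stroke-style corrosion mask loosely in the spirit of the "quick draw" form and applies it to an image; every parameter (stroke count, step range, brush width, fill value) is an assumption of ours, not the datasets' actual generation recipe.

```python
import numpy as np

def quick_draw_mask(h=64, w=256, strokes=3, steps=40, brush=4, seed=0):
    """Generate a binary corrosion mask by drawing random thick polyline strokes,
    loosely imitating the 'quick draw' corrosion form (parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((h, w), dtype=np.uint8)
    for _ in range(strokes):
        y, x = int(rng.integers(0, h)), int(rng.integers(0, w))
        for _ in range(steps):
            # random walk of the brush center, clipped to the canvas
            y = int(np.clip(y + rng.integers(-3, 4), 0, h - 1))
            x = int(np.clip(x + rng.integers(-3, 4), 0, w - 1))
            mask[max(0, y - brush):y + brush, max(0, x - brush):x + brush] = 1
    return mask

def corrupt(image, mask, fill=255):
    """Overwrite the masked region of an intact image to obtain its corrupted pair."""
    out = image.copy()
    out[mask.astype(bool)] = fill
    return out

mask = quick_draw_mask()
ratio = mask.mean()  # fraction of corroded pixels
```

A tuple for the training set could then pair the intact image, `mask`, and `corrupt(image, mask)`, with the corrosion ratio controllable through the stroke parameters.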
All these images are resized to 64 × 256 to ensure consistent evaluation. Sample images from both datasets are depicted in Figure 3. Additionally, Table 1 intuitively presents basic statistics of the proposed datasets.

Evaluation Protocol

For fairness in evaluation, we divide our proposed datasets into distinct training and testing sets. In the TII-ST dataset, we follow the strategy outlined in (Zhu et al. 2023b). Specifically, our training set consists of 80,000 synthesized images and 4,877 real images, while the testing set includes 1,599 real images. For the TII-HT dataset, the training set comprises 38,578 images sourced from IAM, while the testing set contains 1,600 images.

Figure 4: The overall architecture of our proposed Global Structure-guided Diffusion Model (GSDM). It consists of two main modules: the Structure Prediction Module (SPM) and the Reconstruction Module (RM).

The evaluation of inpainting results on these datasets takes into account both the impact on downstream tasks and the overall image quality. We use text recognition to assess improvements to downstream tasks and employ two established metrics, Peak Signal-to-Noise Ratio (PSNR) (dB) and Structural SIMilarity (SSIM), to evaluate image quality. Recognizing the profound influence of text image quality on reading and understanding systems (Wang et al. 2020b), we opt for text recognition as a representative downstream task to evaluate the effectiveness of inpainting. For scene text images, we engage three recognizers, namely CRNN (Shi, Bai, and Yao 2016), ASTER (Shi et al. 2018), and MORAN (Luo, Jin, and Sun 2019). These recognizers are well-regarded in the field of scene text image processing (Wang et al.
2020b) and are used to evaluate word-level recognition accuracy (%). For handwritten text images, we turn to two user-friendly, open-source approaches: DAN (Wang et al. 2020a) and two versions of TrOCR (Li et al. 2023), namely TrOCR-Base and TrOCR-Large. These methods release official weights and are evaluated with the same word-level accuracy metric as the scene text recognizers.

In conclusion, our proposed datasets have three characteristics: (1) They cater to the challenges of inpainting both scene text and handwritten text. (2) Rather than solely relying on synthetic images, we collect images from real-life scenarios for testing, accompanied by the design of realistic and varied forms of corrosion. (3) Beyond the general inpainting task, we evaluate the text image inpainting task via both improvement on downstream tasks and image quality.

Methodology

This section first provides an overview of the proposed Global Structure-guided Diffusion Model (GSDM). Subsequently, we delve into a detailed explanation of the two modules within GSDM: the Structure Prediction Module (SPM) and the Reconstruction Module (RM).

Overall Architecture

The overall architecture of the proposed GSDM is depicted in Figure 4. For an input corrupted text image $c \in \mathbb{R}^{h \times w \times c}$, the SPM first predicts the complete global structure $\tilde{s} \in \mathbb{R}^{h \times w}$. Subsequently, the diffusion-based RM, taking $c$ and $\tilde{s}$ as conditions, generates the intact text image $\tilde{x} \in \mathbb{R}^{h \times w \times c}$.

Structure Prediction Module

In practice, the content uncertainties in text images are dominated by the global structures, specifically the segmentation mask of the foreground (Zhu et al. 2023a). Consequently, our aim is to obtain a global structure that closely resembles that of the original intact image, thereby guiding the subsequent diffusion model in reconstructing corrupted images.
To address this challenge, we propose the Structure Prediction Module (SPM), which utilizes a single U-Net to predict the correct foreground segmentation masks of intact images from the corrupted ones. As depicted in Figure 4(b), this compact U-Net (Ronneberger, Fischer, and Brox 2015), denoted $g_\theta$, has three pairs of symmetrical residual blocks for predicting the complete segmentation map. Notably, to increase the receptive field and enhance the perception of surrounding corrupted regions, we incorporate dilated convolution (Yu, Koltun, and Funkhouser 2017) into the network. The prediction process can be formulated as $\tilde{s} = g_\theta(c)$.

Given the inherent difficulty of one-stage segmentation prediction, we employ multiple loss functions to compare the actual segmentation map $s$ and the predicted one $\tilde{s}$. Specifically, we implement a pixel-level Mean Absolute Error (MAE) loss $\mathcal{L}_{pix}$ and a binary segmentation loss $\mathcal{L}_{seg}$ to ensure accurate 2-D segmentation mask generation:

$$\mathcal{L}_{pix} = \| s - \tilde{s} \|_1, \tag{1}$$

$$\mathcal{L}_{seg} = -\frac{1}{N} \sum_{i=1}^{N} \big( 2 \cdot s_i \log \tilde{s}_i + (1 - s_i) \log(1 - \tilde{s}_i) \big), \tag{2}$$

where $N$ represents the total number of pixels in an image. In addition, we formulate a character perceptual loss $\mathcal{L}_{cha}$ and a style loss $\mathcal{L}_{sty}$ to maintain semantic consistency. We utilize the preamble perceptual layers $\phi_{Rec}$ of a pretrained text recognizer (Shi, Bai, and Yao 2016) to obtain the feature maps, which are then constrained by the MAE loss. Unlike previous work (Wang et al. 2018), this operation can effectively capture the semantics of text within the image. The two loss functions are defined as follows:

$$\mathcal{L}_{cha} = \| \phi_{Rec}(s) - \phi_{Rec}(\tilde{s}) \|_1, \tag{3}$$

$$\mathcal{L}_{sty} = \| \mathrm{Gram}(s) - \mathrm{Gram}(\tilde{s}) \|_1, \tag{4}$$

where $\mathrm{Gram}$ represents the Gram matrix (Gatys, Ecker, and Bethge 2015). Therefore, the total optimization objective of SPM can be formulated as: $\mathcal{L}_{spm} = \lambda_1 \mathcal{L}_{pix} + \lambda_2 \mathcal{L}_{seg} + \lambda_3 \mathcal{L}_{cha} + \lambda_4 \mathcal{L}_{sty}.$
(5)

Reconstruction Module

Previous diffusion-based inpainting methods (Lugmayr et al. 2022; Ji et al. 2023; Fei et al. 2023) rely on a known mask of the corrupted regions. In contrast, our model leverages the predicted global structure and the corrupted image as conditions to generate an intact text image. Our diffusion model is implemented with a vanilla U-Net (Ronneberger, Fischer, and Brox 2015) with five pairs of symmetrical residual blocks (shown in Figure 4(c)).

Training Procedure

As evidenced in (Song, Meng, and Ermon 2020), the optimization objective of DDIM is equivalent to that of vanilla DDPM; hence, we adopt the training procedure of the latter. Given the intact text image $x_{gt}$ as $x_0$, we successively add Gaussian noise $\epsilon$ according to the time step $t$:

$$q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\, \sqrt{\alpha_t}\, x_{t-1},\, (1 - \alpha_t) I\big), \tag{6}$$

where $\alpha_t$ is a hyper-parameter between 0 and 1. With the reparameterization trick (Ho, Jain, and Abbeel 2020), the process can be expressed in a more general form:

$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon, \tag{7}$$

where $\epsilon \sim \mathcal{N}(0, I)$ and $\bar{\alpha}_t = \prod_{i=0}^{t} \alpha_i \in [0, 1]$. Following the noise-adding process, we adopt the methodology of DALL-E 2 (Ramesh et al. 2022; Xia et al. 2023), which predicts the target image rather than the noise, to improve performance (see the ablation study for details). Concretely, receiving the corrupted text image $c$ and the predicted segmentation mask $\tilde{s}$ as conditions, the denoising process can be formulated as:

$$p_{f_\theta}(x_{t-1} \mid x_t, c, \tilde{s}) = q\big(x_{t-1} \mid x_t, f_\theta(x_t, c, \tilde{s}, t)\big), \tag{8}$$

where the conditions $c$ and $\tilde{s}$ are concatenated with $x_t$ at each step. The total process is supervised by the MSE loss:

$$\mathcal{L}_{rm} = \| x_0 - f_\theta(x_t, c, \tilde{s}, t) \|_2^2. \tag{9}$$

Inference Procedure

The vanilla DDPM (Ho, Jain, and Abbeel 2020) is time-consuming due to the large number of sampling steps required to maintain high-quality generation.
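Before turning to inference, the noise-adding process above can be sanity-checked numerically: Eq. (7) collapses $t$ applications of Eq. (6) into a single reparameterized draw, and for unit-variance inputs the marginal variance of $x_t$ stays at 1, since $\bar{\alpha}_t$ only trades signal for noise. A small NumPy check with an arbitrary illustrative schedule (not the paper's actual schedule):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 10
alphas = np.linspace(0.99, 0.90, T)   # illustrative alpha_t schedule in (0, 1)
alpha_bar = np.cumprod(alphas)        # bar(alpha)_t = prod_i alpha_i, decreasing

n = 200_000
x0 = rng.standard_normal(n)           # stand-in for intact images x_0
eps = rng.standard_normal(n)          # the Gaussian noise of Eq. (7)

# closed form of Eq. (7) at the final step
xt = np.sqrt(alpha_bar[-1]) * x0 + np.sqrt(1.0 - alpha_bar[-1]) * eps

# Var(xt) = alpha_bar * Var(x0) + (1 - alpha_bar) = 1 for unit-variance x0
sample_var = float(xt.var())
```

As $\bar{\alpha}_t$ shrinks, $x_t$ carries less signal and more noise while keeping the same marginal scale, which is what allows a single network to denoise at every time step.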
During the inference procedure, we instead perform a non-Markov process (Song, Meng, and Ermon 2020) to accelerate inference and enhance efficiency. Assuming the original generation sequence is $L = [T, T-1, \ldots, 1]$, where the total number of generation steps is $T$, we construct a sub-sequence $\tau = [\tau_S, \tau_{S-1}, \ldots, 1]$ for inference, where the number of steps is $S \ll T$. The final reconstruction result $\tilde{x}$ is obtained after $S$ steps, where each step can be written as:

$$x_{\tau_{s-1}} = f_\theta\big(x_{\tau_s}, c, \tilde{s}, \tau_s\big). \tag{10}$$

Experiments

In this section, we conduct comparison experiments and ablation studies to demonstrate the superiority of our method. In addition, one potential downstream application is presented to show the significance of our work.

Comparison with State-of-the-Art Approaches

Scene Text Image. In this section, we benchmark our proposed approach against prominent existing methods. We first examine the vanilla conditional DDIM (Song, Meng, and Ermon 2020) and two notable inpainting techniques: TransCNN-HAE (Zhao et al. 2022) (abbr. TransCNN) and CoPaint (Zhang et al. 2023). Notably, as a non-blind diffusion-based model, CoPaint has access to the corrupted mask of each testing image. Additionally, we draw comparisons with the related technique TSINIT (Sun et al. 2022), which is designed for binary foreground text completion. As evident from Table 2, our proposed GSDM outperforms other methods in terms of both recognition accuracy and image quality. Notably, our method surpasses both blind and non-blind state-of-the-art methods, i.e., TransCNN and CoPaint. Furthermore, visualization examples from TII-ST can be seen in Figure 5(a). Two key observations can be made: (1) While some comparison methods may produce correct recognition results, the recovered images often lack style consistency. In contrast, our GSDM ensures not only correct recognition results but also a harmonious and visually appealing style.
(2) Ambiguous corrupted regions in images, such as the "e" in the word "office", tend to misguide comparison methods into generating incorrect characters. Conversely, our GSDM consistently generates words that are syntactically accurate.

Figure 5: The inpainting images with recognition results on TII-ST (ASTER) and TII-HT (TrOCR-L). Red characters indicate errors. Columns (i) to (vii) denote Corrupted Images, TSINIT/Wang et al., DDIM, CoPaint, TransCNN, GSDM, and GT, respectively.

Table 2: The comparison results on TII-ST. The "-" denotes unavailable. "*" and "†" denote the non-blind method and our reproduction, respectively.

Method          | CRNN  | ASTER | MORAN | PSNR  | SSIM
----------------|-------|-------|-------|-------|-------
Corrupted Image | 16.89 | 26.21 | 27.08 | 14.24 | 0.7018
TSINIT†         | 56.54 | 63.60 | 61.22 |   -   |   -
DDIM            | 50.59 | 60.73 | 58.53 | 16.79 | 0.7007
CoPaint*        | 56.91 | 66.23 | 65.73 | 26.21 | 0.8794
TransCNN-HAE    | 60.41 | 70.61 | 70.55 | 28.36 | 0.9164
GSDM (ours)     | 67.48 | 74.67 | 73.04 | 33.28 | 0.9596
Ground Truth    | 80.18 | 88.74 | 86.93 |   -   |   -

Handwritten Text Image. In evaluating handwritten text images, we retain the aforementioned comparison methods but substitute TSINIT with a character inpainting method (Wang et al. 2021), reproduced and modified for this task. As shown in Table 3, our method achieves strong performance in terms of both recognition accuracy and image quality. Figure 5(b) reveals that our approach is adept at delicately restoring the strokes.
In stark contrast, the comparison methods manifest varying levels of quality degradation, leading to unstable recognition accuracy. Notably, although CoPaint can generate visually appealing images, its recognition outcomes are often erroneous. This can be attributed to the fact that HTR methods are sensitive to structural completeness: even minor corrosion can mislead recognizers, resulting in incorrect outputs.

Table 3: The comparison results on TII-HT. The "-" denotes unavailable. "*" and "†" denote the non-blind method and our reproduction, respectively.

Method          | DAN   | TrOCR-B | TrOCR-L | PSNR  | SSIM
----------------|-------|---------|---------|-------|-------
Corrupted Image | 23.81 | 19.75   | 33.25   | 20.08 | 0.8916
Wang et al.†    | 21.63 | 11.00   | 18.50   | 16.89 | 0.8113
DDIM            | 0.25  | 10.75   | 44.13   | 9.32  | 0.2842
CoPaint*        | 42.12 | 26.06   | 45.50   | 24.52 | 0.9203
TransCNN-HAE    | 17.19 | 22.87   | 47.25   | 15.42 | 0.7675
GSDM (ours)     | 69.43 | 56.00   | 66.81   | 32.13 | 0.9718
Ground Truth    | 85.19 | 64.07   | 75.56   |   -   |   -

Ablation Study

Here we delve into the impact of various components within our proposed method. To maintain consistency, all experiments are conducted on the scene text image dataset, TII-ST. The reported recognition accuracy is the average of the results from CRNN, ASTER, and MORAN.

Variants of the GSDM. In this study, we investigate the significance of different components within our GSDM. To do this, we directly apply different components to reconstruct the corrupted text images. The results, presented in Table 4, reveal the following insights: (1) The standalone SPM yields trivial results, attributable to the inherent limitations of the traditional U-Net model in generating diverse text image styles. (2) GSDM surpasses a singular reconstruction module, underscoring the benefits of integrating a global structure. (3) Compared to traditional noise-predicting diffusion methods, predicting the image (denoted $x$) is significantly superior. A plausible reason is the robustness introduced by this paradigm during training.
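The sampling strategies compared next vary only the step sequence used at inference. Under the non-Markov strategy, the sampler of Eq. (10) reduces to a short loop over a decreasing sub-sequence; the sketch below illustrates this, where the evenly spaced choice of τ and the helper names are our assumptions, and `f_theta` stands in for the trained conditional U-Net:

```python
import numpy as np

def make_tau(T, S):
    """Evenly spaced decreasing sub-sequence tau = [tau_S, ..., 1],
    with S << T steps, for accelerated non-Markov inference."""
    return [int(round(v)) for v in np.linspace(T, 1, S)]

def run_inference(x_T, c, s_tilde, f_theta, T=1000, S=5):
    """Apply Eq. (10) S times: x_{tau_{s-1}} = f_theta(x_{tau_s}, c, s~, tau_s)."""
    x = x_T
    for tau in make_tau(T, S):
        x = f_theta(x, c, s_tilde, tau)
    return x
```

With a dummy denoiser that halves its input, five steps scale the input by 1/32 = 0.03125, which exercises the loop without a trained model; shrinking `S` is exactly what produces the large speedups reported for the non-Markov rows of the sampling-strategy ablation.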
Effect of Sampling Strategy in RM. We conduct experiments to demonstrate the efficacy of the chosen sampling strategy in the RM.

Table 4: The performance of different architectures. "Target" denotes the prediction target of the diffusion model.

Architecture | Target   | Accuracy | PSNR  | SSIM
-------------|----------|----------|-------|-------
SPM          | -        | 66.59    | 25.90 | 0.8722
RM           | ϵ        | 55.35    | 16.79 | 0.7007
RM           | x        | 69.40    | 32.59 | 0.9561
SPM+RM       | ϵ        | 56.10    | 16.72 | 0.7112
SPM+RM       | x (ours) | 71.73    | 33.28 | 0.9596

The results in Table 5 show that: (1) By adopting the non-Markov strategy inspired by DDIM (Song, Meng, and Ermon 2020), our proposed method significantly outperforms its Markov-strategy counterpart from the vanilla DDPM (Ho, Jain, and Abbeel 2020), in terms of both performance metrics and computational efficiency. (2) We observe a noticeable drop in performance as the number of inference steps increases in our approach. One possible explanation is that while our method generates high-quality images in a single step, repeated regeneration of the target image cumulatively introduces noise.

Table 5: Performance of various sampling strategies in our reconstruction module. "Step" indicates the number of sampling steps during inference.

Strategy   | Step | Accuracy | PSNR  | SSIM   | Time (s)
-----------|------|----------|-------|--------|---------
Markov     | 100  | 66.23    | 30.51 | 0.9386 | 1.720
Markov     | 500  | 68.35    | 32.05 | 0.9535 | 8.670
Markov     | 1000 | 68.21    | 32.28 | 0.9401 | 17.560
Non-Markov | 1    | 71.73    | 33.28 | 0.9596 | 0.034
Non-Markov | 5    | 69.38    | 33.03 | 0.9582 | 0.110
Non-Markov | 10   | 68.96    | 32.87 | 0.9575 | 0.250

Effect of the Training Objective. In this study, we investigate the training objective of the proposed GSDM. Note that the baseline is primarily optimized by $\mathcal{L}_{pix}$ and $\mathcal{L}_{rm}$. The results in Table 6 show that: (1) Even when constrained only by the basic loss functions, our baseline demonstrates superior recognition performance compared to the state-of-the-art blind method (Zhao et al. 2022) (69.80 vs. 67.19). (2) The recognition performance of GSDM is significantly improved by including more types of loss functions.
Notably, the synergistic optimization effect of $\mathcal{L}_{cha}$ and $\mathcal{L}_{sty}$, which aim to maintain semantic consistency, greatly outperforms each of them alone (see rows (iii)-(v) in Table 6). (3) Unlike the improvement in recognition performance, there is no significant change in image quality. This may be attributed to two factors. On one hand, our robust diffusion-based baseline is already capable of producing high-quality images. On the other hand, all these loss functions are exerted on the SPM, enabling the RM to generate more accurate text content.

Table 6: The performance of different training objectives. Every setting includes the baseline losses.

Setting | Baseline | Lseg | Lcha | Lsty | Accuracy | PSNR  | SSIM
--------|----------|------|------|------|----------|-------|-------
(i)     | ✓        |      |      |      | 69.80    | 33.34 | 0.9600
(ii)    | ✓        | ✓    |      |      | 70.00    | 33.32 | 0.9598
(iii)   | ✓        | ✓    | ✓    |      | 70.19    | 33.30 | 0.9598
(iv)    | ✓        | ✓    |      | ✓    | 70.09    | 33.31 | 0.9598
(v)     | ✓        | ✓    | ✓    | ✓    | 71.73    | 33.28 | 0.9596

Improvement on Scene Text Editing

To further evaluate the improvement brought by text inpainting to downstream applications, we conduct a preliminary experiment on scene text editing. This task involves replacing text within a scene image with new content while preserving the original style, as described in (Wu et al. 2019). Such an approach has proven invaluable in real-world applications, including augmented reality translation. We adopt the recent MOSTEL framework (Qu et al. 2023) to demonstrate the significance of our task.

Figure 6: Influence of inpainting methods on scene text image editing. "Source" denotes the source image; "(A12)" and "Hiragishi" denote the guidance texts. Columns (i) to (vi) denote Corrupted images, DDIM, CoPaint, TransCNN, GSDM, and GT, respectively.

As shown in Figure 6, edits made on corrupted images are often unsatisfactory. In addition, the subpar inpainting performance of several comparison methods introduces artifacts into the text editing process. Some methods, such as DDIM, generate images that MOSTEL struggles to model effectively.
In contrast, the repaired images from our proposed GSDM yield consistently high-quality editing results, comparable to those from unaltered images. This finding underscores the importance of prioritizing image quality in inpainting tasks.

Conclusion

Given the observation of corrosion issues in real-world text, we study a new task, text image inpainting, which aims to repair corrupted text images. To this end, we develop two datasets tailored for the target task, namely TII-ST and TII-HT. Concurrently, a novel approach, the Global Structure-guided Diffusion Model (GSDM), is proposed to fulfill text inpainting. Although text image inpainting is a challenging task, comprehensive experiments verify the effectiveness of our method, which enhances both image quality and the performance of the downstream recognition task. We believe the proposed task introduces a new branch of image inpainting and is of considerable significance for repairing text images in real-world scenarios. Future studies include improving the inpainting performance and exploring applications that benefit from the proposed task.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 62076062 and 62306070) and the Social Development Science and Technology Project of Jiangsu Province (No. BE2022811). Furthermore, the work was also supported by the Big Data Computing Center of Southeast University.

References

Bautista, D.; and Atienza, R. 2022. Scene Text Recognition with Permuted Autoregressive Sequence Models. In Proceedings of the European Conference on Computer Vision, 178–196.

Bertalmio, M.; Vese, L.; Sapiro, G.; and Osher, S. 2003. Simultaneous structure and texture image inpainting. IEEE Transactions on Image Processing, 12(8): 882–889.

Cai, N.; Su, Z.; Lin, Z.; Wang, H.; Yang, Z.; and Ling, B. W.-K. 2017.
Blind inpainting using the fully convolutional neural network. The Visual Computer, 33: 249–261. Chang, J.; Gu, Y.; Zhang, Y.; Wang, Y.-F.; and Innovation, C. 2018. Chinese Handwriting Imitation with Hierarchical Generative Adversarial Network. In Proceedings of the British Machine Vision Conference, 290. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Fang, S.; Xie, H.; Wang, Y.; Mao, Z.; and Zhang, Y. 2021. Read like humans: Autonomous, bidirectional and iterative language modeling for scene text recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7098–7107. Fei, B.; Lyu, Z.; Pan, L.; Zhang, J.; Yang, W.; Luo, T.; Zhang, B.; and Dai, B. 2023. Generative Diffusion Prior for Unified Image Restoration and Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9935–9946. Gatys, L. A.; Ecker, A. S.; and Bethge, M. 2015. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576. Gupta, A.; Vedaldi, A.; and Zisserman, A. 2016. Synthetic data for text localisation in natural images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2315–2324. He, J.; Wang, L.; Hu, Y.; Liu, N.; Liu, H.; Xu, X.; and Shen, H. T. 2023. ICL-D3IE: In-context learning with diverse demonstrations updating for document information extraction. arXiv preprint arXiv:2303.05063. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Proceedings of the Advances in Neural Information Processing Systems, 33: 6840–6851. Jaderberg, M.; Simonyan, K.; Vedaldi, A.; and Zisserman, A. 2014. Synthetic data and artificial neural networks for natural scene text recognition. arXiv preprint arXiv:1406.2227. Ji, J.; Zhang, G.; Wang, Z.; Hou, B.; Zhang, Z.; Price, B.; and Chang, S. 2023. 
Improving Diffusion Models for Scene Text Editing with Dual Encoders. arXiv preprint arXiv:2304.05568. Karatzas, D.; Gomez-Bigorda, L.; Nicolaou, A.; Ghosh, S.; Bagdanov, A.; Iwamura, M.; Matas, J.; Neumann, L.; Chandrasekhar, V. R.; Lu, S.; et al. 2015. ICDAR 2015 competition on robust reading. In Proceedings of the International Conference on Document Analysis and Recognition, 1156– 1160. Karatzas, D.; Shafait, F.; Uchida, S.; Iwamura, M.; i Bigorda, L. G.; Mestre, S. R.; Mas, J.; Mota, D. F.; Almazan, J. A.; and De Las Heras, L. P. 2013. ICDAR 2013 robust reading competition. In Proceedings of the International Conference on Document Analysis and Recognition, 1484– 1493. Kawar, B.; Elad, M.; Ermon, S.; and Song, J. 2022. Denoising diffusion restoration models. Advances in Neural Information Processing Systems, 35: 23593–23606. Krishnan, P.; Kovvuri, R.; Pang, G.; Vassilev, B.; and Hassner, T. 2023. TextStyleBrush: Transfer of Text Aesthetics From a Single Example. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7): 9122–9134. Lai, S.; Jin, L.; Zhu, Y.; Li, Z.; and Lin, L. 2021. SynSig2Vec: Forgery-free learning of dynamic signature representations by sigma lognormal-based synthesis and 1D CNN. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10): 6472–6485. Li, M.; Lv, T.; Chen, J.; Cui, L.; Lu, Y.; Florencio, D.; Zhang, C.; Li, Z.; and Wei, F. 2023. Trocr: Transformer-based optical character recognition with pre-trained models. In Proceedings of the AAAI Conference on Artificial Intelligence, 13094–13102. Long, S.; He, X.; and Yao, C. 2021. Scene text detection and recognition: The deep learning era. International Journal of Computer Vision, 129: 161–184. Lugmayr, A.; Danelljan, M.; Romero, A.; Yu, F.; Timofte, R.; and Van Gool, L. 2022. Repaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11461–11471. 
Luo, C.; Jin, L.; and Sun, Z. 2019. Moran: A multi-object rectified attention network for scene text recognition. Pattern Recognition, 90: 109–118.

Marti, U.-V.; and Bunke, H. 2002. The IAM-database: an English sentence database for offline handwriting recognition. International Journal on Document Analysis and Recognition, 5: 39–46.

Nayef, N.; Yin, F.; Bizid, I.; Choi, H.; Feng, Y.; Karatzas, D.; Luo, Z.; Pal, U.; Rigaud, C.; Chazalon, J.; et al. 2017. ICDAR2017 robust reading challenge on multi-lingual scene text detection and script identification - RRC-MLT. In Proceedings of the International Conference on Document Analysis and Recognition, 1454–1459.

Pan, X.; Zhan, X.; Dai, B.; Lin, D.; Loy, C. C.; and Luo, P. 2021. Exploiting deep generative prior for versatile image restoration and manipulation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11): 7474–7489.

Qu, Y.; Tan, Q.; Xie, H.; Xu, J.; Wang, Y.; and Zhang, Y. 2023. Exploring stroke-level modifications for scene text editing. In Proceedings of the AAAI Conference on Artificial Intelligence, 2119–2127.

Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125.

Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241.

Shah, R.; Gautam, A.; and Singh, S. K. 2022. Overview of image inpainting techniques: A survey. In 2022 IEEE Region 10 Symposium (TENSYMP), 1–6. IEEE.

Shi, B.; Bai, X.; and Yao, C. 2016. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(11): 2298–2304.
Shi, B.; Yang, M.; Wang, X.; Lyu, P.; Yao, C.; and Bai, X. 2018. ASTER: An attentional scene text recognizer with flexible rectification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(9): 2035–2048.

Singh, S. S.; and Karayev, S. 2021. Full page handwriting recognition via image to sequence extraction. In Proceedings of the International Conference on Document Analysis and Recognition, 55–69. Springer.

Song, J.; Meng, C.; and Ermon, S. 2020. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502.

Sun, J.; Xue, F.; Li, J.; Zhu, L.; Zhang, H.; and Zhang, J. 2022. TSINIT: A two-stage inpainting network for incomplete text. IEEE Transactions on Multimedia.

Wan, Z.; Zhang, J.; Chen, D.; and Liao, J. 2021. High-fidelity pluralistic image completion with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4692–4701.

Wang, J.; Pan, G.; Sun, D.; and Zhang, J. 2021. Chinese Character Inpainting with Contextual Semantic Constraints. In Proceedings of the 29th ACM International Conference on Multimedia, 1829–1837.

Wang, T.; Ouyang, H.; and Chen, Q. 2021. Image Inpainting with External-internal Learning and Monochromic Bottleneck. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5120–5129.

Wang, T.; Zhu, Y.; Jin, L.; Luo, C.; Chen, X.; Wu, Y.; Wang, Q.; and Cai, M. 2020a. Decoupled attention network for text recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, 12216–12224.

Wang, T.-C.; Liu, M.-Y.; Zhu, J.-Y.; Tao, A.; Kautz, J.; and Catanzaro, B. 2018. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8798–8807.

Wang, W.; Xie, E.; Liu, X.; Wang, W.; Liang, D.; Shen, C.; and Bai, X. 2020b. Scene text image super-resolution in the wild. In Proceedings of the European Conference on Computer Vision, 650–666. Springer.
Wang, Y.; Chen, Y.-C.; Tao, X.; and Jia, J. 2020c. VCNet: A robust approach to blind image inpainting. In Proceedings of the European Conference on Computer Vision, 752–768. Springer.

Wu, L.; Zhang, C.; Liu, J.; Han, J.; Liu, J.; Ding, E.; and Bai, X. 2019. Editing text in the wild. In Proceedings of the 27th ACM International Conference on Multimedia, 1500–1508.

Xia, B.; Zhang, Y.; Wang, S.; Wang, Y.; Wu, X.; Tian, Y.; Yang, W.; and Van Gool, L. 2023. DiffIR: Efficient diffusion model for image restoration. arXiv preprint arXiv:2303.09472.

Xiang, H.; Zou, Q.; Nawaz, M. A.; Huang, X.; Zhang, F.; and Yu, H. 2023. Deep learning for image inpainting: A survey. Pattern Recognition, 134: 109046.

Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R. R.; and Le, Q. V. 2019. XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems, 32.

Yu, F.; Koltun, V.; and Funkhouser, T. 2017. Dilated residual networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 472–480.

Yu, T.; Feng, R.; Feng, R.; Liu, J.; Jin, X.; Zeng, W.; and Chen, Z. 2023. Inpaint anything: Segment anything meets image inpainting. arXiv preprint arXiv:2304.06790.

Yu, Y.; Du, D.; Zhang, L.; and Luo, T. 2022. Unbiased multi-modality guidance for image inpainting. In Proceedings of the European Conference on Computer Vision, 668–684. Springer.

Zhang, G.; Ji, J.; Zhang, Y.; Yu, M.; Jaakkola, T. S.; and Chang, S. 2023. Towards Coherent Image Inpainting Using Denoising Diffusion Implicit Models. In Proceedings of the International Conference on Machine Learning.

Zhang, S.; He, R.; Sun, Z.; and Tan, T. 2017. DeMeshNet: Blind face inpainting for deep MeshFace verification. IEEE Transactions on Information Forensics and Security, 13(3): 637–647.

Zhao, H.; Gu, Z.; Zheng, B.; and Zheng, H. 2022. TransCNN-HAE: Transformer-CNN hybrid autoencoder for blind image inpainting.
In Proceedings of the 30th ACM International Conference on Multimedia, 6813–6821.

Zhu, S.; Zhao, Z.; Fang, P.; and Xue, H. 2023a. Improving Scene Text Image Super-resolution via Dual Prior Modulation Network. In Proceedings of the AAAI Conference on Artificial Intelligence, 3843–3851.

Zhu, Y.; Li, Z.; Wang, T.; He, M.; and Yao, C. 2023b. Conditional Text Image Generation with Diffusion Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14235–14245.