3D Shape Part Segmentation by Vision-Language Model Distillation

Abstract

This paper proposes a cross-modal distillation framework, PartDistill, which transfers 2D knowledge from vision-language models (VLMs) to facilitate 3D shape part segmentation. PartDistill addresses three major challenges in this task: the lack of 3D segmentation in invisible or undetected regions of the 2D projections, inconsistent 2D predictions by VLMs, and the lack of knowledge accumulation across different 3D shapes. PartDistill consists of a teacher network that uses a VLM to make 2D predictions and a student network that learns from the 2D predictions while extracting geometrical features from multiple 3D shapes to carry out 3D part segmentation. A bi-directional distillation, including forward and backward distillations, is carried out within the framework, where the former distills the 2D predictions to the student network, and the latter improves the quality of the 2D predictions, which subsequently enhances the final 3D segmentation. Moreover, PartDistill can exploit generative models that facilitate effortless 3D shape creation for generating knowledge sources to be distilled. Through extensive experiments, PartDistill outperforms existing methods by substantial margins on the widely used ShapeNetPart and PartNetE datasets, with more than 15% and 12% higher mIoU scores, respectively. The code for this work is available at https://github.com/ardianumam/PartDistill.

1 Introduction

3D shape part segmentation is essential to various 3D vision applications, such as shape editing [23, 45], stylization [29], and augmentation [40]. Despite its significance, acquiring part annotations for 3D data, such as point clouds or mesh shapes, is labor-intensive and time-consuming.

Zero-shot learning [41, 8] generalizes a model to unseen categories without annotations and has been notably uplifted by recent advances in vision-language models (VLMs) [37, 22, 46, 21]. By learning on large-scale image-text data pairs, VLMs show promising generalization abilities on various 2D recognition tasks. Recent research efforts [48, 52, 24, 1] have been made to utilize VLMs for zero-shot 3D part segmentation, where a 3D shape is projected into multi-view 2D images, and a VLM is applied to these images for 2D prediction acquisition. Specifically, PointCLIP [48] and PointCLIPv2 [52] produce 3D point-wise semantic segmentation by averaging their corresponding 2D pixel-wise predictions. Meanwhile, PartSLIP [24] and SATR [1] present a designated weighting mechanism to aggregate multi-view bounding box predictions.

Figure 1: We present a distillation method that carries out zero-shot 3D shape part segmentation with a 2D vision-language model. After projecting an input 3D point cloud into multi-view 2D images, the 2D teacher (2D-T) and the 3D student (3D-S) networks are applied to the 2D images and the 3D point cloud, respectively. Instead of direct transfer, our method carries out bi-directional distillation, including forward and backward distillations, and yields better 3D part segmentation than the existing method.

The key step of zero-shot 3D part segmentation with 2D VLMs, e.g., [48, 52, 24, 1], lies in the transfer from 2D pixel-wise or bounding-box-wise predictions to 3D point segmentation. This step is challenging due to three major issues. First (𝓘𝟏), some 3D regions lack corresponding 2D predictions in multi-view images, which are caused by occlusion or not being covered by any bounding boxes, illustrated with black and gray points, respectively, in Fig. 1. This issue is considered a limitation in the previous work [48, 52, 24, 1]. Second (𝓘𝟐), there exists potential inconsistency among 2D predictions in multi-view images caused by inaccurate VLM predictions. Third (𝓘𝟑), existing work [48, 52, 24, 1] directly transfers 2D predictions to segmentation of a single 3D shape. The 2D predictions yielded based on appearance features are not optimal for 3D geometric shape segmentation, while geometric evidence given across different 3D shapes is not explored.

To alleviate the three issues 𝓘𝟏∼𝓘𝟑, unlike existing methods [48, 52, 24, 1] that directly transfer 2D predictions to 3D segmentation, we propose a cross-modal distillation framework with a teacher-student model. Specifically, a VLM is utilized as a 2D teacher network, accepting multi-view images of a single 3D shape. The VLM is pre-trained on large-scale image-text pairs and can exploit appearance features to make 2D predictions. The student network is developed based on a point cloud backbone. It learns from multiple unlabeled 3D shapes and can extract point-specific geometric features. The proposed distillation method, PartDistill, leverages the strengths of both networks, hence improving zero-shot 3D part segmentation.

The student network learns from not only the 2D teacher network but also 3D shapes. It can extract point-wise features and segment 3D regions uncovered by 2D predictions, hence tackling issue 𝓘𝟏. As a distillation-based method, PartDistill tolerates inconsistent predictions between the teacher and student networks, which alleviates issue 𝓘𝟐 of negative transfer caused by wrong VLM predictions. The student network considers both appearance and geometric features. Thus, it can better predict 3D geometric data and mitigate issue 𝓘𝟑. As shown in Fig. 1, the student network can correctly predict the undetected arm of the chair (see the black arrows) by learning from other chairs.

PartDistill carries out a bi-directional distillation. It first forward distills the 2D knowledge to the student network. We observe that after the student integrates the 2D knowledge, we can jointly refer to both teacher and student knowledge to perform backward distillation, which re-scores the 2D knowledge based on its quality. Knowledge of low quality is suppressed with lower scores, such as from 0.6 to 0.1 for the falsely detected arm box in Fig. 1, and vice versa. Finally, this re-scored knowledge is utilized by the student network to seek better 3D segmentation.

The main contributions of this work are summarized as follows. First, we introduce PartDistill, a cross-modal distillation framework that transfers 2D knowledge from VLMs to facilitate 3D part segmentation. PartDistill addresses three identified issues present in existing methods and generalizes to both VLM with bounding-box predictions (B-VLM) and pixel-wise predictions (P-VLM). Second, we propose a bi-directional distillation, which involves enhancing the quality of 2D knowledge and subsequently improving the 3D predictions. Third, PartDistill can leverage existing generative models [31, 33] to enrich knowledge sources for distillation. Extensive experiments demonstrate that PartDistill surpasses existing methods by substantial margins on widely used benchmark datasets, ShapeNetPart [44] and PartNetE [24], with more than 15% and 12% higher mIoU scores, respectively. PartDistill consistently outperforms competing methods in zero-shot and few-shot scenarios on 3D data in point clouds or mesh shapes.

Figure 2: Overview of the proposed method. (a) The overall pipeline where the knowledge extracted from a vision-language model (VLM) is distilled to carry out 3D shape part segmentation by teaching a 3D student network. Within the pipeline, backward distillation is introduced to re-score the teacher’s knowledge based on its quality and subsequently improve the final 3D part prediction. (b) (c) Knowledge is extracted by back-projection when we adopt (b) a bounding-box VLM (B-VLM) or (c) a pixel-wise VLM (P-VLM), where Γ and ℂ denote 2D-to-3D back-projection and connected component labeling [3], respectively.

2 Related Work

Vision-language models.

Based on learning granularity, vision-language models (VLMs) can be grouped into three categories, including the image-level [37, 15], pixel-level [21, 27, 53], and object-level [22, 46, 25] categories. The second and the third categories make pixel-level and bounding box predictions, respectively, while the first category produces image-level predictions. Recent research efforts on VLMs have been made for cross-level predictions. For example, pixel-level predictions can be derived from an image-level VLM via up-sampling the 2D features into the image dimensions, as shown in PointCLIPv2 [52]. In this work, we propose a cross-modal distillation framework that learns and transfers knowledge from a VLM in the 2D domain to 3D shape part segmentation.

3D part segmentation using vision-language models.

State-of-the-art zero-shot 3D part segmentation [24, 52, 1] is developed by utilizing a VLM and transferring its knowledge in the 2D domain to the 3D space. The pioneering work PointCLIP [48] utilizes CLIP [37]. PointCLIPv2 [52] extends PointCLIP by making the projected multi-view images more realistic and proposing LLM-assisted text prompts [4], hence producing more reliable CLIP outputs for 3D part segmentation.

Both PointCLIP and PointCLIPv2 rely on individual pixel predictions in 2D views to obtain the predictions of the corresponding 3D points, but individual pixel predictions are less reliable. PartSLIP [24] instead extracts superpoints [20] from the input point cloud, so that 3D segmentation is estimated for each superpoint by referring to a set of relevant pixels in 2D views. PartSLIP uses GLIP [22] to output bounding boxes and further proposes a weighting mechanism to aggregate multi-view bounding box predictions into 3D superpoint predictions. SATR [1] shares a similar idea with PartSLIP but handles 3D mesh shapes instead of point clouds.

Existing methods [24, 52, 48, 1] directly transfer VLM predictions from 2D images into the 3D space and thus suffer from three issues: (𝓘𝟏) uncovered 3D points, (𝓘𝟐) negative transfer, and (𝓘𝟑) cross-modal transfer without accumulating knowledge across shapes, as discussed before. We present a distillation-based method to address all three issues and make substantial performance improvements.

2D to 3D distillation.

Seminal work on knowledge distillation [5, 14] aims at transferring knowledge from a large model to a small one. Subsequent research efforts [39, 28, 50, 26, 43] adopt this idea of transferring knowledge from a 2D model for 3D understanding. However, these methods require further fine-tuning with labeled data. OpenScene [34] and CLIP2Scene [7] require no fine-tuning and share a similar concept with our method of distilling VLMs for 3D understanding, with ours designed for part segmentation and theirs for indoor/outdoor scene segmentation. The major difference is that our method can enhance the knowledge sources in the 2D modality via the proposed backward distillation. Moreover, our method is generalizable to both P-VLMs (pixel-wise VLMs) and B-VLMs (bounding-box VLMs), while their methods are only applicable to P-VLMs.

3 Proposed Method

3.1 Overview

Given a set of 3D shapes, this work aims to segment each one into R semantic parts without training with any part annotations. To this end, we propose a cross-modal bi-directional distillation framework, PartDistill, which transfers 2D knowledge from a VLM to facilitate 3D shape part segmentation. As illustrated in Fig. 2, our framework takes triplet data as input, including the point cloud of the shape with N 3D points, multi-view images rendered from the shape in V different poses, and R text prompts, each describing one of the target semantic parts within the 3D shapes.

For the 2D modality, the V multi-view images and the text prompts are fed into a bounding-box VLM (B-VLM) or a pixel-wise VLM (P-VLM). For each view v, a B-VLM produces a set of bounding boxes, Bv={bi}i=1β, while a P-VLM generates pixel-wise predictions Sv. We then perform knowledge extraction (Sec. 3.2) for each Bv or Sv; namely, we transfer the 2D predictions into the 3D space through back-projection for a B-VLM, or connected-component labeling [3] followed by back-projection for a P-VLM, as shown in Fig. 2 (b) and Fig. 2 (c), respectively. Subsequently, a set of D teacher knowledge units, 𝒦={kd}d=1D={(Yd,Md)}d=1D, is obtained by aggregating over all V multi-view images. Each unit d comprises point-wise part probabilities, Yd∈ℝN×R, from the teacher VLM network, accompanied by a mask, Md∈{0,1}N, identifying the points included in this knowledge unit.

For the 3D modality, the point cloud is passed into the 3D student network with a 3D encoder and a distillation head, producing point-wise part predictions, Y~∈ℝN×R. With the proposed bi-directional distillation framework, we first forward distill the teacher’s 2D knowledge by aligning Y~ with 𝒦 via minimizing the proposed loss, ℒdistill, specified in Sec. 3.2. The 3D student network integrates 2D knowledge from the teacher through this optimization. The integrated student knowledge Y~′ and the teacher knowledge 𝒦 are then jointly referred to in order to perform backward distillation from 3D to 2D, detailed in Sec. 3.3, which re-scores each knowledge unit kd based on its quality, as shown in Fig. 2. Finally, the re-scored knowledge 𝒦′ is used to refine the student knowledge, and the final part segmentation predictions Y~f are obtained by assigning each point to the part with the highest probability.

3.2 Forward distillation: 2D to 3D

Our method extracts the teacher’s knowledge in the 2D modality and distills it into the 3D space. In the 2D modality, V multi-view images {Iv∈ℝH×W}v=1V are rendered from the 3D shape, e.g., using the projection method in [52]. These V multi-view images, together with the text prompts T of the R parts, are passed to the VLM to obtain the knowledge in the 2D space. For a B-VLM, a set of β bounding boxes, Bv={bi}i=1β, is obtained from the v-th image, with bi∈ℝ4+R encoding the box coordinates and the probabilities of the R parts. For a P-VLM, a pixel-wise prediction map Sv∈ℝH×W×R is acquired from the v-th image. We apply knowledge extraction to each Bv and each Sv to obtain readily distillable knowledge 𝒦 in the 3D space, as illustrated in Fig. 2 (b) and Fig. 2 (c), respectively.

For a B-VLM, the bounding boxes can directly be treated as the teacher knowledge. For a P-VLM, knowledge extraction starts by applying connected-component labeling [3] to Sv to obtain a set of ρ segmentation components, {si∈ℝH×W×R}i=1ρ, each covering the pixels where the r-th part receives the highest probability. We summarize the process of applying a VLM to a rendered image and the part text prompts as

$$
\mathrm{VLM}(I_v, T) =
\begin{cases}
B_v = \{b_i\}_{i=1}^{\beta}, & \text{for B-VLM},\\[2pt]
\mathbb{C}(S_v) = \{s_i\}_{i=1}^{\rho}, & \text{for P-VLM},
\end{cases}
\tag{1}
$$

where ℂ denotes connected-component labeling.
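
To make the P-VLM branch of Eq. 1 concrete, below is a minimal sketch of the component-splitting step ℂ, assuming Sv is available as an H×W×R NumPy array; the function name, the use of scipy.ndimage for connected-component labeling, and the omission of a background/confidence threshold are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy import ndimage  # stand-in for the connected-component labeling of [3]

def extract_pvlm_components(S_v: np.ndarray):
    """Split a pixel-wise prediction map S_v of shape (H, W, R) into connected components.

    Returns a list of (mask, part_id, probs) triples, one per component,
    mirroring the set {s_i}_{i=1}^{rho} in Eq. 1 (illustrative sketch only).
    """
    H, W, R = S_v.shape
    part_map = S_v.argmax(axis=-1)                    # per-pixel winning part
    components = []
    for r in range(R):
        labeled, num = ndimage.label(part_map == r)   # C(S_v): split part r into blobs
        for c in range(1, num + 1):
            comp_mask = labeled == c                  # boolean (H, W) mask of one component
            comp_probs = S_v[comp_mask].mean(axis=0)  # average part probabilities inside the blob
            components.append((comp_mask, r, comp_probs))
    return components
```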

We then back-project each box bi or each prediction map si to the 3D space, i.e.,

$$
k_i = (Y_i, M_i) =
\begin{cases}
\Gamma(b_i), & \text{for B-VLM},\\[2pt]
\Gamma(s_i), & \text{for P-VLM},
\end{cases}
\tag{2}
$$

where Γ denotes the back-projection operation with the camera parameters [49] used for multi-view image rendering, Yi∈ℝN×R denotes the point-specific part probabilities, and Mi∈{0,1}N is the mask indicating which 3D points are covered by bi or si in the 2D space. The pair (Yi,Mi) yields a knowledge unit, ki, upon which knowledge re-scoring is performed in the backward distillation.
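
As an illustration of the back-projection Γ in Eq. 2, the sketch below lifts one 2D region (a box interior or a segmentation component) to a knowledge unit ki = (Yi, Mi). It assumes the rendering step also provides, for each 3D point, its pixel coordinates and a visibility flag in the current view; these inputs and the helper name are assumptions made for the example.

```python
import numpy as np

def back_project(region_mask, region_probs, pix_uv, visible, num_parts):
    """Gamma in Eq. 2: lift a 2D region to a knowledge unit k_i = (Y_i, M_i).

    region_mask : (H, W) bool mask of a box interior or a segmentation component
    region_probs: (R,) part probabilities the VLM attaches to that region
    pix_uv      : (N, 2) integer (u, v) pixel coordinates of each 3D point in this view
    visible     : (N,) bool, True if the point is not occluded in this view
    """
    N = pix_uv.shape[0]
    covered = region_mask[pix_uv[:, 1], pix_uv[:, 0]] & visible  # points whose pixel falls inside the region
    M_i = covered.copy()                                         # (N,) mask of covered points
    Y_i = np.zeros((N, num_parts), dtype=np.float32)
    Y_i[M_i] = region_probs                                      # broadcast the region's part probabilities
    return Y_i, M_i
```

A B-VLM box bi would be handled by rasterizing its interior into region_mask and taking its R class scores as region_probs, while a P-VLM component si directly supplies both.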

For the 3D modality, a 3D encoder, e.g., Point-M2AE [47], is applied to the point cloud to obtain per-point features, O∈ℝN×E, capturing local and global geometrical information. We then estimate the point-wise part predictions, Y~∈ℝN×R, by feeding the point features O into the distillation head. The cross-modal distillation is performed by teaching the student network to align the part probabilities Y~ from the 3D modality with their 2D counterparts Y via minimizing our designated distillation loss.
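
For reference, the distillation head described here (specified in Sec. 3.5 as a 4-layer MLP with ReLU) can be sketched as follows; the hidden width is an assumed value.

```python
import torch
import torch.nn as nn

class DistillationHead(nn.Module):
    """Maps per-point features O of shape (N, E) to part probabilities Y_tilde of shape (N, R)."""

    def __init__(self, feat_dim: int, num_parts: int, hidden: int = 256):
        super().__init__()
        # 4 linear layers with ReLU activations, following Sec. 3.5; the widths are assumptions.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_parts),
        )

    def forward(self, point_feats: torch.Tensor) -> torch.Tensor:
        # point_feats: (N, E) features from the (frozen) 3D encoder, e.g., Point-M2AE
        return torch.softmax(self.mlp(point_feats), dim=-1)
```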

Distillation loss.

Via Eq. 1 and Eq. 2, we assume that D knowledge units, 𝒦={kd}d=1D={Yd,Md}d=1D, are obtained from the multi-view images. The knowledge 𝒦 exploits 2D appearance features and is incomplete as several 3D points are not covered by any 2D predictions, i.e., issue 𝓘𝟏. To distill this incomplete knowledge, we utilize a masked cross-entropy loss defined as

$$
\mathcal{L}_{distill} = -\sum_{d=1}^{D} \frac{1}{|M^{d}|} \sum_{n=1}^{N} \sum_{r=1}^{R} M_{n}^{d}\, C_{n}^{d}\, Z_{n,r}^{d}\, \log\big(\tilde{Y}_{n,r}\big),
\tag{3}
$$

where $C_n^d = \max_r (Y_n^d(r))$ is the confidence score of $k^d$ on point $n$, $Z_{n,r}^d$ takes value 1 if part $r$ receives the highest probability on $k^d$ and 0 otherwise, and $|M^d|$ is the number of points covered by the mask $M^d$.

By minimizing Eq. 3, we teach the student network to align its prediction Y~ to the distilled prediction Y by considering the points covered by the mask and using the confidence scores as weights. Despite learning from incomplete knowledge, the student network extracts point features that capture geometrical information of the shape, thus enabling it to reasonably segment the points that are not covered by 2D predictions, hence addressing issue 𝓘𝟏. This can be regarded as interpolating the learned part probability in the feature spaces by the distillation head.
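
A compact PyTorch rendering of Eq. 3 is sketched below, assuming the D knowledge units are stacked into dense tensors; this layout, and computing the confidence inside the function, are assumptions made for clarity (after backward distillation, C would instead be the unit-level score of Eq. 4).

```python
import torch
import torch.nn.functional as F

def distill_loss(Y_tilde, Y, M, C=None):
    """Masked, confidence-weighted cross-entropy of Eq. 3 (sketch).

    Y_tilde: (N, R) student part probabilities
    Y      : (D, N, R) teacher part probabilities of the D knowledge units
    M      : (D, N) {0, 1} masks of the points covered by each unit
    C      : optional (D, N) weights; defaults to max_r Y (pre-BD), or pass the
             re-scored C_bd broadcast over points after backward distillation
    """
    M = M.float()
    if C is None:
        C = Y.max(dim=-1).values                          # C_n^d = max_r Y_n^d(r)
    Z = F.one_hot(Y.argmax(dim=-1), Y.shape[-1]).float()  # (D, N, R) indicator Z_{n,r}^d
    log_pred = torch.log(Y_tilde.clamp_min(1e-8))         # (N, R), broadcast over the D units
    per_unit = (M * C * (Z * log_pred).sum(-1)).sum(-1) / M.sum(-1).clamp(min=1.0)
    return -per_unit.sum()                                # sum over the D knowledge units
```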

As a distillation-based method, our method allows partial inconsistency among the extracted knowledge 𝒦={kd}d=1D caused by inaccurate VLM predictions, thereby alleviating issue 𝓘𝟐 of negative transfer. In our method, the teacher network works on 2D appearance features, while the student network extracts 3D geometric features. After distillation via Eq. 3, the student network can exploit both appearance and geometric features from multiple shapes, hence mitigating issue 𝓘𝟑 of cross-modal transfer. It is worth noting that unlike the conventional teacher-student models [14, 11, 13] which solely establish a one-to-one correspondence, we further re-score each knowledge unit kd based on its quality (Sec. 3.3), and improve distillation by suppressing low-quality knowledge units.

3.3 Backward distillation: 3D to 2D

In Eq. 3, we consider all knowledge units {kd}d=1D, weighted by their confidence scores. However, due to potential VLM mispredictions, not all knowledge units are reliable. Hence, we refine the knowledge units by assigning higher scores to those of high quality and suppressing the low-quality ones. We observe that once the student network has thoroughly integrated the knowledge from the teacher, we can jointly refer to both the teacher knowledge and the integrated student knowledge Y~′ to achieve this goal, by re-scoring the confidence score $C^d$ to $C_{bd}^{d}$ as:

$$
C_{bd}^{d} = \frac{\big|\, M^{d}\big( \operatorname{argmax}(Y^{d}) \Leftrightarrow \operatorname{argmax}(\tilde{Y}') \big) \big|}{|M^{d}|},
\tag{4}
$$

where ⇔ denotes the element-wise equality (comparison) operation. In this way, each knowledge unit kd is re-scored: those with high consensus between the teacher knowledge 𝒦 and the integrated student knowledge Y~′ receive higher scores, such as those on the chair legs shown in Fig. 3, while those with low consensus are suppressed by reduced scores, such as those on the chair arm (B-VLM) and back (P-VLM) in Fig. 3. Note that for simplicity, we only display two scores in each shape of Fig. 3 and show the average pixel-wise scores for the P-VLM. To verify that the student network has thoroughly integrated the teacher’s knowledge, i.e., progressed from the initial knowledge Y~ to the integrated knowledge Y~′, we track the moving average of the loss value at every epoch and check whether the value in a subsequent epoch drops below a specified threshold τ. Afterward, the student network continues to learn with the re-scored knowledge 𝒦′ by minimizing the loss in Eq. 3 with C replaced by $C_{bd}$, and produces the final part segmentation predictions Y~f.
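
In code, the re-scoring of Eq. 4 amounts to counting, inside each mask Md, the points where the teacher and the integrated student agree on the part label; a sketch under the same assumed tensor layout as above:

```python
import torch

def backward_rescore(Y, M, Y_tilde_prime):
    """Eq. 4: consensus ratio between each teacher unit and the integrated student.

    Y             : (D, N, R) teacher part probabilities per knowledge unit
    M             : (D, N) {0, 1} masks of covered points
    Y_tilde_prime : (N, R) integrated student predictions
    Returns C_bd  : (D,) re-scored confidence per knowledge unit.
    """
    M = M.float()
    teacher_lbl = Y.argmax(dim=-1)                # (D, N)
    student_lbl = Y_tilde_prime.argmax(dim=-1)    # (N,), broadcast against each unit
    agree = (teacher_lbl == student_lbl).float()  # element-wise equality (the <=> in Eq. 4)
    return (agree * M).sum(-1) / M.sum(-1).clamp(min=1.0)
```

The returned scores replace C in Eq. 3 for the subsequent alignment, suppressing low-consensus units.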

Figure 3: Given the VLM output of view v, Bv or Sv, we display the confidence scores before (C) and after (Cbd) performing backward distillation via Eq. 4, with Y and M obtained via Eq. 2. With backward distillation, inaccurate VLM predictions have lower scores, such as the arm box in B-VLM with the score reduced from 0.7 to 0.1, and vice versa.

3.4 Test-time alignment

In general, our method performs the alignment with a shape collection before the student network is utilized to carry out 3D shape part segmentation. If such a pre-alignment is not preferred, we provide a special case of our method, test-time alignment (TTA), where the alignment is performed for every single shape at test time. To remain practical, TTA needs to complete near-instantaneously. To that end, TTA employs an off-the-shelf 3D encoder, e.g., a pre-trained Point-M2AE [47], freezes its weights, and only updates the learnable parameters in the distillation head, which significantly speeds up TTA.

3.5 Implementation Details

The proposed framework is implemented in PyTorch [32] and is optimized for 25 epochs via the Adam optimizer [19] with a learning rate of 0.001 and a batch size of 16. Unless otherwise specified, the student network employs Point-M2AE [47], pre-trained in a self-supervised way on the ShapeNet55 dataset [6], as the 3D encoder, freezes its weights, and only updates the learnable parameters in the distillation head. A multi-layer perceptron consisting of 4 layers, with ReLU activation [2], is adopted for the distillation head. To fairly compare with the competing methods [48, 52, 24, 1], we follow their respective settings, including the text prompts used and the 2D rendering. These methods render each shape into 10 multi-view images, either from a sparse point cloud [48, 52], a dense point cloud [24], or a mesh shape [1]. Lastly, we follow [18, 38] to specify a small threshold value, τ=0.01, in our backward distillation, and apply class-balance weighting [9] during the alignment, based on the VLM predictions in the zero-shot setting, with additional few-shot labels in the few-shot setting.
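
A minimal sketch of this alignment setup, with the pre-trained 3D encoder frozen and only the distillation head optimized (loading Point-M2AE itself and the data pipeline are outside the sketch):

```python
import torch

def build_alignment(encoder: torch.nn.Module, head: torch.nn.Module):
    """Freeze the pre-trained encoder and optimize only the head, as in Sec. 3.5."""
    for p in encoder.parameters():
        p.requires_grad_(False)   # keep the self-supervised Point-M2AE weights fixed
    encoder.eval()
    # Adam with lr 0.001; batch size 16 and 25 epochs are handled by the training loop.
    return torch.optim.Adam(head.parameters(), lr=1e-3)
```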

Table 1: Zero-shot segmentation on the ShapeNetPart dataset, reported in mIoU (%).*

| VLM | Data type | Method | Airplane | Bag | Cap | Chair | Earphone | Guitar | Knife | Laptop | Mug | Table | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CLIP [37] | point cloud | PointCLIP [48] | 22.0 | 44.8 | 13.4 | 18.7 | 28.3 | 22.7 | 24.8 | 22.9 | 48.6 | 45.4 | 31.0 |
| CLIP [37] | point cloud | PointCLIPv2 [52] | 35.7 | 53.3 | 53.1 | 51.9 | 48.1 | 59.1 | 66.7 | 61.8 | 45.5 | 49.8 | 48.4 |
| CLIP [37] | point cloud | OpenScene [34] | 34.4 | 63.8 | 56.1 | 59.8 | 62.6 | 69.3 | 70.1 | 65.4 | 51.0 | 60.4 | 52.9 |
| CLIP [37] | point cloud | Ours (TTA) | 37.5 | 62.6 | 55.5 | 56.4 | 55.6 | 71.7 | 76.9 | 67.4 | 53.5 | 62.9 | 53.8 |
| CLIP [37] | point cloud | Ours (Pre) | 40.6 | 75.6 | 67.2 | 65.0 | 66.3 | 85.8 | 79.8 | 92.6 | 83.1 | 68.7 | 63.9 |
| GLIP [22] | point cloud | Ours (TTA) | 57.3 | 62.7 | 56.2 | 74.2 | 45.8 | 60.6 | 78.5 | 85.7 | 82.5 | 62.9 | 54.7 |
| GLIP [22] | point cloud | Ours (Pre) | 69.3 | 70.1 | 67.9 | 86.5 | 51.2 | 76.8 | 85.7 | 91.9 | 85.6 | 79.6 | 64.1 |
| GLIP [22] | mesh | SATR [1] | 32.2 | 32.1 | 21.8 | 25.2 | 19.4 | 37.7 | 40.1 | 50.4 | 76.4 | 22.4 | 32.3 |
| GLIP [22] | mesh | Ours (TTA) | 53.2 | 61.8 | 44.9 | 66.4 | 43.0 | 50.7 | 66.3 | 68.3 | 83.9 | 58.8 | 49.5 |
| GLIP [22] | mesh | Ours (Pre) | 64.8 | 64.4 | 51.0 | 67.4 | 48.3 | 64.8 | 70.0 | 83.1 | 86.5 | 79.3 | 56.3 |

*Results for other categories, including those of Table 2 and Table 3, can be seen in the supplementary material.

Table 2: Zero-shot segmentation on the PartNetE dataset, reported in mIoU (%).

| VLM | Data type | Method | Bottle | Cart | Chair | Display | Kettle | Knife | Lamp | Oven | Suitcase | Table | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GLIP [22] | point cloud | PartSLIP [24] | 76.3 | 87.7 | 60.7 | 43.8 | 20.8 | 46.8 | 37.1 | 33.0 | 40.2 | 47.7 | 27.3 |
| GLIP [22] | point cloud | Ours (TTA) | 77.4 | 88.5 | 74.1 | 50.5 | 24.2 | 59.2 | 58.8 | 34.2 | 43.2 | 50.2 | 39.9 |

4 Experiments

4.1 Dataset and evaluation metric

We evaluate the effectiveness of our method on two main benchmark datasets, ShapeNetPart [44] and PartNetE [24]. While the ShapeNetPart dataset contains 16 categories with a total of 31,963 shapes, PartNetE contains 2,266 shapes covering 45 categories. The mean intersection over union (mIoU) [30] is adopted to evaluate the segmentation results on the test-set data, measured against the ground-truth labels.

4.2 Zero-shot segmentation

To compare with the competing methods [48, 52, 1, 24], we adopt each of their settings and report their mIoU performances from their respective papers. Specifically, for P-VLM, we follow PointCLIP [48] and PointCLIPv2 [52] to utilize CLIP [37] with ViT-B/32 [10] backbone and use their pipeline to obtain the pixel-wise predictions from CLIP. For B-VLM, a GLIP-Large model [22] is employed in our method to compare with PartSLIP and SATR which also use the same model. While most competing methods report their performances on the ShapeNetPart dataset, PartSLIP evaluates its method on the PartNetE dataset. In addition, we compare with OpenScene [34] by extending it for 3D part segmentation and use the same Point-M2AE [47] backbone and VLM CLIP for a fair comparison.

Accordingly, we carry out the comparison separately to ensure fairness, based on the employed VLM model and the shape data type, i.e., point cloud or mesh data, as shown in Tables 1 and 2. In Table 1, we provide two versions of our method, including test-time alignment (TTA) and pre-alignment (Pre) with a collection of shapes from the train-set data. Note that in the Pre version, our method does not use any labels (only unlabeled shape data are utilized).

First, we compare our method to PointCLIP and PointCLIPv2 (both utilize CLIP) on zero-shot segmentation for the ShapeNetPart dataset, as shown in the first part of Table 1. It is evident that our method, in both the TTA and pre-alignment versions, achieves substantial improvements in all categories. For the overall mIoU, calculated by averaging the mIoUs of all categories, our method attains 5.4% and 15.5% higher mIoU for the TTA and pre-alignment versions, respectively, compared to the best mIoU from the other methods. Such results reveal that our method, which simultaneously exploits appearance and geometric features, can better aggregate the 2D predictions for 3D part segmentation than directly averaging the corresponding 2D predictions as in the competing methods, where geometric evidence is not explored. We further compare with OpenScene [34] under the same setting as ours (Pre), and our method substantially outperforms it. One major reason is that our method can better handle the inconsistency of VLM predictions (issue 𝓘𝟐) through backward distillation.

Next, as shown in the last three rows of Table 1, we compare our method to SATR [1], which works on mesh shapes. To obtain the mesh face predictions, we propagate the point predictions via a nearest-neighbor approach as in [17], where each face is voted on by its five nearest points. Our method achieves 17.2% and 24% higher overall mIoU than SATR for the TTA and pre-alignment versions, respectively. Then, we compare our method with PartSLIP [24] in Table 2, wherein only results from TTA are provided since the PartNetE dataset does not provide train-set data. One can see that our method consistently obtains better segmentations, with 12.6% higher overall mIoU than PartSLIP.

In PartSLIP and SATR, as GLIP is utilized, the uncovered 3D regions (issue 𝓘𝟏) could be intensified by possible undetected areas, and the negative transfer (issue 𝓘𝟐) may also be escalated due to semantic leaking, where the box predictions cover pixels from other semantics. On the other hand, our method can better alleviate these issues, thereby achieving substantially higher mIoU scores. In our method, the pre-alignment version achieves better segmentation results than TTA. This is expected since, in the pre-alignment version, the student network can distill the knowledge from a collection of shapes instead of an individual shape.

Figure 4: Visualization of the zero-shot segmentation results, drawn in different colors, on the ShapeNetPart dataset. We render PartSLIP results on the ShapeNetPart data to have the same visualization of shape inputs. While occluded and undetected regions (issue 𝓘𝟏) are shown with black and gray colors, respectively, the blue and red arrows highlight several cases of issues 𝓘𝟐 and 𝓘𝟑.

Besides the foregoing quantitative comparisons, a qualitative comparison of the segmentation results is presented in Fig. 4. It is readily observed that the competing methods suffer from the lack of 3D segmentation in the uncovered regions (issue 𝓘𝟏) caused by either occlusion or not being covered by any bounding box, drawn with black and gray colors, respectively. Moreover, these methods may also encounter negative transfer caused by inaccurate VLM outputs (issue 𝓘𝟐), such as the cases pointed to by blue arrows, with notably degraded outcomes in SATR due to semantic leaking. Nonetheless, our method performs cross-modal distillation and alleviates these two issues, as can be seen in Fig. 4. In addition, because the competing methods directly transfer 2D predictions to the 3D space and rely on each shape independently, erroneous 2D predictions simply remain as incorrect 3D segmentation (issue 𝓘𝟑), such as the undetected chair arms and guitar heads pointed to by red arrows. Our method also addresses this issue by exploiting geometrical features across multiple shapes.

Table 3: Few-shot segmentation on the PartNetE dataset, reported in mIoU (%).

| Type | Method | Bottle | Cart | Chair | Display | Kettle | Knife | Lamp | Oven | Suitcase | Table | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Non-VLM-based | PointNet++ [35] | 27.0 | 11.6 | 42.2 | 30.2 | 28.6 | 22.2 | 10.5 | 19.4 | 3.3 | 7.3 | 20.4 |
| Non-VLM-based | PointNext [36] | 67.6 | 47.7 | 65.1 | 53.7 | 60.6 | 59.7 | 55.4 | 36.8 | 14.5 | 22.1 | 40.6 |
| Non-VLM-based | ACD [12] | 22.4 | 31.5 | 39.0 | 29.2 | 40.2 | 39.6 | 13.7 | 8.9 | 13.2 | 13.5 | 23.2 |
| Non-VLM-based | Prototype [51] | 60.1 | 36.8 | 70.8 | 67.3 | 62.7 | 50.4 | 38.2 | 36.5 | 35.5 | 25.7 | 44.3 |
| Non-VLM-based | Point-M2AE [47] | 72.4 | 74.5 | 83.4 | 74.3 | 64.3 | 68.0 | 57.6 | 53.3 | 57.5 | 33.6 | 56.4 |
| VLM-based (GLIP [22]) | PartSLIP [24] | 83.4 | 88.1 | 85.3 | 84.8 | 77.0 | 65.2 | 60.0 | 73.5 | 70.4 | 42.4 | 59.4 |
| VLM-based (GLIP [22]) | Ours | 84.6 | 90.1 | 88.4 | 87.4 | 78.6 | 71.4 | 69.2 | 72.8 | 73.4 | 63.3 | 65.9 |

4.3 Few-shot segmentation

We further demonstrate the effectiveness of our method in a few-shot scenario by following the setting used in PartSLIP [24]. Specifically, we employ the fine-tuned GLIP model [22] provided by PartSLIP, which is fine-tuned with 8-shot labeled shapes of the PartNetE dataset [24] for each category. In addition to the alignment via Eq. 3, we ask the student network to learn parameters that minimize both Eq. 3 and a standard cross-entropy segmentation loss on the 8 labeled shapes.

As shown in Table 3, the methods dedicated to few-shot 3D segmentation, ACD [12] and Prototype [51], are adapted to the PointNet++ [35] and PointNext [36] backbones, respectively, and can improve the performance (on average) of these backbones. PartSLIP, on the other hand, leverages multi-view GLIP predictions for 3D segmentation and further improves the mIoU, but there are still substantial performance gaps compared to our method, which distills the GLIP predictions instead. We also present the results of fine-tuning Point-M2AE with the few-shot labels, which shows lower performance than ours, highlighting the significant contribution of our distillation framework. For more qualitative results, see the supplementary material.

4.4 Leveraging generated data

Since only unlabeled 3D shape data are required for our method to perform cross-modal distillation, existing generative models [31, 33] can facilitate an effortless generation of 3D shapes, and the generated data can be smoothly incorporated into our method. Specifically, we first adopt DiT-3D [31] which is pre-trained on the ShapeNet55 dataset [6] to generate point clouds of shapes, 500 shapes for each category, and further employ SAP [33] to transform the generated point clouds into mesh shapes. These generated mesh shapes can then be utilized in our method for distillation. Table 4 shows the results evaluated on the test-set data of ShapeNetPart [44] and COSEG [42] datasets for several shape categories, using GLIP as the VLM.

One can see that by distilling from the generated data alone, our method already achieves competitive results on the ShapeNetPart dataset compared to distilling from the train-set data. Since DiT-3D is pre-trained on the ShapeNet55 dataset, which contains the ShapeNetPart data, we also evaluate the performance on the COSEG dataset to show that such results transfer well to shapes from another dataset. Finally, Table 4 (the last row) reveals that using generated data as a supplementary knowledge source can further increase the mIoU performance. Such results suggest that if a collection of shapes is available, generated data can be employed as a supplementary knowledge source, which can improve the performance. On the other hand, if a collection of shapes does not exist, generative models can be employed for shape creation, and the created shapes can subsequently be used in our method as the knowledge source.

4.5 Ablation studies

Proposed components.

We perform ablation studies on the proposed components, and the mIoU scores in 2D¹ and 3D spaces on three categories of the ShapeNetPart dataset are shown in rows (1) to (9) of Table 5. In (1), only GLIP box predictions are utilized to get 3D segmentations, i.e., part labels are assigned by voting from all visible points within the multi-view box predictions. These numbers serve as baselines and are subject to issues 𝓘𝟏∼𝓘𝟑. In (2) and (3), 3D segmentations are achieved via forward distillation from the GLIP predictions to the student network using Eq. 3, for the test-time alignment (TTA) and pre-alignment (Pre) versions, resulting in significant improvements over the baselines, with more than 10% and 14% higher mIoUs, respectively. Such results demonstrate that the proposed cross-modal distillation can better utilize the 2D multi-view predictions for 3D part segmentation, alleviating 𝓘𝟏∼𝓘𝟑.

¹ Calculated between the VLM predictions and their corresponding 2D ground truths projected from 3D, and weighted by the confidence scores. See the supplementary material for the details.

Table 4: Segmentation mIoU (%) by leveraging generated data.

| Distilled data | Airplane (ShapeNetPart [44]) | Chair (ShapeNetPart [44]) | Guitar (ShapeNetPart [44]) | Chair (COSEG [42]) | Guitar (COSEG [42]) |
|---|---|---|---|---|---|
| Train-set (baseline) | 69.3 | 86.2 | 76.8 | 96.4 | 68.0 |
| Gen. data | 69.0 | 85.3 | 75.6 | 96.1 | 67.5 |
| Gen. data & train-set | 70.8 | 88.4 | 78.3 | 97.4 | 70.2 |

Table 5: Ablation study on the proposed method. Pre denotes the pre-alignment version (TTA otherwise), BD denotes backward distillation, and ✓ marks the enabled components.

| No. | VLM | Pre | BD | Student network | Airplane 2D | Airplane 3D | Chair 2D | Chair 3D | Knife 2D | Knife 3D |
|---|---|---|---|---|---|---|---|---|---|---|
| (1) | GLIP [22] | | | | 42.8 | 40.2 | 60.2 | 60.1 | 53.6 | 57.2 |
| (2) | GLIP [22] | | | ✓ | 42.8 | 56.2 | 60.2 | 73.5 | 53.6 | 77.6 |
| (3) | GLIP [22] | ✓ | | ✓ | 42.8 | 64.3 | 60.2 | 84.2 | 53.6 | 84.5 |
| (4) | GLIP [22] | | ✓ | ✓ | 44.3 | 57.3 | 61.7 | 74.2 | 54.8 | 78.5 |
| (5) | GLIP [22] | ✓ | ✓ | ✓ | 48.2 | 69.3 | 63.2 | 86.5 | 55.0 | 85.7 |
| (6) | GLIP [22] | ✓ | ✓ | ✓ (exclude 𝓘𝟏) | 48.2 | 62.5 | 63.2 | 80.4 | 55.0 | 81.2 |
| (7) | GLIP [22] | ✓ | ✓ | ✓ (w/o pretrain) | 48.2 | 69.1 | 63.2 | 86.7 | 55.0 | 85.3 |
| (8) | CLIP [37] | ✓ | | ✓ | 34.6 | 38.4 | 50.4 | 63.6 | 66.8 | 77.4 |
| (9) | CLIP [37] | ✓ | ✓ | ✓ | 37.8 | 40.6 | 54.2 | 65.0 | 68.4 | 78.9 |

We further add backward distillation (BD) in (4) and (5), which substantially improves the knowledge source in 2D, e.g., from 42.8% to 48.2% for the airplane category in the pre-alignment version, and subsequently enhances the 3D segmentation. We observe a higher impact (improvement) on the pre-alignment version than on the TTA version, i.e., (5) versus (4), as the student network of the former can better integrate the knowledge from a collection of shapes. A similar trend of improvement can be observed for a similar ablation performed with CLIP [37] used as the VLM, in (8) and (9).

In (6), we exclude our method’s predictions for the uncovered points to simulate issue 𝓘𝟏, and the reduced mIoUs compared to (5), e.g., from 86.5% to 80.4% for the chair category, reveal that our method can effectively alleviate issue 𝓘𝟏. Finally, instead of using the pre-trained weights of Point-M2AE [47] and freezing them in the 3D encoder as in (5), we initialize these weights (by the default PyTorch [32] initialization) and set them to be learnable as in (7). Both settings produce comparable results (within 0.4%). The main purpose of using the pre-trained weights and freezing them is faster convergence, which is especially important for test-time alignment. Please refer to the supplementary material for the comparison of convergence curves.

Number of views.

We render V=10 multi-view images for each shape input in our main experiment, and Fig. 5 (left) shows the mIoU scores with different values of V. A substantial drop is observed when utilizing V<6, and small increases are obtained when a larger V is used.

Various shape types for 2D multi-view rendering.

We render 10 multi-view images from various shape data types, i.e., (i) gray mesh, (ii) colored mesh, (iii) dense colored point cloud (∼300k points) as used in PartSLIP [24], and (iv) sparse gray point cloud (2,048 points), using PyTorch3D [16] to render (i)-(iii) and the rendering method in [52] to render (iv). Fig. 5 (right) summarizes the results on the ShapeNetPart dataset, with GLIP used as the VLM. Note that the first three shape types produce comparable mIoUs, with slightly higher scores when a colored mesh or dense colored point cloud is utilized. When the sparse gray point cloud data type is used, a mild mIoU decrease is observed. Please refer to the supplementary material for more results on (i)-(iv).

Figure 5: Ablation study on the number of views and various shape types for 2D multi-view rendering on the ShapeNetPart dataset.

Limitation.

The main limitation of our method is that the segmentation results are impacted by the quality of the VLM predictions, as VLMs are generally pre-trained to recognize object- or sample-level categories rather than part-level categories. For instance, GLIP can satisfactorily locate part semantics for the chair category but with lower quality for the earphone category, while CLIP can favorably locate part semantics for the earphone category but with less favorable results for the airplane category. Hence, exploiting multiple VLMs can be a potential future work. Nonetheless, the proposed method, which currently employs a single VLM, can already boost the segmentation results significantly compared to the existing methods.

5 Conclusion

We present a cross-modal distillation framework that transfers 2D knowledge from a vision-language model (VLM) to facilitate 3D shape part segmentation, and it generalizes well to VLMs with either bounding-box or pixel-wise predictions. In the proposed method, backward distillation is introduced to enhance the quality of the 2D predictions and subsequently improve the 3D segmentation. The proposed approach can also leverage existing generative models for shape creation, and the created shapes can be smoothly incorporated into the method as knowledge sources for distillation. Through extensive experiments, the proposed method is compared with existing methods on widely used benchmark datasets, including ShapeNetPart and PartNetE, and consistently outperforms them by substantial margins in both zero-shot and few-shot scenarios on 3D data in point clouds or mesh shapes.

Acknowledgment.

This work was supported in part by the National Science and Technology Council (NSTC) under grants 112-2221-E-A49-090-MY3, 111-2628-E-A49-025-MY3, 112-2634-F-006-002 and 112-2634-F-A49-007. This work was funded in part by MediaTek and NVIDIA.

Supplementary Material

Table 6: Zero-shot segmentation on all 16 categories of the ShapeNetPart dataset [44], reported in mIoU (%). In this table, TTA and Pre denote the test-time alignment and pre-alignment versions of our method, while VLM stands for vision-language model (see the main paper for details). The first four result columns use CLIP [37] with point cloud (pc) input, the next two use GLIP [22] with pc input, and the last three use GLIP [22] with mesh input.

| Category | PointCLIP [48] | PointCLIPv2 [52] | Ours TTA (CLIP) | Ours Pre (CLIP) | Ours TTA (GLIP, pc) | Ours Pre (GLIP, pc) | SATR [1] | Ours TTA (GLIP, mesh) | Ours Pre (GLIP, mesh) |
|---|---|---|---|---|---|---|---|---|---|
| Airplane | 22.0 | 35.7 | 37.5 | 40.6 | 57.3 | 69.3 | 32.2 | 53.2 | 64.8 |
| Bag | 44.8 | 53.3 | 62.6 | 75.6 | 62.7 | 70.1 | 32.1 | 61.8 | 64.4 |
| Cap | 13.4 | 53.1 | 55.5 | 67.2 | 56.2 | 67.9 | 21.8 | 44.9 | 51.0 |
| Car | 30.4 | 34.5 | 36.4 | 41.2 | 32.4 | 39.2 | 22.3 | 30.2 | 32.3 |
| Chair | 18.7 | 51.9 | 56.4 | 65.0 | 74.2 | 86.5 | 25.2 | 66.4 | 67.4 |
| Earphone | 28.3 | 48.1 | 55.6 | 66.3 | 45.8 | 51.2 | 19.4 | 43.0 | 48.3 |
| Guitar | 22.7 | 59.1 | 71.7 | 85.8 | 60.6 | 76.8 | 37.7 | 50.7 | 64.8 |
| Knife | 24.8 | 66.7 | 76.9 | 79.8 | 78.5 | 85.7 | 40.1 | 66.3 | 70.0 |
| Lamp | 39.6 | 44.7 | 45.8 | 63.1 | 34.5 | 43.5 | 21.6 | 30.5 | 35.2 |
| Laptop | 22.9 | 61.8 | 67.4 | 92.6 | 85.7 | 91.9 | 50.4 | 68.3 | 83.1 |
| Motorbike | 26.3 | 31.4 | 33.4 | 38.2 | 30.6 | 37.8 | 25.4 | 28.8 | 32.5 |
| Mug | 48.6 | 45.5 | 53.5 | 83.1 | 82.5 | 85.6 | 76.4 | 83.9 | 86.5 |
| Pistol | 42.6 | 46.1 | 48.2 | 55.8 | 39.6 | 48.5 | 34.1 | 37.4 | 40.9 |
| Rocket | 22.7 | 46.7 | 49.3 | 49.5 | 36.8 | 48.9 | 33.2 | 41.1 | 45.3 |
| Skateboard | 42.7 | 45.8 | 47.7 | 49.2 | 34.2 | 43.5 | 22.3 | 26.2 | 34.5 |
| Table | 45.4 | 49.8 | 62.9 | 68.7 | 62.9 | 79.6 | 22.4 | 58.8 | 79.3 |
| Overall | 31.0 | 48.4 | 53.8 | 63.9 | 54.7 | 64.1 | 32.3 | 49.5 | 56.3 |

Table 7: Segmentation on all 45 categories of the PartNetE dataset [24], reported in mIoU (%). In this table, TTA denotes our method with test-time alignment (see the main paper for details).

| Category | PartSLIP [24] (zero-shot) | Ours (TTA) (zero-shot) | PartSLIP [24] (few-shot) | Ours (few-shot) |
|---|---|---|---|---|
| Bottle | 76.3 | 77.4 | 83.4 | 84.6 |
| Box | 57.5 | 69.7 | 84.5 | 87.9 |
| Bucket | 2.0 | 16.8 | 36.5 | 50.7 |
| Camera | 21.4 | 29.4 | 58.3 | 60.1 |
| Cart | 87.7 | 88.5 | 88.1 | 90.1 |
| Chair | 60.7 | 74.1 | 85.3 | 88.4 |
| Clock | 26.7 | 23.6 | 37.6 | 37.2 |
| Coffee machine | 25.4 | 26.8 | 37.8 | 40.2 |
| Dishwasher | 10.3 | 18.6 | 62.5 | 60.2 |
| Dispenser | 16.5 | 11.4 | 73.8 | 74.7 |
| Display | 43.8 | 50.5 | 84.8 | 87.4 |
| Door | 2.7 | 41.1 | 40.8 | 55.5 |
| Eyeglasses | 1.8 | 59.7 | 88.3 | 91.1 |
| Faucet | 6.8 | 33.3 | 71.4 | 73.5 |
| Folding chair | 91.7 | 89.7 | 86.3 | 90.7 |
| Globe | 34.8 | 90.0 | 95.7 | 97.4 |
| Kettle | 20.8 | 24.2 | 77.0 | 78.6 |
| Keyboard | 37.3 | 38.5 | 53.6 | 70.8 |
| Kitchenpot | 4.7 | 36.8 | 69.6 | 69.7 |
| Knife | 46.8 | 59.2 | 65.2 | 71.4 |
| Lamp | 37.1 | 58.8 | 66.1 | 69.2 |
| Laptop | 27.0 | 37.1 | 29.7 | 40.0 |
| Lighter | 35.4 | 37.3 | 64.7 | 64.9 |
| Microwave | 16.6 | 23.2 | 42.7 | 43.8 |
| Mouse | 27.0 | 18.6 | 44.0 | 46.9 |
| Oven | 33.0 | 34.2 | 73.5 | 72.8 |
| Pen | 14.6 | 15.7 | 71.5 | 74.4 |
| Phone | 36.1 | 37.3 | 48.4 | 50.8 |
| Pliers | 5.4 | 51.9 | 33.2 | 90.4 |
| Printer | 0.8 | 3.3 | 4.3 | 6.3 |
| Refrigerator | 20.2 | 25.2 | 55.8 | 58.1 |
| Remote | 11.5 | 13.2 | 38.3 | 40.7 |
| Safe | 22.4 | 18.2 | 32.2 | 58.6 |
| Scissors | 21.8 | 64.4 | 60.3 | 68.8 |
| Stapler | 20.9 | 65.1 | 84.8 | 86.3 |
| Storage furniture | 29.5 | 30.6 | 53.6 | 56.5 |
| Suitcase | 40.2 | 43.2 | 70.4 | 73.4 |
| Switch | 9.5 | 30.3 | 59.4 | 60.7 |
| Table | 47.7 | 50.2 | 42.5 | 63.3 |
| Toaster | 13.8 | 11.4 | 60.0 | 58.7 |
| Toilet | 20.6 | 22.5 | 53.8 | 55.0 |
| Trash can | 30.1 | 49.3 | 22.3 | 70.0 |
| Usb | 10.9 | 39.1 | 54.4 | 64.3 |
| Washing machine | 12.5 | 12.9 | 53.5 | 55.1 |
| Window | 5.2 | 45.3 | 75.4 | 78.1 |
| Overall | 27.3 | 39.9 | 59.4 | 65.9 |

6 3D segmentation scores for full categories

We provide 3D segmentation scores, reported in mIoU, for the full categories of the ShapeNetPart [44] and PartNetE [24] datasets in Tables 6 and 7, respectively. Table 6 is associated with Table 1 in the main paper, while Table 7 is associated with Tables 2 and 3. In Table 6, the 16 categories of the ShapeNetPart dataset are reported, while the 45 categories of the PartNetE dataset are presented in Table 7. From the tables, it is readily observed that the proposed method, PartDistill, attains substantial improvements compared to the competing methods [48, 52, 1, 24] in most categories.

7 Evaluating 2D predictions

In the ablation studies of our method’s components presented in Table 5, we provide mIoU scores in 2D space, mIoU2D, to evaluate the quality of the 2D predictions, measured against the 2D ground truths, before and after performing backward distillation, which re-scores the confidence score of each knowledge unit. Here, the 2D ground truths are obtained by projecting the 3D mesh (face) part segmentation labels to 2D space using the camera parameters utilized for the 2D multi-view rendering.

We first explain how to calculate mIoU2D when a vision-language model (VLM) that outputs pixel-wise predictions (P-VLM) is used in our method, and later explain the case where a VLM that outputs bounding-box predictions (B-VLM) is employed. In each view, let {si}i=1ρ be the prediction maps (see Eq. 1 in the main paper) of the P-VLM, with Ci denoting the confidence score of si, and let 𝒢 be the corresponding 2D ground truth. We first calculate the IoU2D for each semantic part r as

$$
\mathrm{IoU_{2D}}(r) = \frac{\mathcal{I}(r)}{\mathcal{I}(r) + \lambda(r) + \gamma(r)},
\tag{5}
$$

where

$$
\mathcal{I}(r) = \sum_{i \in \phi(r)} \mathrm{Avg}(C_i)\,\big(s_i \cap \mathcal{G}_r\big),
\tag{6}
$$
$$
\lambda(r) = \mathrm{Avg}\big(C_{\phi(r)}\big)\,\Big(\big(\bigcup_{i \in \phi(r)} s_i\big) \notin \mathcal{G}_r\Big),
\tag{7}
$$

and

$$
\gamma(r) = \mathcal{G}_r \notin \bigcup_{i \in \phi(r)} s_i,
\tag{8}
$$

with ϕ⁢(r) denoting a function returning indices of {si}i=1ρ that predict part r, “Avg” denoting an averaging operation and 𝒢r indicating the ground truth of part r.

While Eq. 6 represents the intersection of pixels between the 2D predictions and the corresponding ground truths, weighted by their confidence scores, Eq. 7 gives the union of 2D prediction pixels that do not intersect with the corresponding ground truths, weighted by the average of all confidence scores associated with part r. As for Eq. 8, it gives the ground-truth pixels that do not intersect with the union of the corresponding 2D predictions. We then calculate the IoU2D score for each semantic part r in every view v and compute their mean as mIoU2D.

Note that we involve the confidence scores as weights to calculate mIoU2D. This allows us to compare the quality of the 2D predictions before and after applying backward distillation, using the confidence scores before and after this process. To compute the mIoU2D scores when a B-VLM is used in our method, we can use Eq. 5 with si in Eq. 6 ∼ Eq. 8 replaced by ℱ(bi), where ℱ denotes an operation excluding the background pixels covered in bi.
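
For clarity, the confidence-weighted IoU2D of a single part r in a single view (Eqs. 5-8) can be sketched as below, treating each Ci as a scalar (e.g., averaged) confidence of component si; the function signature is an illustrative assumption.

```python
import numpy as np

def iou_2d_part(components, confidences, gt_mask):
    """Confidence-weighted IoU_2D of one part r in one view (Eqs. 5-8, sketch).

    components : list of (H, W) bool masks s_i predicting part r (i.e., i in phi(r))
    confidences: list of scalar confidences C_i, one per component
    gt_mask    : (H, W) bool ground-truth mask G_r of part r
    """
    if not components:
        return 0.0
    union = np.zeros_like(gt_mask, dtype=bool)
    inter = 0.0
    for s_i, c_i in zip(components, confidences):
        inter += c_i * np.logical_and(s_i, gt_mask).sum()        # Eq. 6, weighted per component
        union |= s_i
    avg_c = float(np.mean(confidences))
    false_pos = avg_c * np.logical_and(union, ~gt_mask).sum()    # Eq. 7
    false_neg = float(np.logical_and(gt_mask, ~union).sum())     # Eq. 8
    return float(inter / (inter + false_pos + false_neg + 1e-8)) # Eq. 5
```

mIoU2D is then obtained by averaging these scores over all parts and all views.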

8 Additional visualizations

8.1 Visualization of few-shot segmentation

Figure 6: Visualization of few-shot segmentation results derived using our method on the PartNetE dataset [24]. Each semantic part is drawn in a different color.

In Figure 6, we present several visualizations of few-shot segmentation obtained via our method, associated with Table 3 in the main paper. Following the prior work [24], 8-shot labeled shapes are utilized to carry out the few-shot segmentation. From the figure, it is evident that our method achieves satisfactory segmentation results.

8.2 Convergence curves

In the ablation studies presented in Table 5, we compare two approaches used in the 3D encoder of our student network. First, we employ a pre-trained Point-M2AE [47] backbone, freeze its weights, and only update the learnable parameters in the student network’s distillation head. Second, we utilize a Point-M2AE backbone with its weights initialized by the default PyTorch [32] initialization and set them to be learnable, together with the parameters in the distillation head. From the table, we observe comparable results between both settings (see rows (5) and (7) for the first and second approaches, respectively).

We then visualize the convergence curves in both settings, as depicted in Figure 7. From the figure, it can be seen that the loss in the first approach converges significantly faster than in the second approach. As a result, the first approach also starts to perform backward distillation in a substantially earlier epoch than the second one.

Figure 7: Convergence curves of our method’s losses during the optimization epochs. While the first approach employs a pre-trained Point-M2AE [47] model and freezes its weights, the second approach initializes the Point-M2AE weights from scratch and sets them to be learnable.

8.3 2D rendering from various shape types

We present several 2D rendering images from various shape types, including (i) gray mesh, (ii) colored mesh, (iii) dense colored point cloud, and (iv) sparse gray point cloud, which can be seen in Figure 8. While PartSLIP [24] renders the multi-view images using type (iii), SATR [1] uses type (i). As for PointCLIP [48] and PointCLIPv2 [52], they use type (iv) to render their multi-view images.

Figure 8: Examples of 2D multi-view renderings from the four shape types (i)-(iv).
