Last row of a long table overlaps with the two-column text

I have a two-column document containing a table that I want to span the full page width, and since the table is large it runs across several pages. I have more or less achieved this, but the last row of the table overlaps with the text that follows it:

[screenshot: the last table row overlapping the text that follows]

Here is the LaTeX code:

\documentclass[journal]{IEEEtran}
\usepackage{booktabs}
\usepackage{longtable}
\begin{document}
\centering
\begin{longtable}{ |p{2cm}|p{1cm}|p{3cm}|p{2cm}|p{8cm} |}
\toprule
{Title} & {Year} & {Method/Technique} & {Dataset} & {Challenges/Proposed Solution}\\
\midrule

   VITON & 2018 & coarse-to-fine strategy &  Zalando & According to the authors, the non-rigid nature of clothing is a significant challenge when it comes to satisfying virtual try-on requirements, particularly when 3D information is not available. This challenge is due to the frequent deformations and occlusions that occur. 


\\
     \hline


CP-VTON & 2018 & Geometric Matching Module, Composition mask & Zalando & In clothing try-on, the shape is essential, but details such as texture, logo, and embroidery are also very important in correctly transforming the target clothes. Previous works failed to handle large spatial misalignment between the input image and target clothes. Prior work explicitly tackled spatial deformation using shape context matching, but failed to preserve clothing details due to its coarse-to-fine strategy. To address this, the authors introduce a fully learnable network comprising two modules: one that uses a thin-plate spline transformation for cloth fitting via the Geometric Matching Module (GMM), and a try-on module that uses a composition mask for further smoothness.



\\
    \hline
   M2E-Try On Net & 2019 & Dense pose based human Pose Aligned Network, geometric transformation for Texture Refinement & Deep Fashion, MVC Dataset & The authors have highlighted a significant limitation of prior research, namely the reliance on clean images of clothing. To address this challenge, the authors put forth a novel network architecture that extracts information from a model image and applies it to the target person's image, obviating the need for a clean image. To align the model image with the target person's image, a pose alignment network is employed. Next, a Texture Refinement Network is introduced to enhance the features of the aligned model image. Once the apparel image has been refined, the Fitting Network is employed to fit it to the target image. In addition, the authors have devised an Unpaired-Paired Joint Training strategy with the pose conditional GAN, which alleviates the issue of requiring expensive paired training data.\\

    \hline
    MG-VTON & 2019 & Conditional parsing, Warp GAN, Refinement Rendering, GMM & MPV, DeepFashion & This paper presents a multi-pose guided virtual try-on system that addresses several challenges, such as self-occlusions and heavy misalignment among different poses. The proposed Multi-pose Guided Virtual Try-On Network (MG-VTON) generates a new person image by fitting desired clothes into the person and manipulating the pose. MG-VTON comprises a conditional human parsing network, a deep Warping Generative Adversarial Network (Warp-GAN), and a refinement render network.\\

       \hline
VTNFP & 2019 & hybrid clothing-agnostic person representation, GMM, conditional GAN, image synthesis & Zalando & In previous research, generating images that accurately capture the intricate details of clothing and the human body has proven to be a significant challenge. In light of this, the authors propose a three-stage design strategy. The first stage involves a pose aligning module, which leverages a self-attention mechanism to enhance the robustness of the correlation matching component. Additionally, the authors introduce a novel segmentation map generation module to accurately predict the body parts of individuals wearing the target clothing. Lastly, a new image synthesis network is presented, which integrates information from the predicted body part segmentation map, warped clothing, and other auxiliary body information to effectively preserve clothing and body part details.
\\



    \hline
 FashionOn & 2019 & pose-guided parsing translator, Conditional GAN, refinement generators & FashionOn,  DeepFashion  & The authors observed that previous virtual try-on systems cannot solve difficult cases, e.g., body occlusions, wrinkles of clothes, and details of the hair. Moreover, the existing systems require the users to upload the image for the target pose, which is not user-friendly. Their proposed network uses pose-guided parsing translation, segmentation region coloring, and salient region refinement to synthesize the try-on images, which helps to resolve the ill-posed problem. It also generates the pleats and shadows based on the body shape and the posture of the source person which achieves
\\

\bottomrule
\end{longtable}
\end{document}

Please advise how I can fix this.

Answer 1

You can redesign the table as follows:

[screenshot: the redesigned table rendered with tabularray]

\documentclass[journal]{IEEEtran}
\usepackage{lipsum}% For dummy text. Don't use in a real document

\usepackage{xcolor}
\usepackage{tabularray}
\usepackage{rotating}


\begin{document}
\lipsum[1]
\begingroup
\small
\DefTblrTemplate{contfoot-text}{default}{Continued on next column/page}
\SetTblrStyle{contfoot-text}{font=\footnotesize\itshape}
\begin{longtblr}{hlines, vlines,
                 hline{1, Z} = 1pt,
                 colsep  = 3pt,
                 colspec = {Q[l, wd=21mm]
                            X[j]},
                 cell{even}{1} = {c=2}{},
                 row{even}= {gray!10},
                 rowhead  = 1,                   
                 }
{Method/\\Technique} 
& Challenges/Proposed Solution \\*
{\textbf{Title:} VITON, 2018;\\
 \textbf{Dataset:} Zalando}
    &           \\* 
coarse-to-fine strategy  
    & According to the authors, the non-rigid nature of clothing is a significant challenge when it comes to satisfying virtual try-on requirements, particularly when 3D information is not available. This challenge is due to the frequent deformations and occlusions that occur.
        \\
{\textbf{Title:} CP-VTON, 2018;\\
 \textbf{Dataset:} Zalando} 
    &           \\*
Geometric Matching Module, Composition mask
    & In clothing try-on, the shape is essential, but details such as texture, logo, and embroidery are also very important in correctly transforming the target clothes. Previous works failed to handle large spatial misalignment between the input image and target clothes. Prior work explicitly tackled spatial deformation using shape context matching, but failed to preserve clothing details due to its coarse-to-fine strategy. To address this, the authors introduce a fully learnable network comprising two modules: one that uses a thin-plate spline transformation for cloth fitting via the Geometric Matching Module (GMM), and a try-on module that uses a composition mask for further smoothness.
    \\
{\textbf{Title:} M2E-Try On Net, 2019;\\
 \textbf{Dataset:} Deep Fashion, MVC Dataset}
    &           \\*
Dense pose based human Pose Aligned Network, geometric transformation for Texture Refinement
    & The authors have highlighted a significant limitation of prior research, namely the reliance on clean images of clothing. To address this challenge, the authors put forth a novel network architecture that extracts information from a model image and applies it to the target person's image, obviating the need for a clean image. To align the model image with the target person's image, a pose alignment network is employed. Next, a Texture Refinement Network is introduced to enhance the features of the aligned model image. Once the apparel image has been refined, the Fitting Network is employed to fit it to the target image. In addition, the authors have devised an Unpaired-Paired Joint Training strategy with the pose conditional GAN, which alleviates the issue of requiring expensive paired training data.\\
{\textbf{Title:} MG-VTON, 2019;\\
 \textbf{Dataset:} MPV, DeepFashion}
    &           \\*
Conditional parsing, Warp GAN, Refinement Rendering, GMM
    & This paper presents a multi-pose guided virtual try-on system that addresses several challenges, such as self-occlusions and heavy misalignment among different poses. The proposed Multi-pose Guided Virtual Try-On Network (MG-VTON) generates a new person image by fitting desired clothes into the person and manipulating the pose. MG-VTON comprises a conditional human parsing network, a deep Warping Generative Adversarial Network (Warp-GAN), and a refinement render network.\\
{\textbf{Title:} VTNFP, 2019;\\
 \textbf{Dataset:} Zalando}
    &           \\*
hybrid clothing-agnostic person representation, GMM, conditional GAN, image synthesis
    & In previous research, generating images that accurately capture the intricate details of clothing and the human body has proven to be a significant challenge. In light of this, the authors propose a three-stage design strategy. The first stage involves a pose aligning module, which leverages a self-attention mechanism to enhance the robustness of the correlation matching component. Additionally, the authors introduce a novel segmentation map generation module to accurately predict the body parts of individuals wearing the target clothing. Lastly, a new image synthesis network is presented, which integrates information from the predicted body part segmentation map, warped clothing, and other auxiliary body information to effectively preserve clothing and body part details.
            \\
{\textbf{Title:} FashionOn, 2019;\\
 \textbf{Dataset:} FashionOn,  DeepFashion}
    &       \\*
pose-guided parsing translator, Conditional GAN, refinement generators
    & The authors observed that previous virtual try-on systems cannot solve difficult cases, e.g., body occlusions, wrinkles of clothes, and details of the hair. Moreover, the existing systems require the users to upload the image for the target pose, which is not user-friendly. Their proposed network uses pose-guided parsing translation, segmentation region coloring, and salient region refinement to synthesize the try-on images, which helps to resolve the ill-posed problem. It also generates the pleats and shadows based on the body shape and the posture of the source person which achieves
            \\
\end{longtblr}
\endgroup
\lipsum\lipsum
\end{document}
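
A few notes on how this layout works (my reading of the code above, not part of the original answer): the longtblr environment from tabularray breaks across columns and pages on its own, rowhead = 1 repeats the header row after every break, the contfoot-text template controls the "Continued on next column/page" notice, and cell{even}{1} = {c=2}{} merges the title/dataset rows into full-width separator rows shaded by row{even} = {gray!10}. A stripped-down skeleton of the same pattern, with placeholder rows instead of the survey content and assuming a reasonably recent tabularray release, might look like this:

\documentclass[journal]{IEEEtran}
\usepackage{tabularray}

\begin{document}
% Minimal longtblr skeleton with placeholder content: the table breaks
% across columns/pages automatically, and rowhead = 1 repeats the
% header row after every break.
\begin{longtblr}{hlines, vlines,
                 colspec = {Q[l, wd=21mm] X[j]},
                 rowhead = 1}
Method & Challenges/Proposed Solution \\
A      & First long description ...   \\
B      & Second long description ...  \\
\end{longtblr}
\end{document}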
