How to enumerate objects in an xltabular / long table

I am using a template provided by my department. It contains a thesis.tex file; I believe the relevant part is this:

\documentclass[twoside,mtp]{iiitg}
\usepackage{fancyhdr}
  \fancyhead{}
  \fancyhead[LO]{\slshape \rightmark}
  \fancyhead[RO,LE]{\textbf{\thepage}}
  \fancyhead[RE]{\slshape \leftmark}
  \fancyfoot{}
  \pagestyle{fancy}
  \renewcommand{\chaptermark}[1]{\markboth{\chaptername \ \thechapter \ \ #1}{}}
  \renewcommand{\sectionmark}[1]{\markright{\thesection \ \ #1}}
\tableofcontents

 \clearemptydoublepage

 % Make the list of figures
 \listoffigures
 \clearemptydoublepage

 % Make the list of tables
 \listoftables
 \clearemptydoublepage

%\phantomsection \addcontentsline{toc}{chapter}{List of Symbols and Abbreviation}
%\include{files/symb_b}
%\include{files/symb_b1}
%\clearemptydoublepage

\onehalfspace

 % Start regular page counting at page 1
\mainmatter
\addtolength{\parskip}{0.05\baselineskip}

\abovedisplayskip=13pt
\belowdisplayskip=13pt

\clearemptydoublepage
\input{texfiles/chapter1}
\clearemptydoublepage
\input{texfiles/chapter2}
\clearemptydoublepage
\input{texfiles/chapter3}
\clearemptydoublepage
\input{texfiles/chapter4}
\clearemptydoublepage
\input{texfiles/conclusion}
\clearemptydoublepage

In Chapter 2 I have a very long table which, because of the text above it, does not fit on one page. So I wrote it as follows:

\begin{xltabular}{\textwidth}{@{} l X @{} X @{} X} 
\hline       
\thead{Algorithm}   & \thead{Pros}  & \thead{Cons} \\ \hline
\begin{enumerate}[label={}, wide = 0pt, leftmargin = *, nosep, itemsep = 0pt, before = \vspace*{\baselineskip}, after =\vspace*{\baselineskip} ]
\item K Nearest Neighbour
\item K-NN
    \end{enumerate}   & \begin{enumerate}
    \item Very easy to understand 
    \item Good for creating models that include non standard data types such as
    text
\end{enumerate}       & Large Storage requirements
Computationally Expensive
Sensitive to the choice of the similarity function for comparing instances             \\ \hline
Local Outlier Factor(LOF)  & Well-known and good algorithm
for local anomaly detection
             & Only relies on its direct neighborhood .\newline Perform poorly on data sets with global anomalies. \\ \hline
K Means       & Low Complexity \newline Very easy to implement & Each cluster has pretty equal number of observations \newline Necessity of specifying K \newline Only work with numerical data \\ \hline
Support Vector Machine (SVM) & Find the best separation hyper-plane.Deal with very high dimensional data.\newline 
Can learn very elaborate concepts.
Work very well         & Require both positive and negative examples. Require lots of memory.\newline Some numerical stability problems.Need to select a good kernel function   \\ \hline
Neural networks based anomaly detection & Learns and does not need to be reprogrammed.\newline Can be implemented in any application  &    Needs training to operate \newline Requires high processing time for large neural networks \newline The architecture needs to be emulated          \\ \hline
    \caption{Anomaly Detection Algorithms comparison}
    \label{tab:algorithm_comp}
    \end{xltabular}

The rendered table looks like this:

(screenshot of the rendered table)

I think something is wrong with it, but I cannot tell from its appearance what the error is. What might the mistake in the above implementation be?

Answer 1

The main problem is the l column, in which you try to put several paragraphs. I have redefined it as a left-aligned X column. If you need different column widths, have a look at the tabularx documentation on the use of \hsize.
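For reference, unequal X-column widths can be requested via \hsize (a minimal sketch, not tested against the iiitg class; the \hsize factors must add up to the number of X columns):

```latex
% Give the first column half again as much width as the other two.
% 1.5 + 0.75 + 0.75 = 3 = number of X columns.
\begin{xltabular}{\textwidth}{@{}
    >{\hsize=1.5\hsize\raggedright\arraybackslash}X
    >{\hsize=0.75\hsize\raggedright\arraybackslash}X
    >{\hsize=0.75\hsize\raggedright\arraybackslash}X @{}}
  ...
\end{xltabular}
```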

I have also defined the missing \thead command and replaced all \hline commands with booktabs rules.

There are still a few bad line breaks, but those can be fixed.
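One way to remove most of them (a sketch, not tested here) is to typeset all three columns ragged right instead of justified, since justification is what causes the large interword gaps in narrow columns:

```latex
% A ragged-right version of the X column.
\newcolumntype{L}{>{\raggedright\arraybackslash}X}
% Then use: \begin{xltabular}{\textwidth}{@{} L L L @{}}
```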


\documentclass{article}
\usepackage{xltabular, booktabs, enumitem}
\usepackage{babel}

\newcommand{\thead}[1]{\multicolumn{1}{c}{\bfseries #1}}

\begin{document}

\begin{xltabular}{\textwidth}{@{} >{\raggedright\arraybackslash}X X X @{}} 
\caption{Anomaly Detection Algorithms comparison\label{tab:algorithm_comp}}\\

\toprule      
\thead{Algorithm}   & \thead{Pros}  & \thead{Cons} \\ \midrule
\begin{enumerate}%
[label={}, wide = 0pt, leftmargin = *, nosep, itemsep = 0pt, before = \vspace*{\baselineskip}, after =\vspace*{\baselineskip} ]
\item K Nearest Neighbour
\item K-NN
\end{enumerate}   & \begin{enumerate}
    \item Very easy to understand 
    \item Good for creating models that include non standard data types such as
    text
\end{enumerate}       & Large Storage requirements
Computationally Expensive
Sensitive to the choice of the similarity function for comparing instances             \\ \midrule
Local Outlier Factor (LOF)  & Well-known and good algorithm
for local anomaly detection
             & Only relies on its direct neighborhood.\newline Perform poorly on data sets with global anomalies. \\ \midrule
K Means       & Low Complexity \newline Very easy to implement & Each cluster has pretty equal number of observations \newline Necessity of specifying K \newline Only work with numerical data \\ \midrule
Support Vector Machine (SVM) & Find the best separation hyper-plane. Deal with very high dimensional data.\newline 
Can learn very elaborate concepts.
Work very well         & Require both positive and negative examples. Require lots of memory.\newline Some numerical stability problems. Need to select a good kernel function   \\ \midrule
Neural networks based anomaly detection & Learns and does not need to be reprogrammed.\newline Can be implemented in any application  &    Needs training to operate \newline Requires high processing time for large neural networks \newline The architecture needs to be emulated          \\ \bottomrule

    \end{xltabular}

\end{document}

Answer 2

Here is my suggestion. I have changed the horizontal alignment in the columns from justified to ragged right, which prevents the large gaps between words. Since the first column contains less text than the second and third, I have made it narrower than the others. To give the entries in the Pros and Cons columns more structure, I use a newly defined tabitem environment. In the MWE below I have also included a variant that needs only two columns:


\documentclass{article}
\usepackage{xltabular, booktabs, enumitem}

\newlist{tabitem}{itemize}{1}
\setlist[tabitem]{wide=0pt, nosep, leftmargin= * ,label=\textendash,after=\vspace{-\baselineskip},before=\vspace{-0.6\baselineskip}}

\usepackage{makecell}
\renewcommand{\theadfont}{\normalsize\bfseries}

\newcolumntype{L}{>{\raggedright\arraybackslash}X}

\begin{document}


\begin{xltabular}{\textwidth}{@{} >{\raggedright\arraybackslash}p{1.85cm}LL @{}}
\caption{Anomaly Detection Algorithms comparison\label{tab:algorithm_comp}}\\ 
\toprule      
\thead{Algorithm}   & \thead{Pros}  & \thead{Cons} \\ 
\midrule
\endfirsthead
\toprule      
\thead{Algorithm}   & \thead{Pros}  & \thead{Cons} \\ 
\midrule
\endhead
K Nearest Neighbour K-NN
    & \begin{tabitem}
      \item Very easy to understand 
      \item Good for creating models that include non standard data types such as text
      \end{tabitem}       
          & \begin{tabitem} 
            \item Large Storage requirements 
            \item Computationally Expensive 
            \item Sensitive to the choice of the similarity function for comparing instances 
            \end{tabitem}            \\ 
\midrule
Local Outlier Factor (LOF)  
    & \begin{tabitem}
      \item Well-known and good algorithm for local anomaly detection
      \end{tabitem}
        &  \begin{tabitem} 
           \item Only relies on its direct neighborhood.
           \item Perform poorly on data sets with global anomalies. 
           \end{tabitem}\\ 
\midrule
K Means       
    & \begin{tabitem}
      \item Low Complexity 
      \item Very easy to implement 
      \end{tabitem}
          & \begin{tabitem}
            \item Each cluster has pretty equal number of observations 
            \item Necessity of specifying K 
            \item Only work with numerical data
            \end{tabitem} \\ 
\midrule
Support Vector Machine (SVM) 
    & \begin{tabitem}
      \item Find the best separation hyper-plane. 
      \item Deal with very high dimensional data.
      \item Can learn very elaborate concepts.
      \item Work very well
      \end{tabitem}        
          & \begin{tabitem}
            \item Require both positive and negative examples. 
            \item Require lots of memory.
            \item Some numerical stability problems.
            \item Need to select a good kernel function
            \end{tabitem}   \\ 
\midrule
Neural networks based anomaly detection 
    & \begin{tabitem}
      \item Learns and does not need to be reprogrammed
      \item Can be implemented in any application 
      \end{tabitem} 
          & \begin{tabitem}
            \item Needs training to operate 
            \item Requires high processing time for large neural networks 
            \item The architecture needs to be emulated 
            \end{tabitem}\\ 
\bottomrule
\end{xltabular}

%\pagebreak

\begin{xltabular}{\textwidth}{LL @{}}
\caption{Anomaly Detection Algorithms comparison\label{tab:algorithm_comp2}}\\ % distinct label: the first table already uses tab:algorithm_comp
\toprule      
    \thead{Pros}  & \thead{Cons} \\ 
\midrule
\endfirsthead
\toprule      
    \thead{Pros}  & \thead{Cons} \\ 
\midrule
\endhead
\multicolumn{2}{@{}l}{\itshape K Nearest Neighbour K-NN}\\*
     \begin{tabitem}
     \item Very easy to understand 
     \item Good for creating models that include non standard data types such as text
      \end{tabitem}       
          & \begin{tabitem} 
            \item Large Storage requirements 
            \item Computationally Expensive 
            \item Sensitive to the choice of the similarity function for comparing instances 
            \end{tabitem}            \\ 
\midrule
\multicolumn{2}{@{}l}{\itshape Local Outlier Factor (LOF)}\\*  
      \begin{tabitem}
      \item Well-known and good algorithm for local anomaly detection
      \end{tabitem}
        &  \begin{tabitem} 
           \item Only relies on its direct neighborhood.
           \item Perform poorly on data sets with global anomalies. 
           \end{tabitem}\\ 
\midrule
\multicolumn{2}{@{}l}{\itshape K Means}\\*      
      \begin{tabitem}
      \item Low Complexity 
      \item Very easy to implement 
      \end{tabitem}
          & \begin{tabitem}
            \item Each cluster has pretty equal number of observations 
            \item Necessity of specifying K 
            \item Only work with numerical data
            \end{tabitem} \\ 
\midrule
\multicolumn{2}{@{}l}{\itshape Support Vector Machine (SVM)}\\*
      \begin{tabitem}
      \item Find the best separation hyper-plane. 
      \item Deal with very high dimensional data.
      \item Can learn very elaborate concepts.
      \item Work very well
      \end{tabitem}        
          & \begin{tabitem}
            \item Require both positive and negative examples. 
            \item Require lots of memory.
            \item Some numerical stability problems.
            \item Need to select a good kernel function
            \end{tabitem}   \\ 
\midrule
\multicolumn{2}{@{}l}{\itshape Neural networks based anomaly detection}\\ 
      \begin{tabitem}
      \item Learns and does not need to be reprogrammed
      \item Can be implemented in any application 
      \end{tabitem} 
          & \begin{tabitem}
            \item Needs training to operate 
            \item Requires high processing time for large neural networks 
            \item The architecture needs to be emulated 
            \end{tabitem}\\ 
\bottomrule
\end{xltabular}
\end{document}

A completely different approach:


\documentclass{article}
\usepackage{enumitem}
\newlist{proconlist}{itemize}{1}
\setlist[proconlist]{label=+.,leftmargin=*, nosep}

\usepackage{caption}
\begin{document}

\captionof{table}{Pros (+) and Cons (--) of Different Anomaly Detection Algorithms \label{tab:algorithm_comp}} 
\begin{enumerate}[leftmargin=*]
  \item K Nearest Neighbour K-NN
    \begin{proconlist}
        \item[+]  Very easy to understand 
        \item[+]  Good for creating models that include non standard data types such as text
        \item[--] Large Storage requirements 
        \item[--] Computationally Expensive 
        \item[--] Sensitive to the choice of the similarity function for comparing instances 
    \end{proconlist}       
  \item Local Outlier Factor (LOF)  
    \begin{proconlist}
        \item[+]  Well-known and good algorithm for local anomaly detection
        \item[--] Only relies on its direct neighborhood.
        \item[--] Perform poorly on data sets with global anomalies. 
    \end{proconlist} 
  \item K Means       
    \begin{proconlist}
        \item[+]  Low Complexity 
        \item[+]  Very easy to implement 
        \item[--]  Each cluster has pretty equal number of observations 
        \item[--]  Necessity of specifying K 
        \item[--]  Only work with numerical data
    \end{proconlist} 
  \item Support Vector Machine (SVM) 
    \begin{proconlist}
        \item[+] Find the best separation hyper-plane. 
        \item[+] Deal with very high dimensional data.
        \item[+] Can learn very elaborate concepts.
        \item[+] Work very well
        \item[--] Require both positive and negative examples. 
        \item[--] Require lots of memory.
        \item[--] Some numerical stability problems.
        \item[--] Need to select a good kernel function
    \end{proconlist} 
  \item Neural networks based anomaly detection 
    \begin{proconlist}
        \item[+] Learns and does not need to be reprogrammed
        \item[+] Can be implemented in any application 
        \item[--] Needs training to operate 
        \item[--] Requires high processing time for large neural networks 
        \item[--] The architecture needs to be emulated 
    \end{proconlist} 
\end{enumerate}

\end{document}
