I merged two columns in the last row. How can I make the text display as a bulleted list that starts at the left edge of the cell, without leaving too much white space? Also, how do I get rid of the extra space at the bottom of the row? Thanks in advance!
\documentclass[8pt]{article}
\usepackage{array}
\usepackage{pdflscape}
\usepackage{comment}
\usepackage{graphicx}
\usepackage{easytable}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage{rotating}
\usepackage{makecell}
\usepackage{multirow}
\usepackage{booktabs}
\usepackage{multirow,hhline,graphicx,array}
\usepackage[margin=0.5in]{geometry}
%\DeclareMathSizes{8}{16}{16}{8}
\newcommand{\x}{\mathbf{x}}
\newcommand{\g}{\mathbf{g}}
\newcommand{\h}{\mathbf{h}}
\newcommand{\0}{\mathbf{0}} %<- that's not a good idea
\newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}
\begin{document}
\aboverulesep=0ex
\belowrulesep=0ex
%\renewcommand{\arraystretch}{5}
\newgeometry{margin=0.1cm}
\begin{landscape}
% Table generated by Excel2LaTeX from sheet 'Sheet1'
\begin{table}[htbp]
\centering
\caption{Add caption}
\begin{tabular}{|p{0.7em}| p{0.7em}|p{20em}|p{21em}|p{21em}|}
\cmidrule{3-5} \multicolumn{1}{c}{}
&
&
\makecell{\textbf{Unconstrained} \\ $\underset{\x\in\mathbb{R}^n}
{\mathrm{minimize}}\ f(\x)$}
&
\makecell{\textbf{Constrained: Reduced Form} \\
$\underset{\x\in\mathbb{R}^n}{\mathrm{minimize}}\ f(\x)$ \\
$\mathrm{subject\ to\ } \h(\x)=\0 $}
&
\makecell{\textbf{Constrained: Lagrangian Form} \\
$\underset{\x\in\mathbb{R}^n}{\mathrm{minimize}}\ f(\x)$ \\
$\mathrm{subject\ to\ } \h(\x)=\0,\g(\x)\leq\0$ }
\\
\midrule
\multirow{2}{*}{\rotatebox[origin=r]{90}{\makecell{Local Optimality
Conditions~~~~~~~~~~~~~~~~~~~~~~~~~~~~}}} & \multicolumn{1}{p{0.7em}|}
{\rotatebox[origin=r]{90}{\ First Order Necessary~~~~~~\ }}
&
At a local minimizer, the gradient of the objective function must be zero
\[
\nabla f(\x_\dagger)=\0
\]
&
At a local minimizer, the reduced gradient must be zero if $\partial
h/\partial s$ is invertible.
\[
\nabla_d f_R (x_{\dagger})=0
\]
\[
h(x_{\dagger})=0
\]
\[
\text{where } x= \begin{bmatrix}
d\\s
\end{bmatrix}
,\nabla_d f_R (x_{\dagger})=\frac{\partial f}{\partial d}-\frac{\partial f}
{\partial s} \bigg( \frac{\partial h}{\partial s} \bigg )^{-1}\frac{\partial
h}{\partial d}
\]
&
At a local minimizer, the KKT conditions must be satisfied if the point is
regular, i.e., if the linear independence constraint qualification (LICQ)
holds: $\nabla h_{\dagger}(x_{*})$ has linearly independent rows.
\[
\nabla _x L(x_{\dagger})=0
\]
\[
h(x_{\dagger})=0,g(x_{\dagger})\leq 0
\]
\[
\mu_{\dagger}^T g(x_{\dagger})=0
\]
\[
\mu_{\dagger}\geq 0
\]
\[
\text{where } L(x_{\dagger})=f(x_{\dagger})+\lambda^T h(x_{\dagger})+\mu^T
g(x_{\dagger})
\]
\\
\cmidrule{2-5} \multicolumn{1}{|c|}{}
&
\multicolumn{1}{p{0.7em}|}{\rotatebox[origin=r]{90}{\ Second Order
Sufficiency~~~~~~~~\ }}
&
If the Hessian of the objective function is positive definite at a point
where the gradient is zero, the point is a local minimum.
\[
\partial x^T\nabla^2f(x_{*})\partial x>0
\]
\[
\forall \partial x \neq 0
\]
A Hessian matrix is positive definite if all of its eigenvalues are
positive.
&
If the reduced Hessian is positive definite at a point where the reduced
gradient is zero, the point is a local minimum.
\[
\partial d^T \nabla_d^2 f_R (x_{*})\partial d>0, \forall \partial d \neq 0
\]
\[
\text{where }\nabla_d^2 f_R (x_{*})=A \frac{\partial ^2 f}{\partial x^2}
A^{T}+ \frac{\partial f}{\partial s} \frac{\partial ^2 s}{\partial d^2}
\]
\[
A=
\bigg[
I \hspace{2mm}\bigg({\frac{\partial s}{\partial d}\bigg)}^T
\bigg]
, \frac{\partial^2 s}{\partial d^2} =-\bigg(\frac{\partial h}{\partial
s}\bigg)^{-1} A \frac{\partial^2 h}{\partial x^2} A^{T}
\]
&
If the Hessian of the Lagrangian is positive definite on the subspace
tangent to the active constraints at a KKT point, the point is a local
minimum.
\[
\partial x^T\nabla^2_x L(x_{*})\partial x>0
\]
\[
\forall \partial x \neq 0: \nabla_x h_{\dagger}(x_{*})\partial x = 0
\]
\[
\text{where }h_{\dagger}(x_{*}) = [h(x_{*})^T, g_j(x_{*})\forall
j:\mu_j>0]^T
\]
A Hessian matrix is positive definite on the subspace tangent to the
active constraints if the last $n-m$ leading principal minors of the
bordered Hessian $\begin{bmatrix}
0 & \nabla h\\ \nabla h^T & \nabla^2_x L
\end{bmatrix}$ have sign $(-1)^m$, where $m$ is the number of active
constraints.
\\
\midrule
\multicolumn{1}{|p{1.4em}|}{\rotatebox[origin=r]{90}{\makecell{\ Global Optimality Conditions~~~~~~~}\ }}
&
\multicolumn{1}{p{1.4em}|}{\rotatebox[origin=r]{90}{\makecell{\
Convexity~~~~~~~~~~~~~~~~~~}\
}}
&
\begin{itemize}
\item For convex functions, if a point is a local minimum it is also the
global minimum and a local minimizer is also a global minimizer (not
necessarily the only one).
\item If the objective function is nonconvex, it may or may not have
multiple local minima.
\item A convex function* is a function whose Hessian is positive
semidefinite for all x.
\item A Hessian matrix is positive semidefinite if all of its eigenvalues
are nonnegative.
\end{itemize}
&
\multicolumn{1}{c}{}
&
\begin{itemize}
\item A convex optimization problem is a problem in negative null form where
f(x) and g(x) are each convex functions and h(x) are affine functions.
\item For convex optimization problems, a local minimum is also the global
minimum, and a local minimizer is also a global minimizer (not necessarily the only one).
\item A nonconvex optimization problem may or may not have multiple
local minima and/or disconnected feasible regions.
\end{itemize}
\\
\bottomrule
\end{tabular}%
\label{tab:addlabel}%
\end{table}%
\end{landscape}
\restoregeometry
\end{document}
Answer 1

Here is an improvement: it exploits the possibilities of makecell and enumitem, and simplifies some of the code by loading tabularx:
\documentclass[8pt]{extarticle}
\usepackage{array}
\usepackage{pdflscape}
\usepackage{comment}
\usepackage{graphicx}
\usepackage{easytable}
\usepackage{enumitem}
\usepackage{amssymb}
\usepackage{mathtools, nccmath, esdiff}
\usepackage{rotating}
\usepackage{makecell}
\renewcommand{\theadfont}{\normalsize\bfseries}
\usepackage{booktabs}
\usepackage{multirow,hhline,graphicx,array, caption, tabularx}
\usepackage[margin=0.5in]{geometry}
\newcommand{\x}{\mathbf{x}}
\newcommand{\g}{\mathbf{g}}
\newcommand{\h}{\mathbf{h}}
\newcommand{\0}{\mathbf{0}} %<- that's not a good idea
\newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}
\makeatletter
\newcommand*{\compress}{\@minipagetrue}
\makeatother
\newlength{\TXcolwd}
\begin{document}
\aboverulesep=0ex
\belowrulesep=0ex
\renewcommand{\theadalign}{tc}
\newgeometry{margin=0.1cm}
\begin{landscape}
\null\vfill
% Table generated by Excel2LaTeX from sheet 'Sheet1'
\begin{table}[htbp]
\setlist[itemize, 1]{wide=0pt, leftmargin=*, before=\compress, after=\vspace*{\dimexpr\topsep-\baselineskip}}
\setlength{\extrarowheight}{4pt}
\centering
\caption{Add caption}
\begin{tabularx}{\linewidth}{|c|c|X|X|X|}% }{|p{0.7em}|p{0.4em}|X|X|X|}% p{0.7em}
\cmidrule{3-5} \multicolumn{1}{c}{}
& & \thead{Unconstrained \\[1ex] $\underset{\x \in \mathbb{R}^n}
{\mathrm{minimize}}\ f(\x)$}
&
\thead{Constrained: Reduced Form \\
$\begin{array}{l}\underset{\x \in \mathbb{R}^n}{\mathrm{minimize}}\ f(\x) \\
\mathrm{subject\ to\enspace} \h(\x)=\0
\end{array} $}
&
\thead{Constrained: Lagrangian Form \\
$\begin{array}{l}\underset{\x \in \mathbb{R}^n}{\mathrm{minimize}}\ f(\x) \\
\mathrm{subject\ to\ } \h(\x)=\0,\g(\x)\leq\0
\end{array} $ } \\
\midrule
\multirowcell{20}{\rotatebox{90}{Local Optimality Conditions}}%
&
\multirowcell{9}{\rotatebox{90}{First Order Necessary}}
&
At a local minimizer, the gradient of the objective function must be zero
\[ \nabla f(\x_\dagger)=\0 \]
&
At a local minimizer, the reduced gradient must be zero if $\partial h/\partial s$ is invertible. \useshortskip
\begin{gather*}
\nabla_d f_R (x_{\dagger})=0 \\
h(x_{\dagger})=0 \\
\text{where } x= \begin{bmatrix}
d\\s
\end{bmatrix},\:\nabla_d f_R (x_{\dagger})=\frac{\partial f}{\partial d}-\frac{\partial f}
{\partial s} \biggl( \diffp{h}{s} \biggr )^{\mkern-6mu-1}\diffp{h}{d}
\end{gather*}
&
At a local minimizer, the KKT conditions must be satisfied if the point is regular, i.e., if the linear independence constraint qualification (LICQ) holds: $\nabla h_{\dagger}(x_{*})$ has linearly independent rows.\useshortskip
\begin{gather*}
\nabla _x L(x_{\dagger})=0 \\
h(x_{\dagger})=0,g(x_{\dagger}) \le 0 \\
\mu_{\dagger}^T g(x_{\dagger})=0 \\
\mu_{\dagger} \ge 0 \\
\text{where } L(x_{\dagger})=f(x_{\dagger})+\lambda^T h(x_{\dagger})+\mu ^T
g(x_{\dagger})
\end{gather*}
\vspace*{\dimexpr 1ex-\baselineskip} \\
\cmidrule{2-5}%
&
\multirowcell{11}{\rotatebox{90}{Second Order Sufficiency}} %
&
If the Hessian of the objective function is positive definite at a point where the gradient is zero, the point is a local minimum.
\begin{gather*}
\partial x^T\nabla^2f(x_{*})\partial x>0 \\
\forall \partial x \neq 0
\end{gather*}
A Hessian matrix is positive definite if all of its eigenvalues are positive.
&
If the reduced Hessian is positive definite at a point where the reduced gradient is zero, the point is a local minimum.
\begin{gather*}
\partial d^T \nabla_d^2 f_R (x_{*})\partial d>0, \forall \partial d \neq 0 \\
\text{where }\nabla_d^2 f_R (x_{*})=A \frac{\partial ^2 f}{\partial x^2}
A^{T}+ \diffp{f}{s} \diffp[2]{s}{d} \\
A= \biggl[
I \hspace{2mm}\biggl({\diffp{s}{d}\biggr)}^T
\biggr],
\frac{\partial^2 s}{\partial d^2} =-\biggl(\diffp{h}{s}\biggr)^{\mkern-6mu -1} A\, \diffp[2]{h}{x} A^{T}
\end{gather*}
&
If the Hessian of the Lagrangian is positive definite on the subspace tangent to the active constraints at a KKT point, the point is a local minimum.
\begin{gather*}
\partial x^T\nabla^2_x L(x_{*})\partial x>0 \\
\forall \partial x \neq 0: \nabla_x h_{\dagger}(x_{*})\partial x = 0 \\
\text{where }h_{\dagger}(x_{*}) = [h(x_{*})^T, g_j(x_{*})\forall
j:\mu_j>0]^T
\end{gather*}
A Hessian matrix is positive definite on the subspace tangent to the active constraints if the last $ n $-$ m $ leading principal minors of the bordered Hessian %
$\begin{bmatrix}
0 & \nabla h\\ \nabla h^T & \nabla^2_x L
\end{bmatrix}$ have sign $(-1)^m$, where $ m $ is the number of active
constraints. \smallskip
\\
\midrule
\multirowcell{9}{\rotatebox{90}{Global Optimality Conditions}}
&
\multirowcell{9}{\rotatebox{90}{Convexity}}
& \begin{itemize}
\item For convex functions, if a point is a local minimum it is also the global minimum and a local minimizer is also a global minimizer (not necessarily the only one).
\item If the objective function is nonconvex, it may or may not have multiple local minima.
\item A convex function* is a function whose Hessian is positive semidefinite for all x.
\item A Hessian matrix is positive semidefinite if all of its eigenvalues are nonnegative.
\end{itemize}
&
\multicolumn{2}{p{57em}|}{%
\begin{itemize}
\item A convex optimization problem is a problem in negative null form where f(x) and g(x) are each convex functions and h(x) are affine functions.
\item For convex optimization problems, a local minimum is also the global minimum, and a local minimizer is also a global minimizer (not necessarily the only one).
\item A nonconvex optimization problem may or may not have multiple local minima and/or disconnected feasible regions.
\end{itemize}} \\
\bottomrule
\end{tabularx}%
\label{tab:addlabel}%
\end{table}%
\vfill
\end{landscape}
\restoregeometry
\end{document}
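The tight bullets in the code above come from the enumitem setup (`\setlist[itemize, 1]{wide=0pt, leftmargin=*, ...}`). As a minimal sketch of that idea in isolation, using enumitem's simpler `nosep` key instead of the `\compress`/`\dimexpr` fine-tuning above:

```latex
\documentclass{article}
\usepackage{enumitem}
\usepackage{array}
\begin{document}
% wide=0pt removes the paragraph indent so items start at the cell edge;
% leftmargin=* keeps the bullets inside the cell;
% nosep suppresses the vertical space itemize adds above and below.
\setlist[itemize]{wide=0pt, leftmargin=*, nosep}
\begin{tabular}{|p{12em}|}
\hline
\begin{itemize}
  \item first bullet, flush with the cell edge
  \item second bullet, no trailing gap below
\end{itemize} \\
\hline
\end{tabular}
\end{document}
```

`nosep` is coarser than the `after=\vspace*{...}` hook used in the answer, but it shows the mechanism with less machinery.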
Answer 2

I think you have to use the \multicolumn command differently:
\multicolumn{2}{p{42em}|}{
\begin{itemize}
\item A convex optimization problem is a problem in negative null form
where f(x) and g(x) are each convex functions and h(x) are affine
functions.
\item For convex optimization problems, a local minimum is also the global
minimum, and a local minimizer is also a global minimizer (not necessarily
the only one).
\item A nonconvex optimization problem may or may not have multiple
local minima and/or disconnected feasible regions.
\end{itemize}}
See the documentation, or "How to combine columns in a table?", when in doubt.

As for vertical alignment, the column type m should do the trick:

"p, m and b columns in tables"
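As a minimal illustration of the m column type (a standalone sketch, not taken from the question's table): array's `m{<width>}` centers cell content vertically, whereas `p` aligns at the top and `b` at the bottom.

```latex
\documentclass{article}
\usepackage{array} % provides the m{<width>} and b{<width>} column types
\begin{document}
\begin{tabular}{|m{5em}|m{12em}|}
\hline
% the short cell sits vertically centered next to the wrapped paragraph
short & a longer paragraph that wraps onto several lines, with the
neighbouring cell centered vertically beside it \\
\hline
\end{tabular}
\end{document}
```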