In the code below, when I use the \For statement, it fails to compile with the following error:
! Missing \endcsname inserted. <to be read again> \ALG@currentblock@0
Can you help me get the code to compile? Thanks!
Here is the code:
\documentclass{article}
\usepackage{amsmath, amsfonts, amssymb, amsthm, bm}
\usepackage{commath}
\usepackage{xcolor}
\usepackage{algpseudocode}
\usepackage{pifont}
\usepackage[linesnumbered,ruled,vlined]{algorithm2e}
\newcommand\mycommfont[1]{\footnotesize\ttfamily\textcolor{blue}{#1}}
\SetCommentSty{mycommfont}
\begin{document}
\begin{algorithm}[H]
\DontPrintSemicolon
% \KwData{Training set $x$}
% $\Delta_{ji}^l := 0$ \tcp*{will be used to compute $\partial x$}
% \tcc{iterate over all training examples}
{\bfseries Input:} {a matrix of training samples $A = [A_{1}, A_{2}...,A_{k} ] \in \mathbb{R}^{m\times n}$ for $k$ classes, a test sample $\mathbf{y}\in \mathbb{R}^{m}$, (and an optional error tolerance $\varepsilon > 0$).}\\
Normalize the columns of $A$ to have unit $\ell^{2}$-norm.\\
Solve the $\ell^{1}$-minimization problem:
\begin{equation}
\hat{\bm{x}}_{1} = \arg \min_{x}\norm{\bm{x}}_{1}\quad \text{subject to}\quad A\bm{x} = \bm{y}
\end{equation}
(Or alternatively, solve
\begin{equation}
\hat{\bm{x}}_{1} = \arg \min_{x}\norm{\bm{x}}_{1}\quad \text{subject to}\quad \norm{A\bm{x} - \bm{y}}_{2}\leqslant \varepsilon).
\end{equation}\\
Compute the residuals $r_{i}(\bm{y}) = \norm{\bm{y} - A \delta_{i} (\hat{\bm{x}}_{1})}_{2}$\\
\For{$i = 1,\ldots,k$.}
\EndFor\\
{\bfseries Output:} identity$(\bm{y}) = \arg \min_{i}r_{i}$
\caption{Sparse Representation-based Classification (SRC)}
\end{algorithm}
\end{document}
Answer 1
The problem is caused by mixing two incompatible algorithm packages: algorithm2e provides both the algorithm environment and the syntax macros for writing the pseudocode, whereas algpseudocode provides only the syntax macros. Use either algorithm2e or algpseudocode (together with algorithm), but not both.

Here is a rough algorithm2e implementation of what you may be after:
\documentclass{article}
\usepackage{amsmath,amssymb,commath,bm}
\usepackage{xcolor}% provides \textcolor used in the comment style below
\usepackage[linesnumbered,ruled,vlined]{algorithm2e}
\newcommand\mycommfont[1]{\footnotesize\ttfamily\textcolor{blue}{#1}}
\SetCommentSty{mycommfont}
% http://tex.stackexchange.com/a/153906/5764
\let\oldnl\nl% Store \nl in \oldnl
\newcommand{\nonl}{\renewcommand{\nl}{\let\nl\oldnl}}% Remove line number for one line
\begin{document}
\begin{algorithm}[H]
\DontPrintSemicolon
\caption{Sparse Representation-based Classification (SRC)}
\KwIn{a matrix of training samples $A = [A_{1}, A_{2}, \dots ,A_{k}] \in \mathbb{R}^{m \times n}$
for $k$ classes, a test sample $\mathbf{y} \in \mathbb{R}^{m}$, (and an optional error tolerance $\varepsilon > 0$).}
Normalize the columns of $A$ to have unit $\ell^{2}$-norm.\;
Solve the $\ell^{1}$-minimization problem:
$\hat{\bm{x}}_{1} = \arg \min_{x}\norm{\bm{x}}_{1}\quad \text{subject to}\quad A\bm{x} = \bm{y}$ \;
\nonl (Or alternatively, solve
$\hat{\bm{x}}_{1} = \arg \min_{x}\norm{\bm{x}}_{1}\quad \text{subject to}\quad \norm{A\bm{x} - \bm{y}}_{2} \leqslant \varepsilon$).\;
Compute the residuals $r_{i}(\bm{y}) = \norm{\bm{y} - A \delta_{i}(\hat{\bm{x}}_{1})}_{2}$\;
\For{$i = 1,\dots,k$}{something}
\end{algorithm}
\end{document}
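For completeness, if you would rather keep algpseudocode, drop algorithm2e and pair it with the algorithm float package instead. Below is a minimal sketch of that route, reusing the steps from your code; moving the residual computation inside the \For loop is my reading of what you intended.

\documentclass{article}
\usepackage{amsmath,amssymb,commath,bm}
\usepackage{algorithm}      % float wrapper for the algorithm environment
\usepackage{algpseudocode}  % algorithmicx syntax macros (\State, \For, ...)
\begin{document}
\begin{algorithm}[H]
\caption{Sparse Representation-based Classification (SRC)}
\begin{algorithmic}[1]
\Require a matrix of training samples $A = [A_{1}, A_{2}, \dots, A_{k}] \in \mathbb{R}^{m \times n}$
  for $k$ classes, a test sample $\mathbf{y} \in \mathbb{R}^{m}$,
  (and an optional error tolerance $\varepsilon > 0$).
\State Normalize the columns of $A$ to have unit $\ell^{2}$-norm.
\State Solve the $\ell^{1}$-minimization problem
  $\hat{\bm{x}}_{1} = \arg \min_{x}\norm{\bm{x}}_{1}$ subject to $A\bm{x} = \bm{y}$
  (or $\norm{A\bm{x} - \bm{y}}_{2} \leqslant \varepsilon$).
\For{$i = 1,\dots,k$}
  \State compute the residual $r_{i}(\bm{y}) = \norm{\bm{y} - A \delta_{i}(\hat{\bm{x}}_{1})}_{2}$
\EndFor
\State \textbf{Output:} identity$(\bm{y}) = \arg \min_{i} r_{i}$
\end{algorithmic}
\end{algorithm}
\end{document}

Here the [H] placement works because algorithm is built on the float package; the blue comment style from \SetCommentSty is algorithm2e-specific and is simply omitted.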