I am trying to figure out how to display the algorithm steps with algorithm2e. My code is as follows:
\documentclass[17pt]{article}
\title{AlgorithmTemplate}
\usepackage{fullpage}
\usepackage{times}
\usepackage{fancyhdr,graphicx,amsmath,amssymb}
\usepackage[ruled,vlined]{algorithm2e}
\input{pythonlisting}
\begin{document}
\begin{algorithm}[H]
\SetAlgoLined
\KwIn{Multi-label dataset : $\left(x^{(n)}, \mathbf{y}^{(n)}\right), n=1,2, \dots, N$ ; \newline A zero matrix $A \in \mathbb{R}{^{n\times n}}$ ; \newline A numpy matrix $H^l \in \mathbb{R}{^{n\times d}}$}
\KwOut{Predicted label set $\mathbf{\widehat y}$}
\emph{Feature vector from bi-directional LSTM}\;
\For{each epoch}{
\emph{Feature vector from bi-directional LSTM}\;
\For{each batch}{\label{forins}
$1)$ Compute \textit{x} using equation (1)\newline
$\mathbf{\textit{x} = f_{rnn}(f_{word2vec}(I;\theta_{word2vec});\theta_{rnn})\in\mathbb{R}^{D}}$\newline
Compute forward pass for lstm \newline
$\overrightarrow{\boldsymbol{h}}_{i}=\overrightarrow{\mathrm{LSTM}}\left(\overrightarrow{\boldsymbol{h}}_{i-1}, \boldsymbol{x}_{i}\right)$\newline
Compute backward pass for lstm \newline
$\overleftarrow{\boldsymbol{h}}_{i}=\overleftarrow{\mathrm{LSTM}}\left(\overleftarrow{\boldsymbol{h}}_{i+1}, \boldsymbol{x}_{i}\right)$ \newline
Concatenate the hidden states from both directions \newline
$\boldsymbol{h}_{i}=\left[\overrightarrow{\boldsymbol{h}_{i}} ; \overleftarrow{\boldsymbol{h}}_{i}\right]$\newline
$2)$ Compute \textit{W} using equation (10)\newline
$\mathbf{W = f_{network} (I;J;\theta_{network})}$ \newline
Compute multi-head attention using equation (9)\newline
$\mathbf{H_{i}^{(l+1)}=\sigma\left(\frac{1}{K} \sum_{k=1}^{K} \sum_{j \in N(i)} \alpha_{i j, k}^{(l)} H_{j}^{(l)} W^{(l)}\right)}$\newline
Matrix multiplication\newline
$\mathbf{\widehat y} = \textit{x} \odot \textit{W}$ \newline
Compute cross entropy \newline
$\mathbf{\mathcal{L}=\sum_{c=1}^{C} y^{c} \log \left(\sigma\left(\hat{y}^{c}\right)\right)+\left(1-y^{c}\right) \log \left(1-\sigma\left(\hat{y}^{c}\right)\right)}$ \newline
loss = reduce ( cross entropy ) \newline
Update the parameters based on the loss using back propagation \newline
$\mathbf{\theta_{t+1}=\theta_{t}-\frac{\eta}{\sqrt{\hat{v}_{t}}+\epsilon} \hat{m}_{t}}$
}
}
\caption{Algorithm}
\end{algorithm}
\end{document}
How can I get numbering like this?
Edit: here is the algorithm:
\documentclass[17pt]{article}
\title{AlgorithmTemplate}
\usepackage{fullpage}
\usepackage{times}
\usepackage{fancyhdr,graphicx,amsmath,amssymb}
\usepackage[ruled,vlined,linesnumbered]{algorithm2e}
\input{pythonlisting}
\begin{document}
\SetAlgoLined
\begin{algorithm}[H]
\KwIn{Multi-label dataset : $\left(x^{(n)}, \mathbf{y}^{(n)}\right), n=1,2, \dots, N$ ; \newline A zero matrix $A \in \mathbb{R}{^{n\times n}}$ ; \newline A numpy matrix $H^l \in \mathbb{R}{^{n\times d}}$}
\KwOut{Predicted label set $\mathbf{\widehat y}$}
\emph{Feature vector from bi-directional LSTM}\;
\For{each epoch}{
\emph{Feature vector from bi-directional LSTM}\;
\For{each batch}{\label{forins}
$1)$ Compute \textit{x} using equation (1)
$\mathbf{\textit{x} = f_{rnn}(f_{word2vec}(I;\theta_{word2vec});\theta_{rnn})\in\mathbb{R}^{D}}$
Compute forward pass for lstm
$\overrightarrow{\boldsymbol{h}}_{i}=\overrightarrow{\mathrm{LSTM}}\left(\overrightarrow{\boldsymbol{h}}_{i-1}, \boldsymbol{x}_{i}\right)$
Compute backward pass for lstm
$\overleftarrow{\boldsymbol{h}}_{i}=\overleftarrow{\mathrm{LSTM}}\left(\overleftarrow{\boldsymbol{h}}_{i+1}, \boldsymbol{x}_{i}\right)$
Concatenate the hidden states from both directions
$\boldsymbol{h}_{i}=\left[\overrightarrow{\boldsymbol{h}_{i}} ; \overleftarrow{\boldsymbol{h}}_{i}\right]$
$2)$ Compute \textit{W} using equation (10)
$\mathbf{W = f_{network} (I;J;\theta_{network})}$
Compute multi-head attention using equation (9)
$\mathbf{H_{i}^{(l+1)}=\sigma\left(\frac{1}{K} \sum_{k=1}^{K} \sum_{j \in N(i)} \alpha_{i j, k}^{(l)} H_{j}^{(l)} W^{(l)}\right)}$
Matrix multiplication
$\mathbf{\widehat y} = \textit{x} \odot \textit{W}$
Compute cross entropy
$\begin{aligned}
\mathcal{L}={}&\sum_{c=1}^{C} y^{c} \log \left(\sigma\left(\hat{y}^{c}\right)\right) \\
&+\left(1-y^{c}\right) \log \left(1-\sigma\left(\hat{y}^{c}\right)\right)
\end{aligned}$
loss = reduce ( cross entropy )
Update the parameters based on the loss using back propagation
$\mathbf{\theta_{t+1}=\theta_{t}-\frac{\eta}{\sqrt{\hat{v}_{t}}+\epsilon} \hat{m}_{t}}$
}}
\caption{Algorithm}
\end{algorithm}
\end{document}
Answer 1
You need to add the linesnumbered option when loading the algorithm2e package, or call \LinesNumbered after loading it:
\documentclass{article}
\usepackage[ruled,vlined,linesnumbered]{algorithm2e}
\begin{document}
\begin{algorithm}[H]
\SetAlgoLined
\KwData{this text}
\KwResult{how to write algorithm with \LaTeX2e }
initialization\;
\While{not at end of this document}{
read current\;
\eIf{understand}{
go to next section\;
current section becomes this one\;
}{
go back to the beginning of current section\;
}
}
\caption{How to write algorithms}
\end{algorithm}
\end{document}
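Alternatively, if you would rather not change the package options, the \LinesNumbered command mentioned above does the same thing when called after loading the package. A minimal sketch of that variant (the same example algorithm, only the preamble differs):

\documentclass{article}
\usepackage[ruled,vlined]{algorithm2e}% no linesnumbered option here
\LinesNumbered% number the lines of all following algorithms
\begin{document}
\begin{algorithm}[H]
\KwData{this text}
\KwResult{how to write algorithm with \LaTeX2e }
initialization\;
\While{not at end of this document}{
read current\;
}
\caption{How to write algorithms}
\end{algorithm}
\end{document}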
If you would like to change the style to include a colon : suffix after the line numbers, then you can also add
\SetNlSty{textbf}{}{:}% Add colon after line number
\IncMargin{.2em}% Push algorithm to the right (allowing for larger line numbering)
to your preamble (after loading algorithm2e).
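For reference, a minimal sketch that puts both customizations together with the linesnumbered option (the algorithm body is just a shortened copy of the example above):

\documentclass{article}
\usepackage[ruled,vlined,linesnumbered]{algorithm2e}
\SetNlSty{textbf}{}{:}% bold line numbers followed by a colon
\IncMargin{.2em}% a little extra margin for the wider line numbers
\begin{document}
\begin{algorithm}[H]
initialization\;
\While{not at end of this document}{
read current\;
}
\caption{How to write algorithms}
\end{algorithm}
\end{document}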