Which document class produces this classic formatting style?

I'm curious which document class produces the following style in the title and section areas. Here are some examples:

[image: first example]

In this example, the placement of the title together with the author name (and the difference in font size between the title and the author name) looks like it is positioned exactly by some document class. In addition, the section headings seem to use yet another font feature, something like \textsc.

[image: second example]
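
For reference, a minimal sketch of what I mean by \textsc (small capitals); this is my own toy example, not one of the screenshots:

\documentclass{article}

\begin{document}
Normal text versus \textsc{text in small capitals}.
\end{document}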

Answer 1

Definitely amsart. Unfortunately, it uses \usepackage{times}, as shown below.

[image]

I found this document, https://www.math.uci.edu/~chenlong/226/Ch6IterativeMethod.pdf, and copied its introduction. You can see that the output is exactly the same.

\documentclass[a4paper]{amsart}

\usepackage{times}
\usepackage{lipsum}

\begin{document}

\title{Classical iterative methods}
\author{Long Chen}

\maketitle

In this notes we discuss classic iterative methods on solving 
the linear operator equation
\begin{equation}\label{operator-eq}
Au = f,
\end{equation}
posed on a finite dimensional Hilbert space $\mathbb{V}\cong\mathbb{R}^N$ 
equipped with an inner product $(\cdot,\cdot)$. Here 
$A : \mathbb{V}\mapsto \mathbb{V}$ is an \emph{symmetric and positive definite (SPD)} 
operator, $f\in\mathbb{V}$ is given, and we are looking for $u\in\mathbb{V}$ such 
that~\eqref{operator-eq} holds.

The direct method to solve~\eqref{operator-eq} is to form $A^{-1}$ or the action 
of $A^{-1}f$. For example, the Gaussian elimination or LU factorization still 
remains the most commonly used methods in practice. It is a black-box as it can 
be applied to any problem in principle. For general dense matrices, a matrix-vector 
product requires $\mathcal{O}(N^2)$ operations and the straightforward
implementation of Gauss elimination is $\mathcal{O}(N^3)$, which is prohibitive 
large when $N$ is large. The state-of-the-art of direct solvers can achieve the 
nearly linear complexity for certain structured sparse matrices; see for example~[2].

When $A$ is sparse, the nonzero entries of $A$ is $\mathcal{O}(N)$ and the 
basic matrix-vector product reduces to $\mathcal{O}(N)$ operation. Then it is 
desirable to design optimally scaled solvers, say, with $\mathcal{O}(N)$ or 
$\mathcal{O}(N\log N)$ computational cost. Namely computing $A^{-1}f$ is just 
a few number of $Ax$. To this end, we first introduce a basic residual-correction 
iterative method and study classic iterative methods.

To see the huge saving of an $\mathcal{O}(N)$ algorithm comparing with 
an $\mathcal{O}(N^2)$ one when $N$ is large, let us do the following calculation. 
Suppose $N = 10^6$ and a standard PC can do the summation of $10^6$ numbers 
in $1$ minute. Then an $\mathcal{O}(N)$ algorithm will finish in few
minutes while an $\mathcal{O}(N^2)$ algorithm will take nearly two years 
($10^6$ minutes $\approx$ 694 days).

\section{Residual-Correction Method}

\lipsum[1-30]

\end{document}
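
As an aside, the centered, uppercase title and the centered small-caps \section headings come from the amsart class itself, not from the times package; a minimal sketch without \usepackage{times} (my own reduction, not part of the linked document) shows the same layout in Computer Modern:

\documentclass[a4paper]{amsart}

\begin{document}

\title{Classical iterative methods}
\author{Long Chen}

\maketitle

\section{Residual-Correction Method}

Some body text, just to show the title and heading styles.

\end{document}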

I added the lipsum text only to produce a few more pages and show the running headers.

[image: compiled output]

[image: compiled output]
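
As a side note (my own addition, not in the linked document): amsart builds the running heads from \title and \author, and \title accepts an optional argument for a shortened running title, e.g.

% the optional argument sets the text used in the running head
\title[Iterative methods]{Classical iterative methods}
\author{Long Chen}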

Now let me make a few changes, using a Times clone that also covers the math, to get better typesetting. Spot the differences (and there are probably further improvements to make).

\documentclass[a4paper]{amsart}

\usepackage{newtx}
\usepackage[cal=cm,bb=ams]{mathalpha}

\usepackage{lipsum}

\begin{document}

\title{Classical iterative methods}
\author{Long Chen}

\maketitle

In this notes we discuss classic iterative methods on solving 
the linear operator equation
\begin{equation}\label{operator-eq}
Au = f,
\end{equation}
posed on a finite dimensional Hilbert space $\mathbb{V}\cong\mathbb{R}^N$ 
equipped with an inner product $(\cdot,\cdot)$. Here 
$A\colon \mathbb{V}\to \mathbb{V}$ is a \emph{symmetric and positive definite} (SPD) 
operator, $f\in\mathbb{V}$ is given, and we are looking for $u\in\mathbb{V}$ such 
that~\eqref{operator-eq} holds.

The direct method to solve~\eqref{operator-eq} is to form $A^{-1}$ or the action 
of $A^{-1}f$. For example, the Gaussian elimination or LU factorization still 
remains the most commonly used methods in practice. It is a black-box as it can 
be applied to any problem in principle. For general dense matrices, a matrix-vector 
product requires $\mathcal{O}(N^2)$ operations and the straightforward
implementation of Gauss elimination is $\mathcal{O}(N^3)$, which is prohibitive 
large when $N$ is large. The state-of-the-art of direct solvers can achieve the 
nearly linear complexity for certain structured sparse matrices; see for example~[2].

When $A$ is sparse, the nonzero entries of $A$ is $\mathcal{O}(N)$ and the 
basic matrix-vector product reduces to $\mathcal{O}(N)$ operation. Then it is 
desirable to design optimally scaled solvers, say, with $\mathcal{O}(N)$ or 
$\mathcal{O}(N\log N)$ computational cost. Namely computing $A^{-1}f$ is just 
a few number of $Ax$. To this end, we first introduce a basic residual-correction 
iterative method and study classic iterative methods.

To see the huge saving of an $\mathcal{O}(N)$ algorithm comparing with 
an $\mathcal{O}(N^2)$ one when $N$ is large, let us do the following calculation. 
Suppose $N = 10^6$ and a standard PC can do the summation of $10^6$ numbers 
in $1$ minute. Then an $\mathcal{O}(N)$ algorithm will finish in few
minutes while an $\mathcal{O}(N^2)$ algorithm will take nearly two years 
($10^6\,\text{minutes}\approx 694\,\text{days}$).

\section{Residual-Correction Method}

\lipsum[1-30]

\end{document}

[image: compiled output with newtx]
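
A note on the package choices above (my own summary): \usepackage{newtx} loads the Times-like text and math fonts of the newtx bundle in one go (newer releases provide this combined newtx.sty), and \usepackage[cal=cm,bb=ams]{mathalpha} puts back the Computer Modern calligraphic and the AMS blackboard-bold alphabets, so \mathcal and \mathbb look like they do in the linked PDF. With older newtx releases the two-package form should work as well:

% classic two-package interface of the newtx bundle
\usepackage{newtxtext}
\usepackage{newtxmath}
\usepackage[cal=cm,bb=ams]{mathalpha}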
