Mixing two columns and creating four-column references on the same page

The article is typeset in two columns, and the references section needs to be set in four columns on the same page. A Google search turned up cuted, which can mix one-column and two-column modes, but I need four-column references on the same page. Please find the MWE below:

\documentclass[twocolumn]{article}
\usepackage{amsmath}
\usepackage{cuted}%% Mixing onecolumn and twocolumn modes
\begin{document}
\title{Traditional computers and their limitations}
\author{K. E. Kelbal}
\maketitle
\begin{abstract}
High performance computing is needed for evaluating computationally ``bulky'' problems. For years, computers have progressed in architecture and hardware
to exploit parallelism, but the algorithmic and software aspects have yet to be fully explored in many fields. There are many aspects of high performance computing that we need to understand before developing algorithms. Linear algebra lies at the heart of most calculations in scientific computing, so there is a need for computationally ``rich'' algorithms for linear algebra. In this report we present some ways to exploit parallelism to achieve high performance for linear algebra algorithms, along with an overview of parallel processing and architecture and a performance evaluation analysis.
\end{abstract} 
\section{Introduction}
In the past decade, the world has experienced one of the most exciting periods in computer development. Computer performance improvements have been dramatic, a trend that promises to continue for the next several years. One reason for the improved performance is the rapid advancement in microprocessor technology. Microprocessors have become smaller, denser and more powerful. The result is that microprocessor-based supercomputing is rapidly becoming the technology of preference in attacking some of the most important problems of science and engineering. To exploit microprocessor technology, vendors have developed highly parallel computers \cite{id:0001,id:0002}.

Highly parallel systems offer the enormous computational power needed for solving some of the most challenging computational problems, such as
circuit simulation incorporating various effects. Unfortunately, software development has not kept pace with hardware advances. New programming paradigms,
languages, scheduling and partitioning techniques, and algorithms are needed to fully exploit the power of these highly parallel machines.

A major new trend for scientific problem solving is distributed computing. In distributed computing \cite{id:0003}, computers connected by a network
are used collectively to solve a single larger problem. Many scientists are discovering that their computational requirements are best served not by a single,
monolithic computer but by a variety of distributed computing resources, linked by high speed networks. By parallel computing, we mean a set of processes that
are able to work together to solve a computational problem. There are a few things that are worthwhile to point out. First, parallel processing and
the techniques that exploit it are now everywhere, from the personal computer to the fastest computer available. Second, parallel processing doesn't
necessarily imply high performance computing.

The traditional computer, or conventional approach to computer design, involves a single instruction stream. Instructions are processed sequentially and the
result is movement of data from memory to functional unit and back to memory. As demands for faster performance increased, modifications were made to
improve the design of the computers. It became evident that a number of factors were limiting potential speed: the switching speed of the devices, packaging
and interconnection delays, and compromises in the design to account for realistic tolerances of parameters in the timing of individual components. Even if a
dramatic improvement could be made in any of these areas, one factor still limits performance: the speed of light. Today's supercomputers have a cycle time
on the order of nanoseconds. One nanosecond translates into the time it takes light to move about a foot (in practice, the speed of pulses through the wiring
of a computer ranges from 0.3 to 0.9 feet per nanosecond). Faced by this fundamental limitation, computer designers have begun moving in the direction of
parallelism.

In this report we explore some of the issues involved in the use of high performance computing and parallel processing aspects. The organization
of the report is given below.

\section{Organization of the Report}

\begin{enumerate}
\item Chapter 2 describes the fundamentals of parallel processing and issues related to high performance computing \cite{id:0004,id:0005}.
\item Chapter 3 describes the techniques used to decrease overhead and improve performance \cite{id:0006}.
\item Chapter 4 describes the performance analysis for uniprocessor and parallel processors \cite{id:0007}.
\item Chapter 5 describes parallel algorithms for solving linear systems \cite{id:0008,id:0009,id:0010}.
\item Chapter 6 discusses future work.
\end{enumerate}

\begin{strip}
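%% Note: strip (from cuted) gives a full-width, single-column area
%% spanning both columns; it does not itself create four columns.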
\begin{thebibliography}{99}
\bibitem{id:0001} Abraham Silberschatz, Peter Baer Galvin, ``Operating System Concepts'', Addison-Wesley, Reading, Massachusetts, USA, 1998.
\bibitem{id:0002} John P. Hayes, ``Computer Architecture and Organization'', McGraw-Hill International Company, Singapore, 1988.
\bibitem{id:0003} ``PVM 3 User's Guide and Reference Manual'', edited by Al Geist, Oak Ridge National Laboratory, Engineering Physics and Mathematics Division,
Mathematical Sciences Section, Oak Ridge, Tennessee, USA, 1991.
\bibitem{id:0004} PVM's HTTP site, \texttt{http://www.epm.ornl.gov/pvm/}.
\bibitem{id:0005} Brian W. Kernighan, Dennis M. Ritchie, ``The C Programming Language (ANSI C Version)'', Prentice-Hall of India Pvt.\ Ltd., New Delhi, 1998.
\bibitem{id:0006} Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, ``Introduction to Algorithms'', MIT Press, Cambridge, MA, USA, 1990.
\bibitem{id:0007} Kenneth Hoffman, Ray Kunze, ``Linear Algebra'', Prentice-Hall of India Pvt.\ Ltd., New Delhi, 1997.
\bibitem{id:0008} G. H. Golub and C. F. Van Loan, ``Matrix Computations'', third edition, The Johns Hopkins University Press, Baltimore, 1996.
\bibitem{id:0009} David A. Patterson, John L. Hennessy, ``Computer Architecture: A Quantitative Approach'', Morgan Kaufmann Publishers Inc., San Mateo, California, USA, 1990.
\bibitem{id:0010} Jack Dongarra, Iain Duff, Danny Sorensen, and Henk van der Vorst, ``Numerical Linear Algebra for High-Performance Computers'', Society for Industrial and Applied Mathematics, Philadelphia, 1998.
\end{thebibliography}
\end{strip}
\end{document}

Note: at the moment we create the references section with \begin{figure*}, but that has to be placed by hand. I need automatic placement like cuted provides. Is that possible?
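
For reference, the manual workaround alluded to above looks roughly like this (a sketch only, not our actual code; the comments mark where the hand-split reference entries go, and the split points have to be rebalanced whenever the list changes):

\begin{figure*}[t]
\begin{minipage}[t]{0.235\textwidth}
% first quarter of the reference entries, split by hand
\end{minipage}\hfill
\begin{minipage}[t]{0.235\textwidth}
% second quarter
\end{minipage}\hfill
\begin{minipage}[t]{0.235\textwidth}
% third quarter
\end{minipage}\hfill
\begin{minipage}[t]{0.235\textwidth}
% fourth quarter
\end{minipage}
\end{figure*}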

Answer 1

This is possible only with multicols, though the margins then differ from those of your MWE:

\documentclass[onecolumn]{article}
\usepackage{amsmath}
\usepackage{multicol}
\begin{document}
\begin{multicols}{2}
\title{Traditional computers and their limitations}
\author{K. E. Kelbal}
\maketitle
\begin{abstract}
High performance computing is needed for evaluating computationally ``bulky'' problems. For years, computers have progressed in architecture and hardware
to exploit parallelism, but the algorithmic and software aspects have yet to be fully explored in many fields. There are many aspects of high performance computing that we need to understand before developing algorithms. Linear algebra lies at the heart of most calculations in scientific computing, so there is a need for computationally ``rich'' algorithms for linear algebra. In this report we present some ways to exploit parallelism to achieve high performance for linear algebra algorithms, along with an overview of parallel processing and architecture and a performance evaluation analysis.
\end{abstract} 
\section{Introduction}
In the past decade, the world has experienced one of the most exciting periods in computer development. Computer performance improvements have been dramatic, a trend that promises to continue for the next several years. One reason for the improved performance is the rapid advancement in microprocessor technology. Microprocessors have become smaller, denser and more powerful. The result is that microprocessor-based supercomputing is rapidly becoming the technology of preference in attacking some of the most important problems of science and engineering. To exploit microprocessor technology, vendors have developed highly parallel computers \cite{id:0001,id:0002}.

Highly parallel systems offer the enormous computational power needed for solving some of the most challenging computational problems, such as
circuit simulation incorporating various effects. Unfortunately, software development has not kept pace with hardware advances. New programming paradigms,
languages, scheduling and partitioning techniques, and algorithms are needed to fully exploit the power of these highly parallel machines.

A major new trend for scientific problem solving is distributed computing. In distributed computing \cite{id:0003}, computers connected by a network
are used collectively to solve a single larger problem. Many scientists are discovering that their computational requirements are best served not by a single,
monolithic computer but by a variety of distributed computing resources, linked by high speed networks. By parallel computing, we mean a set of processes that
are able to work together to solve a computational problem. There are a few things that are worthwhile to point out. First, parallel processing and
the techniques that exploit it are now everywhere, from the personal computer to the fastest computer available. Second, parallel processing doesn't
necessarily imply high performance computing.

The traditional computer, or conventional approach to computer design, involves a single instruction stream. Instructions are processed sequentially and the
result is movement of data from memory to functional unit and back to memory. As demands for faster performance increased, modifications were made to
improve the design of the computers. It became evident that a number of factors were limiting potential speed: the switching speed of the devices, packaging
and interconnection delays, and compromises in the design to account for realistic tolerances of parameters in the timing of individual components. Even if a
dramatic improvement could be made in any of these areas, one factor still limits performance: the speed of light. Today's supercomputers have a cycle time
on the order of nanoseconds. One nanosecond translates into the time it takes light to move about a foot (in practice, the speed of pulses through the wiring
of a computer ranges from 0.3 to 0.9 feet per nanosecond). Faced by this fundamental limitation, computer designers have begun moving in the direction of
parallelism.

In this report we explore some of the issues involved in the use of high performance computing and parallel processing aspects. The organization
of the report is given below.

\section{Organization of the Report}

\begin{enumerate}
\item Chapter 2 describes the fundamentals of parallel processing and issues related to high performance computing \cite{id:0004,id:0005}.
\item Chapter 3 describes the techniques used to decrease overhead and improve performance \cite{id:0006}.
\item Chapter 4 describes the performance analysis for uniprocessor and parallel processors \cite{id:0007}.
\item Chapter 5 describes parallel algorithms for solving linear systems \cite{id:0008,id:0009,id:0010}.
\item Chapter 6 discusses future work.
\end{enumerate}
\end{multicols}
\begin{multicols}{4}
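%% Four-column references; multicols balances the columns on the
%% final page automatically, so no manual splitting is needed.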
\begin{thebibliography}{99}
\bibitem{id:0001} Abraham Silberschatz, Peter Baer Galvin, ``Operating System Concepts'', Addison-Wesley, Reading, Massachusetts, USA, 1998.
\bibitem{id:0002} John P. Hayes, ``Computer Architecture and Organization'', McGraw-Hill International Company, Singapore, 1988.
\bibitem{id:0003} ``PVM 3 User's Guide and Reference Manual'', edited by Al Geist, Oak Ridge National Laboratory, Engineering Physics and Mathematics Division,
Mathematical Sciences Section, Oak Ridge, Tennessee, USA, 1991.
\bibitem{id:0004} PVM's HTTP site, \texttt{http://www.epm.ornl.gov/pvm/}.
\bibitem{id:0005} Brian W. Kernighan, Dennis M. Ritchie, ``The C Programming Language (ANSI C Version)'', Prentice-Hall of India Pvt.\ Ltd., New Delhi, 1998.
\bibitem{id:0006} Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, ``Introduction to Algorithms'', MIT Press, Cambridge, MA, USA, 1990.
\bibitem{id:0007} Kenneth Hoffman, Ray Kunze, ``Linear Algebra'', Prentice-Hall of India Pvt.\ Ltd., New Delhi, 1997.
\bibitem{id:0008} G. H. Golub and C. F. Van Loan, ``Matrix Computations'', third edition, The Johns Hopkins University Press, Baltimore, 1996.
\bibitem{id:0009} David A. Patterson, John L. Hennessy, ``Computer Architecture: A Quantitative Approach'', Morgan Kaufmann Publishers Inc., San Mateo, California, USA, 1990.
\bibitem{id:0010} Jack Dongarra, Iain Duff, Danny Sorensen, and Henk van der Vorst, ``Numerical Linear Algebra for High-Performance Computers'', Society for Industrial and Applied Mathematics, Philadelphia, 1998.
\end{thebibliography}
\end{multicols}
\end{document}
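
If the wider onecolumn margins are the issue, the original twocolumn page geometry can be approximated explicitly. A minimal preamble sketch, assuming the geometry package; the margin value below is illustrative only, so measure your twocolumn MWE and adjust to match:

\documentclass[onecolumn]{article}
\usepackage[margin=2cm]{geometry}% hypothetical margin; tune to match the twocolumn page
\usepackage{multicol}
\setlength{\columnsep}{10pt}% article's default column separation in twocolumn mode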
