Wrapfigure pushing text around

I have code similar to this:

\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\includegraphics[width=0.4\textwidth]{Data/Graphs/shot.JPG}
\caption{Testing Apparatus}
\label{apparatus}
\end{wrapfigure}

Then, on the following page, the text is pushed to the left to make room for the wrap figure, even though no figure appears there. How can I prevent this? Here is a full example:

\documentclass[12pt,a4paper]{article}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{wrapfig}
\pagestyle{empty}
\usepackage[top=1.0in, bottom=1.0in, left=1.0in, right=1.0in]{geometry}
\usepackage{indentfirst}
\author{Ashvin Nair}
\title{Camera as Distance Sensor}
\begin{document}
\begin{flushleft}
Camera as Distance Sensor\\
Ashvin Nair\\
Winchester High School\\
[email protected]
\end{flushleft}

\begin{huge}
    \begin{center}
        \textbf{Camera as Distance Sensor}
    \end{center}
\end{huge}

\section*{Introduction}
The camera is easily Botball's most versatile and useful sensor.  Color recognition enables many of the Botball tasks to be completed.  We decided that along with knowing \textit{what} things are, knowing \textit{where} those things are is just as important.  The distance sensors provided with the kit are easily distracted, sometimes unpredictable and limited, and generally hard to trust.  With the camera, objects can generally be tracked smoothly and predictably.

Our approach to using the camera to find distance to objects is deriving a statistical model based on controlled trials.  In the past, we have used simple algorithms to close in on objects by manipulating the speeds of the wheels based on their position on the camera.  However, we wanted to know the distance to objects, mainly to  constantly map objects while the robot is moving so that the robot always has the most accurate idea of where objects are located.  Additionally, fixed objects can be used for the robot to localize itself.

\begin{wrapfigure}{r}{0.5\textwidth}
\includegraphics[width=0.4\textwidth]{Data/Graphs/shot.JPG}
\caption{Testing Apparatus}
\label{apparatus}
\end{wrapfigure}

\section*{Process}
Trials were conducted on large graph paper.  The CBC was placed on one side and the height and angle of the camera were measured.  The paper was marked into 6-inch boxes, and in each individual trial, a pompom was placed on the corner of a box.  Multiple trials were strung together in one program which performed several trials in a pattern and emptied the data into a comma-separated-values file.

With this data, specific situations were used to obtain a model.  In practice, each time the camera is adjusted on the robot (height or angle is changed), a new model may be required.  Of course, the next step in analysis is to model the camera generally, incorporating height and angle.

\pagebreak

\section*{Model}
To analyze the data, we used the statistics package R \cite{stats}.  The model above is specifically for a camera height of 20 cm and an angle of elevation of $55^{\circ}$.  The original data showed an inverse relationship so the inverse of the distance values was taken and a linear model was obtained.  To use this model, one would  first get the value of a blob's bbox\_bottom from the camera.  Say that the value was 70.  First, plug in 70 for the value of p, then evaluate the expression and take the inverse:
\begin{align*}
\frac{1}{d} &= 0.01158 + 0.0004387p \\
\frac{1}{d} &= 0.01158 + 0.0004387(70) \\
\frac{1}{d} &= 0.04229 \\
d &= 23.6
\end{align*}

\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{Data/Graphs/dvpx0.png}
\caption{Model For Distance}
\label{model1}
\end{center}
\end{figure}

The model was taken in a straight line in front of the camera, but later data showed that the value of pixels of bbox\_bottom varied very little when the object was moved horizontally with respect to the camera.

\section*{Dealing with Variables}
Intuitively, as the camera is raised, the model is affected linearly.

\section*{Conclusion}
Although being able to use the camera to approximate distances to objects is infinitely useful, the usefulness is limited by the camera hardware and drivers.  Still, using this process, a lot more information can be gathered by the robot.

\begin{thebibliography}{9}

\bibitem{stats}
  R Development Core Team (2011). R: A language and environment for
  statistical computing. R Foundation for Statistical Computing,
  Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org/.

\end{thebibliography}

\end{document}

Answer 1

This happens because wrapfig does not cope well with page breaks and sectioning commands: your figure runs past the space remaining on the page. There is also some odd interaction with the starred \section* command (you can check this yourself: if you use the unstarred version for the Introduction section, the problem goes away). I am not aware of any well-crafted hack that prevents this, and the alternative (older) floatflt package does not handle it any better; in fact, the documentation recommends adjusting things manually.
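
To illustrate the quick check mentioned above with the asker's preamble left unchanged (illustrative only; the heading then gets a section number):

\section{Introduction}   % unstarred version: the phantom indentation on the next page disappears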

So if you need unnumbered sections, I suggest moving the wrapfigure environment up by one (or two) paragraphs. If (as is likely) there is still an annoying gap below the figure (caption), you can remove it with a hack:

\begin{wrapfigure}{r}{0.5\textwidth}
\includegraphics[width=0.4\textwidth]{this.JPG}
\caption{Testing Apparatus}
\label{apparatus}
\vspace{-1.5cm}
\end{wrapfigure}

Adjust the negative amount of the \vspace as needed.
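
For concreteness, here is a sketch of the relocation in the asker's document (assuming the figure fits beside the last Introduction paragraph; the exact paragraph to anchor it to, and the -1.5cm, may need tuning for the real image):

\section*{Introduction}
The camera is easily Botball's most versatile and useful sensor. ...

\begin{wrapfigure}{r}{0.5\textwidth}
\includegraphics[width=0.4\textwidth]{Data/Graphs/shot.JPG}
\caption{Testing Apparatus}
\label{apparatus}
\vspace{-1.5cm}% trim the gap left under the caption
\end{wrapfigure}

Our approach to using the camera to find distance to objects is deriving a statistical model ...

\section*{Process}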