First half of the page left blank when floating a figure

I have a document in which I would like to show a passage of text, then a figure, then another passage of text below it. I have tried using the [H] option and the float package. While this gives me what I want, it has the side effect of leaving the lower half of the page blank.

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer in ipsum cursus, ullamcorper tortor eu, congue ante. Mauris non nunc aliquam, sollicitudin purus ac, hendrerit nibh.
\begin{figure}[H]
  \centering
  \includegraphics[scale=0.3, trim = 0mm 0mm 0mm 95mm, clip]{architecture}
  \caption{Proposed system architecture}
\end{figure}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer in ipsum cursus, ullamcorper tortor eu, congue ante. Mauris non nunc aliquam, sollicitudin purus ac, hendrerit nibh.

Edit: If I remove the H, the figure is placed in the middle of my references.

Edit: complete document

% ********************************************************************
% *                  Format for IMVIP 2014  papers,                  *
% *                  based on the IMVIP 2001, 2006 template          *
% ********************************************************************

\documentclass[a4paper,11pt]{article}

\textwidth     14.5cm  %
\textheight    25.7cm  %
\oddsidemargin      +1.0cm  %
\evensidemargin  +1.0cm  %
\topmargin     -1.5cm  %

\usepackage{times}
\usepackage{graphicx}
\usepackage{float}

\pagestyle{empty}
\begin{document}

\title{\bf Wearable Computing to Aid in Activities of Daily Living}
\author{%
{\bf Author details suppressed}\\
Premier Image and Vision Laboratory\\
1504 Highway Street\\
ZZ-3595 Cybercity \\
[email protected]
          \\\\
}
\date{}
\maketitle
\thispagestyle{empty}
\begin{abstract}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer in ipsum cursus, ullamcorper tortor eu, congue ante. Mauris non nunc aliquam, sollicitudin purus ac, hendrerit nibh. Cras mollis fringilla condimentum. Aenean pellentesque, elit sit amet ultricies adipiscing, sem eros tempus diam, vel dictum metus libero in lorem. Etiam venenatis, nisi non elementum lobortis, ipsum augue eleifend ligula, blandit pulvinar tortor dolor quis mauris. Nunc vestibulum varius augue vitae gravida. Suspendisse potenti. In in tempus ligula. Aliquam vehicula turpis erat, non auctor libero volutpat ut. Maecenas pharetra luctus mauris. In lacinia ante nibh, eu lacinia elit rutrum ut. Nunc nec aliquet lectus. Fusce nisl justo, porttitor in arcu ac, scelerisque interdum massa. Fusce consectetur nunc at fermentum ultricies. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur semper, nunc nec sodales dapibus, orci lacus mattis tellus, ut vehicula dolor lectus vitae metus.
\end{abstract}
\textbf{Keywords:} Machine Vision, Wearable Computing, Smart Environments, Emerging Healthcare, Pervasive Computing.

\section{Introduction}
One of the most important achievements of the 20\textsuperscript{th} century has been the remarkable gain in life expectancy throughout the world; however, this has resulted in the oldest group of society (aged 85 plus) becoming its most rapidly expanding segment \cite{Christensen2009}. The burden placed on health care systems will continue to increase as this segment of society continues to expand \cite{Christensen2009}. One potential solution to ease the burden on health care systems is the use of an automated ``smart environment'', which would allow occupants who would normally need the assistance of carers to live at home with a larger degree of independence. A smart environment can be defined as one that is ``able to acquire and apply knowledge about the environment and its inhabitants in order to improve their experience in that environment'' \cite{Cook2007}. It is an example of ubiquitous computing, which represents the idea of ``computing everywhere'', in other words, making computing and communication effectively transparent \cite{Weiser1991}.

Wearable technology offers new opportunities within pervasive computing, allowing data to be continuously collected from a user and his/her immediate environment. Such a solution is particularly useful to support intelligent applications within smart environments, where contextual information is core to success. The current paper proposes a solution to facilitate indoor localisation through the use of a single ``always on'' wearable camera. Location is determined using machine vision techniques that identify ``key'' objects within an environment and cross-reference these against a knowledge base that indicates these objects' room placement within the environment. It is hypothesised that using a single wearable camera to determine user location will offer low impact in terms of equipment installation when compared with fixed vision or dense sensing based technologies, while offering the potential to ``follow'' a user within an environment and provide enhanced contextual information based on location information.

The current work also aims to address one of the main challenges faced within smart environments, namely the heterogeneous nature of the data. Each device stores data in a different format, which can create difficulties when data is being exchanged and processed, as well as limiting the opportunity for data to be reused and compared \cite{McDonald2013}. This challenge is further compounded as there is no single common standard in use \cite{McDonald2013}. HomeML is one such potential standard: an XML-based open format for the exchange of data generated within a smart environment. HomeML was originally proposed as a means of solving the problems caused by the heterogeneous nature of data generated within a smart environment \cite{McDonald2013}.

\section{Related Work}
G\'{o}mez-Romero \textit{et al.} developed a system that used multiple fixed cameras placed within a smart environment, allowing them to detect objects, including people, in the cameras' field of view \cite{Gomez-Romero2011a}. As the system was able to distinguish between people and objects, it also allowed simple scene recognition using simple rules such as \textit{touch} or \textit{enclosing} (determined by overlapping boundary boxes) to establish which object the occupant was interacting with \cite{Gomez-Romero2011a}. While this technique was effective, there were limitations with the approach. Due to the static nature of the cameras, occlusion was an issue; although they tried to overcome this problem by reassigning the size and position of the boundary box when size variation over 80\% was detected, this did not solve the problem of total occlusion \cite{Gomez-Romero2011a}. Multiple occupancy is also an issue with this system, as the cameras can only detect whether a person is present and cannot distinguish between multiple occupants. One final problem is that, due to the static nature of the cameras, multiple cameras are needed in each room to attempt to cover all angles, which still may not be possible, driving up the cost of retro-fitting the user's environment.

Kurze and Roselius proposed an open architecture and runtime environment for mobile augmented reality applications that would allow the monitoring of environmental information to provide context aware support \cite{Kurze2010}. They also provided an example system that consisted of wearable smart glasses along with a facial recognition application. However, their proposed architecture does not take account of other external sensors that may be placed within a smart environment.

Kang \textit{et al.} proposed an approach to identify and segment objects from scenes encountered in ADL. Their approach used bottom-up segmentation and extracted object candidates as groups of mutually consistent segments \cite{Hebert2011}. While this work could detect objects in the scene, it could not determine what activity the occupant was performing; this approach has, however, been built on by Pirsiavash and Ramanan in order to determine which ADL the occupant was performing \cite{Pirsiavash2012}. Pirsiavash and Ramanan were able to achieve a 77\% accuracy rate in determining the correct activity, with higher accuracy currently limited by genuine ambiguities in the data as well as difficulties in annotation (annotations consist of an action label, bounding box, identity, and human-object interaction), such as actions that involve interactions with the same object, or objects which are small and often occluded and so may not be fully annotated \cite{Pirsiavash2012}. While both these techniques could detect objects in a scene and determine ADL, they could not use this information to determine context or provide contextual information. \clearpage

\section{SERG Smart Environment}
To support smart environment research the SERG lab has a large scale intelligent environment (approximately 6,800 ft\textsuperscript{2}) to support the deployment and evaluation of connected health solutions. The lab consists of four dedicated smart labs (each 17 m\textsuperscript{2}) including a smart kitchen, living room, and meeting room to support research staff and postgraduate students. The labs are also fitted out with a series of sensors, such as TyneTec contact sensors fitted to doors, cupboards etc. as well as access to a range of wearable technology and high performance servers.

\section{Wearable Camera to Recognise Objects}
This research proposes a solution to facilitate indoor localisation through the use of a single ``always on'' wearable camera. Location is determined using machine vision techniques that identify ``key'' objects within an environment and cross-reference these against a knowledge base that indicates these objects' room placement within the environment. For example, if a cooker and fridge are detected then it can be assumed that the user is in the kitchen. This approach will employ ``off the shelf'' machine vision tools, more specifically an OpenCV Haar Feature-based Cascade Classifier for rapid object detection. This method involves training a classifier using a series of positive images (images of the object you wish to detect), which are subsequently compared with a set of negative images in order to ``train'' the algorithm to discriminate between environmental objects observed within a given video stream. In an ideal scenario the negative images would be identical to the positive images minus the object of focus. This method uses AdaBoost to combine many ``weak'' classifiers to form one ``strong'' classifier.

\section{System Architecture}

\begin{figure}[H]
  \centering
  \includegraphics[scale=0.3, trim = 0mm 0mm 0mm 95mm, clip]{architecture}
  \caption{Proposed system architecture}
\end{figure}

\section{Conclusion}
In summary, this research aims to develop a context aware application through the use of wearable technology. In doing so, the research will advance context awareness through improved location based services based on vision processing of environmental objects. An effective data storage and inferencing system will also be developed to enable sensor integration of video based data along with other environmental and biometric sensors. Future work will focus on collecting and analysing data from the SERG labs and extending homeML to accommodate video data. Consequently, the adoption of homeRuleML will be investigated as a method to manage rules through a multi-agent based system.

\bibliography{imvip2014}
\bibliographystyle{apalike}

\end{document}

Answer 1

You can use insbox, a set of generic macros: it provides an \InsertBoxC command which, at the insertion point, first ends the current line of text, then places the contents of its argument (centred), and finally continues the text. For the caption and label reference, you can use the \captionof command from the caption package. So in your example, this requires adding the following code:

\usepackage{caption}
\input{insbox}
...............
\begin{document}
................
\InsertBoxC{\includegraphics[scale=0.3, trim = 0mm 0mm 0mm 95mm, clip]{architecture}%
\captionof{figure}{Proposed system architecture}\label{mylabel}}%
................
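For completeness, here is a minimal compilable sketch of this approach (assuming the insbox macros are installed as insbox.tex, and using example-image, a placeholder graphic shipped with the mwe package, in place of architecture):

\documentclass{article}
\usepackage{graphicx}
\usepackage{caption}
\input{insbox}

\begin{document}
Text before the figure.
\InsertBoxC{\includegraphics[width=0.5\textwidth]{example-image}%
\captionof{figure}{Proposed system architecture}\label{fig:arch}}%
Text after the figure; Figure~\ref{fig:arch} can be referenced as usual.
\end{document}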

Answer 2

This is a very common problem for LaTeX users:

        ------------- <-- beginning of page
          .........
          .       .
          .       . <-- text 
          .........
          .........
          .       . <-- figure
        --.       .-- <-- end of page
          .........

If the figure does not fit (or does not satisfy one of the float placement constraints), it floats away, or other wondrous things happen.

If you must have the figure at a specific position, it is usually best to just use \includegraphics[width=0.8\textwidth]{path/to/figure} directly, i.e. without \begin{figure}..\end{figure}. Tweaking the width a little also helps.
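Applied to the document in the question, that suggestion looks something like this (a sketch; \captionof from the caption package is one way to keep a numbered, referenceable caption without the figure environment):

Text before the figure ...
\begin{center}
  \includegraphics[width=0.8\textwidth, trim = 0mm 0mm 0mm 95mm, clip]{architecture}
  \captionof{figure}{Proposed system architecture}
\end{center}
Text after the figure ...

This requires \usepackage{caption} in the preamble.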

The following settings have served me well on several occasions:

% Use Donald Arseneau's improved float parameters. 
% I am not too sure when this was first referenced
% once I find it, will provide a citation and or a link.
% 
\renewcommand{\topfraction}{.85}
\renewcommand{\bottomfraction}{.7} % .3 in kernel.
\renewcommand{\textfraction}{.15}
\renewcommand{\floatpagefraction}{.7}
\renewcommand{\dbltopfraction}{.66}
\renewcommand{\dblfloatpagefraction}{.66}
\setcounter{topnumber}{9}
\setcounter{bottomnumber}{9}
\setcounter{totalnumber}{20}
\setcounter{dbltopnumber}{9}
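With these parameters in the preamble, the figure can usually be left to LaTeX's normal float mechanism rather than pinned with [H], e.g. (a sketch using the question's figure):

\begin{figure}[!htb]
  \centering
  \includegraphics[scale=0.3, trim = 0mm 0mm 0mm 95mm, clip]{architecture}
  \caption{Proposed system architecture}
\end{figure}

The ! modifier asks LaTeX to relax its usual placement restrictions for this one float, which is far less disruptive than [H].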
