How can I cite a bibliography entry in full in the body text?

I am using the natbib package.

Now I want to cite a bibliography entry in full in the running text, and the entry should also appear in the bibliography. For example:

Text text ...

Mr. ABCD (2012) Hello, World, this is the citation on page ...

Text text ...

..... bibliography section .....
Mr. bbb sss ..
Mr. ABCD (2012) HELLO WORLD, this is the citation on page ...

Answer 1

Use the \bibentry command from the bibentry package. Load it alongside natbib and put \nobibliography* in the preamble, so that the bibliographic data from \bibliography is available for in-text use while the regular bibliography is still printed:

\begin{filecontents}{mytestbib.bib}
@book{goossens93,
    author = "Frank Mittelbach and Michel Goossens  and Johannes Braams and David Carlisle  and Chris Rowley",
    title = "The {LaTeX} Companion",
    year = "1993",
    publisher = "Addison-Wesley",
    address = "Reading, Massachusetts"
}
\end{filecontents}
\documentclass{article}
\usepackage{filecontents}
\usepackage{natbib}
\usepackage{bibentry}
\nobibliography*

\begin{document}

A full in-text cite of \bibentry{goossens93}.

A regular citation of \cite{goossens93}.

\bibliographystyle{plainnat}
\bibliography{mytestbib}

\end{document}
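As a side note, \bibentry should also work inside a footnote, which is another common place for a full in-text reference. The lines below are only a minimal sketch assuming the same preamble as the example above (natbib, bibentry and \nobibliography* already set up); as usual, run latex, bibtex, latex, latex so the references resolve.

% Hedged sketch: body text reusing the preamble from the example above
Some claim in the running text.\footnote{For details see \bibentry{goossens93}.}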


Answer 2

If you use biblatex, you can simply use the \fullcite command. For example:

\fullcite{kumar_exploiting_2010}

which prints the full bibliography entry inline in the text.

The corresponding BibTeX entry:

@inproceedings{kumar_exploiting_2010,
    title = {{EXPLOITING} {N}-{GRAM} {IMPORTANCE} {AND} {ADDITIONAL} {KNOWEDGE} {BASED} {ON} {WIKIPEDIA} {FOR} {IMPROVEMENTS} {IN} {GAAC} {BASED} {DOCUMENT} {CLUSTERING}},
    url = {http://cogprints.org/7148/},
    abstract = {This paper provides a solution to the issue: “How can we use Wikipedia based concepts in document
clustering with lesser human involvement, accompanied by effective improvements in result?” In the
devised system, we propose a method to exploit the importance of N-grams in a document and use
Wikipedia based additional knowledge for GAAC based document clustering. The importance of N-grams
in a document depends on several features including, but not limited to: frequency, position of their
occurrence in a sentence and the position of the sentence in which they occur, in the document. First, we
introduce a new similarity measure, which takes the weighted N-gram importance into account, in the
calculation of similarity measure while performing document clustering. As a result, the chances of topical similarity in clustering are improved. Second, we use Wikipedia as an additional knowledge base both, to remove noisy entries from the extracted N-grams and to reduce the information gap between N-grams that are conceptually-related, which do not have a match owing to differences in writing scheme or strategies. Our experimental results on the publicly available text dataset clearly show that our devised system has a significant improvement in performance over bag-of-words based state-of-the-art systems in this area.},
    urldate = {2019-09-16},
    author = {Kumar, Mr Niraj and Vemula, Mr Venkata Vinay Babu and Srinathan, Dr Kannan and Varma, Dr Vasudeva},
    month = oct,
    year = {2010},
    file = {Kumar et al. - 2010 - EXPLOITING N-GRAM IMPORTANCE AND ADDITIONAL KNOWED.pdf:/Users/m/Zotero/storage/NJ88HWGE/Kumar et al. - 2010 - EXPLOITING N-GRAM IMPORTANCE AND ADDITIONAL KNOWED.pdf:application/pdf;Snapshot:/Users/m/Zotero/storage/H7EJNT4M/7148.html:text/html}
}
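For completeness, here is a minimal document sketch around this entry; the file name references.bib and the authoryear style are assumptions, not part of the original answer. Compile with latex, then biber, then latex again.

\documentclass{article}
% Assumes the entry above was saved as references.bib (hypothetical file name)
\usepackage[backend=biber, style=authoryear]{biblatex}
\addbibresource{references.bib}

\begin{document}

A full in-text reference: \fullcite{kumar_exploiting_2010}.

A regular citation: \parencite{kumar_exploiting_2010}.

\printbibliography

\end{document}

Here \fullcite prints the complete entry inline, while \printbibliography still lists it at the end of the document.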
