How to list only my own articles in the bibliography section

Using BiB(La)TeX, I am trying to produce a document/web page like the following. That is,

  1. I want BiB(La)TeX to print a list of my articles according to what I enter in the LaTeX source for the next part, namely
  2. a list of other people's articles, given in the source as a list of BiB(La)TeX keys and rendered in a format similar to how they would appear in a reference list; each such article is preceded by the articles of mine that it cites, which I also enter manually in the source as a list of BibTeX keys.

Specifically, I want the system to keep track of the articles of mine that I \cite in the second part, but not of other people's articles, and to put them in the first part. I have the following example that "does not work": it lists in the first part all the articles cited in the second part:

\documentclass{article}

\usepackage{bibentry}
\title{list of citation response}
\author{John Doe}
\begin{document}
\renewcommand{\refname}{List of Cited Literature Items}
\bibliographystyle{plain}
\bibliography{bib}
\nobibliography{bib}
\section*{In Recent articles}
\begin{description}
\item[Citing \cite{myarticle}]
\bibentry{myadvisorsarticle}
\end{description}
\end{document}

(For someone familiar with BiBLaTeX this probably looks like an FAQ or a very simple task, but I do not know the whole BiBLaTeX manual. I apologize for a possible duplicate.)

Answer 1

You can use one of the approaches (no backref needed) described at

https://www.overleaf.com/learn/latex/Questions/Creating_multiple_bibliographies_in_the_same_document#Bibliographies_for_different_categories

Create two bibliography categories: one for your own papers and one for everything else.

Then you can print the bibliography of your own papers in the normal way.

To print the bibliography of all the other papers, since you want to add extra information about how they cite you, you cannot, as has already been pointed out, simply use the usual bibliography commands.

Instead, you can cite them manually with \fullcite: \cite{me1} \cite{me2} \textbf{in} \fullcite{g1}

A drawback of this approach is that if you cite someone in the text but forget to include them in the manual list, it will not "catch" that. As a workaround, you can create a second category, call it manualcite, and write a wrapper that adds papers to it. Then, at the end, you can print a final bibliography that "catches" anything that was not added to either category (as a check; once you are happy that everything is covered, you can remove this line).

Another issue is that numbers are assigned to all the cited references, not only to yours. But maybe it will give you some ideas. (A possible mitigation is sketched after the example below.)

\begin{filecontents}{shortbib.bib}
@misc{me1,
author={me},
title={my first paper},
howpublished={online}
}
@misc{me2,
author={me},
title={my second paper}
}
@misc{me3,
author={me},
title={my third paper}
}
@misc{jd1,
author={john doe},
title={another paper}
}
@misc{jd2,
author={jane doe},
title={a different paper}
}
@misc{jd3,
author={juliet doe},
title={yet another paper}
}
\end{filecontents}

\documentclass{article}
\usepackage[style=ext-numeric, articlein=false]{biblatex}
\addbibresource{shortbib.bib}

% one category for my own papers, one for papers added to the manual list
\DeclareBibliographyCategory{me}
\DeclareBibliographyCategory{manualcite}
% \citeme: cite one of my papers and record it in the 'me' category
\newcommand{\citeme}[1]{\cite{#1}\addtocategory{me}{#1}}
% \longcite: record the paper in 'manualcite' and print a full citation
\newcommand{\longcite}[1]{\addtocategory{manualcite}{#1}\fullcite{#1}}

\begin{document}
% some text, with citations
I wrote a paper \citeme{me1}. And another \citeme{me2}. Some people
\cite{jd1} \cite{jd2} liked it. I wrote a third paper \citeme{me3} and
that was used by \cite{jd3}.

\hrule

\underline{Bibliography}
%my papers
 \printbibliography[category={me}, title={My papers}]
 % other papers

 \underline{Papers citing me}
 
\begin{itemize}
  \item  \citeme{me1} \citeme{me2}, \textbf{in} \longcite{jd1}
  \item \citeme{me1} \citeme{me3}, \textbf{in} \longcite{jd2}
\end{itemize}

% the check line -- did we forget anybody? 
\printbibliography[notcategory={me}, notcategory={manualcite},  title={Other papers}]
\end{document}
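
As a side note that is not part of the example above: the numbering caveat mentioned earlier (labels are assigned to all cited references, not only to your own papers) can sometimes be mitigated with biblatex's defernumbers option, which assigns the numeric labels only when the entries are actually printed. A minimal, untested sketch of the lines that would change:

\usepackage[style=ext-numeric, articlein=false, defernumbers=true]{biblatex}
% ...
% restart the numbering at [1] for this sub-bibliography
\printbibliography[category={me}, title={My papers}, resetnumbers=true]

Keep in mind that this usually needs an extra compilation run, and that restarting the numbers can produce the same label in two sub-bibliographies, so whether it helps depends on the layout you are after.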


Answer 2

Here is some analysis and a blueprint for a solution:

  • The standard LaTeX you have
  • A generic data model
  • The non-standard LaTeX you want
  • An implementation study using Access
  • Closing remarks
  • P.S.: how MairAw's solution follows this conceptual blueprint

I assume you are generating a .pdf from LaTeX here, although, with some limitations, you could also generate html from the .tex. This detail, however, neither touches nor solves the key problem.

The standard LaTeX you have

If you translate your code to biblatex, you immediately get the following:

  • a literature database
  • the \cite feature
  • printing that bibliography

The result would be the upper entries on the web page you linked to:

(figure: bib_own)

A generic data model

Now let us look at the generic data model that underlies the data entry and the bibliography generation:

  • you have a class of cited documents
  • with the relevant attributes
  • which can provide functions()
  • which can be indexed by some (id) (this already points towards an implementation, but it helps to think of an index being assigned to the instances = entries of this class; a small BibTeX sketch follows below the figure)

(figure: dm1)
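
To make this concrete in BibTeX/biblatex terms (my illustration, not part of the original answer): an instance of the class is simply a database entry, the citation key plays the role of the (id), and the fields are the attributes. Reusing an entry from the first answer:

@misc{me1,                          % (id) = citation key
  author       = {me},              % attribute
  title        = {my first paper},  % attribute
  howpublished = {online}           % attribute
}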

For example, if you use a citation manager such as Zotero, you can enter all kinds of literature:

(figure: Zotero)

If you make a selection, e.g. for an author, Zotero offers an exportBiblatex() function triggered by a menu action that creates, for example, extract.bib; by creating such a file by hand, you can provide this functionality yourself:

@online{wikipediaCarlWilhelmNaegeli2019,
    title = {Carl Wilhelm von Nägeli},
    url = {https://de.wikipedia.org/wiki/Carl_Wilhelm_von_N%C3%A4geli},
    abstract = {Biographische Beschreibung des Botanikers},
    author = {Wikipedia},
    date = {2019-06-22},
}

@online{wikipediaOligodynamie2020,
    title = {Oligodynamie},
    url = {https://de.wikipedia.org/wiki/Oligodynamie},
    abstract = {Der Begriff der Oligodynamie geht auf den Schweizer Botaniker Carl Wilhelm von Nägeli zurück und beschreibt eine schädigende Wirkung von Metall-Kationen (positiv elektrisch geladene Metallionen) auf lebende Zellen.},
    author = {Wikipedia},
    date = {2020-01-04},
}

@online{Wiki2023,
    title = {Digital Engineering},
    url = {https://de.wikipedia.org/wiki/Digital_Engineering},
    abstract = {Teildisziplinen unter Digital Engineering},
    author = {Wikipedia},
    date = {2023},
}

@online{Wiki,
    title = {Attention (machine learning) - Wikipedia},
    url = {https://en.wikipedia.org/wiki/Attention_(machine_learning)},
    abstract = {In artificial neural networks, attention is a technique that is meant to mimic cognitive attention. This effect enhances some parts of the input data while diminishing other parts—the motivation being that the network should devote more focus to the important parts of the data, even though they may be a small portion of an image or sentence. Learning which part of the data is more important than another depends on the context, and this is trained by gradient descent.

Attention-like mechanisms were introduced in the 1990s under names like multiplicative modules, sigma pi units[1], and hyper-networks.[2] Its flexibility comes from its role as "soft weights" that can change during runtime, in contrast to standard weights that must remain fixed at runtime. Uses of attention include memory in fast weight controllers,[3] neural Turing machines, reasoning tasks in differentiable neural computers,[4] language processing in transformers, and {LSTMs}, and multi-sensory data processing (sound, images, video, and text) in perceivers. [5][6][7][8]},
    author = {Wikipedia},
    urldate = {2023-06-15},
    file = {Attention (machine learning) - Wikipedia:C\:\\Users\\indernet\\Zotero\\storage\\P9NL9HA6\\Attention_(machine_learning).html:text/html},
}

@online{Wikia,
    title = {Word embedding - Wikipedia},
    url = {https://en.wikipedia.org/wiki/Word_embedding},
    abstract = {In natural language processing ({NLP}), a word embedding is a representation of a word. The embedding is used in text analysis. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words that are closer in the vector space are expected to be similar in meaning.[1] Word embeddings can be obtained using language modeling and feature learning techniques, where words or phrases from the vocabulary are mapped to vectors of real numbers.

Methods to generate this mapping include neural networks,[2] dimensionality reduction on the word co-occurrence matrix,[3][4][5] probabilistic models,[6] explainable knowledge base method,[7] and explicit representation in terms of the context in which words appear.[8]

Word and phrase embeddings, when used as the underlying input representation, have been shown to boost the performance in {NLP} tasks such as syntactic parsing[9] and sentiment analysis.[10]},
    author = {Wikipedia},
    urldate = {2023-06-15},
    file = {Word embedding - Wikipedia:C\:\\Users\\indernet\\Zotero\\storage\\WB4GBEUI\\Word_embedding.html:text/html},
}

@online{Wikib,
    title = {Transformer (machine learning model) - Wikipedia},
    url = {https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)},
    abstract = {A transformer is a deep learning model. It is distinguished by its adoption of self-attention, differentially weighting the significance of each part of the input (which includes the recursive output) data. It is used primarily in the fields of natural language processing ({NLP})[1] and computer vision ({CV}).[2]

Like recurrent neural networks ({RNNs}), transformers are designed to process sequential input data, such as natural language, with applications towards tasks such as translation and text summarization. However, unlike {RNNs}, transformers process the entire input all at once. The attention mechanism provides context for any position in the input sequence. For example, if the input data is a natural language sentence, the transformer does not have to process one word at a time. This allows for more parallelization than {RNNs} and therefore reduces training times.[1]

Transformers were introduced in 2017 by a team at Google Brain[1] and are increasingly becoming the model of choice for {NLP} problems,[3] replacing {RNN} models such as long short-term memory ({LSTM}).[4] Compared to {RNN} models, transformers are more amenable to parallelization, allowing training on larger datasets. This led to the development of pretrained systems such as {BERT} (Bidirectional Encoder Representations from Transformers) and the original {GPT} (generative pre-trained transformer), which were trained with large language datasets, such as the Wikipedia Corpus and Common Crawl, and can be fine-tuned for specific tasks.[5][6]},
    author = {Wikipedia},
    urldate = {2023-06-15},
    file = {Transformer (machine learning model) - Wikipedia:C\:\\Users\\indernet\\Zotero\\storage\\STVSNRG6\\Transformer_(machine_learning_model).html:text/html},
}

@online{Wikic,
    title = {Generative pre-trained transformer - Wikipedia},
    url = {https://en.wikipedia.org/wiki/Generative_pre-trained_transformer},
    abstract = {Generative pre-trained transformers ({GPT}) are a type of large language model ({LLM})[1][2][3] and a prominent framework for generative artificial intelligence.[4][5] The first {GPT} was introduced in 2018 by the American artificial intelligence ({AI}) company {OpenAI}.[6] {GPT} models are artificial neural networks that are based on the transformer architecture, pre-trained on large data sets of unlabelled text, and able to generate novel human-like content.[2][3] As of 2023, most {LLMs} have these characteristics[7] and are sometimes referred to broadly as {GPTs}.[8]

{OpenAI} has released very influential {GPT} foundation models that have been sequentially numbered, to comprise its "{GPT}-n" series.[9] Each of these was significantly more capable than the previous, due to increased size (number of trainable parameters) and training. The most recent of these, {GPT}-4, was released in March 2023. Such models have been the basis for their more task-specific {GPT} systems, including models fine-tuned for instruction following—which in turn power the {ChatGPT} chatbot service.[1]

The term "{GPT}" is also used in the names and descriptions of such models developed by others. For example, other {GPT} foundation models include a series of models created by {EleutherAI},[10] and recently seven models created by Cerebras.[11] Also, companies in different industries have developed task-specific {GPTs} in their respective fields, such as Salesforce's "{EinsteinGPT}" (for {CRM})[12] and Bloomberg's "{BloombergGPT}" (for finance).[13]},
    author = {Wikipedia},
    urldate = {2023-06-15},
    file = {Generative pre-trained transformer - Wikipedia:C\:\\Users\\indernet\\Zotero\\storage\\WS6HLCT9\\Generative_pre-trained_transformer.html:text/html},
}

@online{Wikid,
    title = {Literaturverzeichnis – Wikipedia},
    url = {https://de.wikipedia.org/wiki/Literaturverzeichnis},
    abstract = {Ein Literaturverzeichnis ist eine unselbstständige Zusammenstellung von Literaturhinweisen in alphabetischer oder systematischer Form und damit eine spezielle Bibliografie. Es steht meist am Ende wissenschaftlicher Qualifizierungsarbeiten wie Diplom-, Magister-, Bachelor- und Master-, Staatsexamens- oder Doktorarbeiten sowie eines Aufsatzes in einer Fachzeitschrift und in Sachbüchern als Hilfsmittel für weitergehende Studien oder als Teil der Quellenangaben.

Bei den Quellenangaben wird manchmal unterschieden zwischen Zitaten aus Büchern und Zeitschriften oder weiteren Medien.},
    author = {Wikipedia},
    urldate = {2023-06-15},
    file = {Literaturverzeichnis – Wikipedia:C\:\\Users\\indernet\\Zotero\\storage\\ILEH9G3G\\Literaturverzeichnis.html:text/html},
}

@online{Wikie,
    title = {Wissenschaftliche Arbeit – Wikipedia},
    url = {https://de.wikipedia.org/wiki/Wissenschaftliche_Arbeit},
    abstract = {Eine wissenschaftliche Arbeit ist ein systematisch gegliederter Text, in dem ein oder mehrere Wissenschaftler das Ergebnis ihrer eigenständigen Forschung darstellen. Wissenschaftliche Arbeiten entstehen im Allgemeinen an Hochschulen oder anderen, auch privaten, Forschungseinrichtungen und werden von Studenten, Doktoranden, Professoren oder anderen Forschern verfasst. Dies ist jedoch kein zwingendes Merkmal. Vor wissenschaftlichen Konferenzen oder bei Sonderausgaben einer wissenschaftlichen Zeitschrift wird in einem call for papers zum Einreichen wissenschaftlicher Arbeiten aufgefordert.

Wissenschaftliches Arbeiten zielt auf die Schaffung neuen Wissens und eine wissenschaftliche Arbeit im Sinne dieses Lemmas ist eines von mehreren Formaten, in denen Ergebnisse wissenschaftlichen Arbeitens zur weiterführenden Forschung und Lehre dargestellt werden können. Andere Formate wären z. B. Forschungskolloquien oder Vorträge auf einer wissenschaftlichen Konferenz.},
    author = {Wikipedia},
    urldate = {2023-06-15},
    file = {Wissenschaftliche Arbeit – Wikipedia:C\:\\Users\\indernet\\Zotero\\storage\\A582ZELI\\Wissenschaftliche_Arbeit.html:text/html},
}

But now you need a different functionality which, as far as I know, is not available out of the box.

The non-standard LaTeX you want

(figure: bib_cross)

The cross-referencing part is the new bit, which you will probably have to implement yourself in some way. The advantage of the following data model is that:

  • it easily shows where you want to go
  • it can be implemented,
  • and even though it is object-oriented, you can implement it in languages that are not object-oriented.

So, to generate the entries above, you need to be able to make some kind of self-reference within the literature database:

(figure: dm2)

That is, in order to provide the multi-citation, you need to store and use this relation, with data integrity:

(figure: dm3)

For practical reasons, even though the left and the right class have at least the same attributes, it may be easier to provide them twice: as a citers-class and as an ownDoc-class, related to each other through a many-to-many multiRef-class.
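
One conceivable way to store this many-to-many relation (my sketch, not what either answer implements) is to record it directly in the .bib file through a custom field, hypothetically called cites here, that lists the keys of your own papers referenced by a citing paper. BibTeX and biblatex skip fields they do not know (possibly with a warning), so an external script would have to read this field and turn it into the desired LaTeX:

@misc{jd1,
  author = {john doe},
  title  = {another paper},
  cites  = {me1, me2}   % hypothetical custom field: keys of my papers cited in jd1
}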

The multiRef-class has to provide a new function katedraExport() which

  • performs a query (if the class is implemented as a database/table)
  • generates the LaTeX code, which is no big deal once you have access to the attributes; it renders as:

(figure: rendered output)

Zotero cannot do this, and Biblatex cannot do it right away, so you will have to invest some effort here.
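
For illustration only, and reusing the citation keys from the first answer rather than anything from the original text, the LaTeX that such a katedraExport()-style step might write to a file could be as simple as:

% possible generated snippet (an assumption), to be \input into the document
\begin{itemize}
  \item \cite{me1} \cite{me2}, \textbf{in} \fullcite{jd1}
  \item \cite{me1} \cite{me3}, \textbf{in} \fullcite{jd2}
\end{itemize}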

An implementation study using Access

Now that the target data model and its functions are clear, or at least a bit clearer, let us turn to the implementation.

There are many ways to do this, for example:

  • using .bib files, created by hand or otherwise, e.g. with Zotero, plus some scripts
  • replacing .bib/Zotero with a database and letting queries generate the LaTeX code
  • many more

So let us see how this could be done with Microsoft Access. Let us translate the data model concepts into an Access implementation:

  • a class becomes a table
  • a class instance (an object) is a row in that table
  • a relation between classes is mapped onto a new table

Structure and content of t_docs: (figures: t_docs structure, t_docs content)

Structure and content of t_multiRef: it is just a table of index pairs, the key information that you have to type in once by hand. (figures: t_multiRef structure, t_multiRef data)
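
As an illustration (mine, reusing the keys from the first answer), such a table of pairs might contain nothing more than:

citer_id   ownDoc_id
jd1        me1
jd1        me2
jd2        me1
jd2        me3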

Finally, we need to specify the relationship between the tables: (figure: relationships)

Now, something important happens here:

  • this program can relate a table to itself:
  • it simply duplicates the table
  • but then the query, which is exactly what we need, fails
  • therefore, in this approach, you need to define two tables for the documents,
  • t_citers (not done here) and t_ownDoc (not done here), see above
  • then you can fine-tune the queries (Abfragen) as needed and convert the returned values into the required LaTeX strings, which you save to a file

Closing remarks

I hope I was able to show you, and convince you, that what you want is, as far as I know, not provided off the shelf by LaTeX, biblatex, or bibtex.

I showed you how your target output can be expressed as a generic data model.

There are many ways to implement this, for example:

  • scripts, e.g. perl, latex3, python
  • relational databases such as Access, mySQL, etc.
  • a citation manager such as Zotero, plus some of the above
  • a citation manager advanced enough for this may or may not already exist

The Access example I started would provide the required code generator through a set of queries, once the literature is provided in two tables with the same structure. The queries implement the functions() indicated in the data model.

So this answer is a kind of generic blueprint that can be turned into concrete code in a great many ways.

P.S.: how MairAw's solution follows this conceptual blueprint

First, let us recall the self-referencing class

(figure: dm2, recalled)

which can always be expressed as a many-to-many relation: (figure: dm3, recalled)

Using the latter, the different parts of MairAw's solution can be related to this conceptual blueprint:

(figure: MairAw's solution, annotated) As you can see:

  • some global definitions are needed
  • citers and ownDoc are simply the same shortbib.bib
  • the multiRef class is implemented via the citation keys
  • the katedraExport() function is implemented by hand-written code
  • while your own papers are printed with the standard LaTeX \printbibliography
