Unable to add author affiliations in a scientific journal with a specific LaTeX class

Good afternoon!

I am trying to add the authors' affiliations to my article (following the journal template). However, all my attempts have failed:

\documentclass[times]{iapress}
\usepackage{moreverb}
\usepackage[dvips,colorlinks,bookmarksopen,bookmarksnumbered,citecolor=red,urlcolor=red]{hyperref}

%%
%\usepackage{amsmath, amssymb}
\usepackage{pdflscape}
\usepackage{subfigure}

%%
\def\volumeyear{202x}
\def\volumenumber{x}
\def\volumemonth{Month}
\setcounter{page}{00}
\renewcommand{\baselinestretch}{1.01}
%%

\usepackage{colortbl}
\usepackage[margin=1cm]{caption}
\usepackage{multirow}
\usepackage{booktabs}
\usepackage{float}
%\usepackage{graphicx}
\graphicspath{{figures/}}

\usepackage{longtable}

%\usepackage{algorithm}

\usepackage{algorithmic}

\usepackage{pdfpages}
\usepackage[ ruled,vlined]{algorithm2e}
\usepackage{ifoddpage}
\usepackage{blindtext}
%\usepackage{authblk} 
\usepackage{listings}
\usepackage{xcolor}
\usepackage{ragged2e}
\usepackage{lipsum}

%% added for affiliations



%%
\makeatletter
\def\normaljustify{%
  \let\\\@centercr\rightskip\z@skip \leftskip\z@skip%
  \parfillskip=0pt plus 1fil}
\makeatother


%% attention here ! 
\renewcommand{\topfraction}{0.9}

\lstset{
basicstyle=\scriptsize\tt,
}
%%
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}

\lstdefinestyle{mystyle}{
    backgroundcolor=\color{backcolour},   
    commentstyle=\color{codegreen},
    keywordstyle=\color{magenta},
    numberstyle=\tiny\color{codegray},
    stringstyle=\color{codepurple},
    basicstyle=\ttfamily\footnotesize,
    breakatwhitespace=false,         
    breaklines=true,                 
    captionpos=b,                    
    keepspaces=true,                 
    numbers=left,                    
    numbersep=5pt,                  
    showspaces=false,                
    showstringspaces=false,
    showtabs=false,                  
    tabsize=2
}

%%



%\usepackage{pdflscape}
 \begin{filecontents}[overwrite]{references.bib}
    @book{yang2018optimization,
        title={Optimization Techniques and Applications with Examples},
        author={Yang, Xin-She},
        year={2018},
        publisher={John Wiley~\& Sons},
    }
 
 @book{kelley1999iterative,
    title={Iterative methods for optimization},
    author={Kelley, Carl T},
    year={1999},
    publisher={SIAM}
 }
 @book{cavazzuti2012optimization,
    title={Optimization methods: from theory to design scientific and technological aspects in mechanics},
    author={Cavazzuti, Marco},
    year={2012},
    publisher={Springer Science \& Business Media}
 }
 
 @book{grivet2012methodes,
    title={M{\'e}thodes num{\'e}riques appliqu{\'e}es pour le scientifique et l’ing{\'e}nieur (edition 2009): Edition 2013},
    author={Grivet, Jean-Philippe},
    year={2012},
    publisher={EDP sciences}
 }

@article{lemarechal2012cauchy,
    title={Cauchy and the gradient method},
    author={Lemar{\'e}chal, Claude},
    journal={Doc Math Extra},
    volume={251},
    pages={254},
    year={2012}
}
@article{cauchy1847methode,
    title={M{\'e}thode g{\'e}n{\'e}rale pour la r{\'e}solution des systemes d’{\'e}quations simultan{\'e}es},
    author={Cauchy, Augustin},
    journal={Comp. Rend. Sci. Paris},
    volume={25},
    number={1847},
    pages={536--538},
    year={1847}
}
@article{meza2010steepest,
    title={Steepest descent},
    author={Meza, Juan C},
    journal={Wiley Interdisciplinary Reviews: Computational Statistics},
    volume={2},
    number={6},
    pages={719--722},
    year={2010},
    publisher={Wiley Online Library}
}
@article{robbins1951stochastic,
    title={A stochastic approximation method},
    author={Robbins, Herbert and Monro, Sutton},
    journal={The annals of mathematical statistics},
    pages={400--407},
    year={1951},
    publisher={JSTOR}
}
@article{kiefer1952stochastic,
    title={Stochastic estimation of the maximum of a regression function},
    author={Kiefer, Jack and Wolfowitz, Jacob and others},
    journal={The Annals of Mathematical Statistics},
    volume={23},
    number={3},
    pages={462--466},
    year={1952},
    publisher={Institute of Mathematical Statistics}
}
@article{rumelhart1986learning,
    title={Learning representations by back-propagating errors},
    author={Rumelhart, David E and Hinton, Geoffrey E and Williams, Ronald J},
    journal={nature},
    volume={323},
    number={6088},
    pages={533--536},
    year={1986},
    publisher={Nature Publishing Group}
}
@article{qian1999momentum,
    title={On the momentum term in gradient descent learning algorithms},
    author={Qian, Ning},
    journal={Neural networks},
    volume={12},
    number={1},
    pages={145--151},
    year={1999},
    publisher={Elsevier}
}
@article{duchi2011adaptive,
    title={Adaptive subgradient methods for online learning and stochastic optimization.},
    author={Duchi, John and Hazan, Elad and Singer, Yoram},
    journal={Journal of machine learning research},
    volume={12},
    number={7},
    year={2011}
}
@article{tieleman2012lecture,
    title={Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude},
    author={Tieleman, Tijmen and Hinton, Geoffrey},
    journal={COURSERA: Neural networks for machine learning},
    volume={4},
    number={2},
    pages={26--31},
    year={2012}
}

@book{antoniou2007practical,
    title={Practical optimization: algorithms and engineering applications},
    author={Antoniou, Andreas and Lu, Wu-Sheng},
    year={2007},
    publisher={Springer Science \& Business Media}
}
@article{zeiler2012adadelta,
    title={Adadelta: an adaptive learning rate method},
    author={Zeiler, Matthew D},
    journal={arXiv preprint arXiv:1212.5701},
    year={2012}
}
@article{ruder2016overview,
    title={An overview of gradient descent optimization algorithms},
    author={Ruder, Sebastian},
    journal={arXiv preprint arXiv:1609.04747},
    year={2016}
}
@phdthesis{rakotoarivelo2018aide,
    title={Aide {\`a} la d{\'e}cision multi-crit{\`e}re pour la gestion des risques dans le domaine financier},
    author={Rakotoarivelo, Jean-Baptiste},
    year={2018}
}
@book{mu2016practical,
    title={Practical decision making: an introduction to the Analytic Hierarchy Process (AHP) using super decisions V2},
    author={Mu, Enrique and Pereyra-Rojas, Milagros},
    year={2016},
    publisher={Springer}
}
@article{molga2005test,
    title={Test functions for optimization needs},
    author={Molga, Marcin and Smutnicki, Czes{\l}aw},
    journal={Test functions for optimization needs},
    volume={101},
    pages={48},
    year={2005}
}
@article{andrei2008unconstrained,
    title={An unconstrained optimization test functions collection},
    author={Andrei, Neculai},
    journal={Adv. Model. Optim},
    volume={10},
    number={1},
    pages={147--161},
    year={2008},
    publisher={Citeseer}
}
@inproceedings{khirirat2017mini,
    title={Mini-batch gradient descent: Faster convergence under data sparsity},
    author={Khirirat, Sarit and Feyzmahdavian, Hamid Reza and Johansson, Mikael},
    booktitle={2017 IEEE 56th Annual Conference on Decision and Control (CDC)},
    pages={2880--2887},
    year={2017},
    organization={IEEE}
}
@article{barzilai1993measuring,
    title={Measuring rates of convergence of numerical algorithms},
    author={Barzilai, Jonathan and Dempster, Michael AH},
    journal={Journal of optimization theory and applications},
    volume={78},
    number={1},
    pages={109--125},
    year={1993},
    publisher={Springer}
}
@book{Sheskin-2004,
    doi = {10.4324/9780203489536},
    url = {https://doi.org/10.4324%2F9780203489536},
    year = {2004},
    month = {jun},
    publisher = {{CRC} Press},
    author = {David J. Sheskin},
    title = {Handbook of Parametric and Nonparametric Statistical Procedures}
}
@article{de2018convergence,
    title={Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration},
    author={De, Soham and Mukherjee, Anirbit and Ullah, Enayat},
    journal={arXiv preprint arXiv:1807.06766},
    year={2018}
}
@article{de2016mean,
    title={Mean absolute percentage error for regression models},
    author={De Myttenaere, Arnaud and Golden, Boris and Le Grand, B{\'e}n{\'e}dicte and Rossi, Fabrice},
    journal={Neurocomputing},
    volume={192},
    pages={38--48},
    year={2016},
    publisher={Elsevier}
}
@article{vastrad2013performance,
    title={Performance analysis of neural network models for oxazolines and oxazoles derivatives descriptor dataset},
    author={Vastrad, Chanabasayya and others},
    journal={arXiv preprint arXiv:1312.2853},
    year={2013}
}
@book{haftka2012elements,
    title={Elements of structural optimization},
    author={Haftka, Raphael T and G{\"u}rdal, Zafer},
    volume={11},
    year={2012},
    publisher={Springer Science \& Business Media}
}
@book{craveur2014optimisation,
    title={Optimisation des structures m{\'e}caniques: M{\'e}thodes num{\'e}riques et {\'e}l{\'e}ments finis},
    author={Craveur, Jean-Charles and Bruyneel, Michael and Gourmelen, Pierre},
    year={2014},
    publisher={Dunod}
}
@book{rouaud2014calcul,
    title={Calcul d’incertitudes},
    author={Rouaud, Mathieu},
    year={2014},
    publisher={Paris, France: Creative Commons}
}


@article{thangaraj2011particle,
    title={Particle swarm optimization: hybridization perspectives and experimental illustrations},
    author={Thangaraj, Radha and Pant, Millie and Abraham, Ajith and Bouvry, Pascal},
    journal={Applied Mathematics and Computation},
    volume={217},
    number={12},
    pages={5208--5226},
    year={2011},
    publisher={Elsevier}
}
@inproceedings{li2007novel,
    title={A novel hybrid particle swarm optimization algorithm combined with harmony search for high dimensional optimization problems},
    author={Li, Hong-qi and Li, Li},
    booktitle={The 2007 International Conference on Intelligent Pervasive Computing (IPC 2007)},
    pages={94--97},
    year={2007},
    organization={IEEE}
}


%% used test functions references 

@article{jamil2013literature,
    title={A literature survey of benchmark functions for global optimisation problems},
    author={Jamil, Momin and Yang, Xin-She},
    journal={International Journal of Mathematical Modelling and Numerical Optimisation},
    volume={4},
    number={2},
    pages={150--194},
    year={2013},
    publisher={Inderscience Publishers Ltd}
}
@article{abusnaina2019modified,
    title={Modified global flower pollination algorithm and its application for optimization problems},
    author={Abusnaina, Ahmed A and Alsalibi, Ahmed I and others},
    journal={Interdisciplinary Sciences: Computational Life Sciences},
    volume={11},
    number={3},
    pages={496--507},
    year={2019},
    publisher={Springer}
}

%% added references

@article{bergstra2012random,
  title={Random search for hyper-parameter optimization.},
  author={Bergstra, James and Bengio, Yoshua},
  journal={Journal of machine learning research},
  volume={13},
  number={2},
  year={2012}
}

@article{lydia2019adagrad,
  title={Adagrad—An optimizer for stochastic gradient descent},
  author={Lydia, Agnes and Francis, Sagayaraj},
  journal={Int. J. Inf. Comput. Sci.},
  volume={6},
  number={5},
  year={2019}
}

@article{defossez2020convergence,
  title={On the convergence of adam and adagrad},
  author={D{\'e}fossez, Alexandre and Bottou, L{\'e}on and Bach, Francis and Usunier, Nicolas},
  journal={arXiv preprint arXiv:2003.02395},
  year={2020}
}

@inproceedings{zhang2018improved,
  title={An improved Adagrad gradient descent optimization algorithm},
  author={Zhang, N and Lei, D and Zhao, JF},
  booktitle={2018 Chinese Automation Congress (CAC)},
  pages={2359--2362},
  year={2018},
  organization={IEEE}
}

@inproceedings{mukkamala2017variants,
  title={Variants of rmsprop and adagrad with logarithmic regret bounds},
  author={Mukkamala, Mahesh Chandra and Hein, Matthias},
  booktitle={International Conference on Machine Learning},
  pages={2545--2553},
  year={2017},
  organization={PMLR}
}

@inproceedings{reddy2018handwritten,
  title={Handwritten Hindi digits recognition using convolutional neural network with RMSprop optimization},
  author={Reddy, R Vijava Kumar and Rao, B Srinivasa and Raju, K Prudvi},
  booktitle={2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS)},
  pages={45--51},
  year={2018},
  organization={IEEE}
}

@article{khan2017vprop,
  title={Vprop: Variational inference using rmsprop},
  author={Khan, Mohammad Emtiyaz and Liu, Zuozhu and Tangkaratt, Voot and Gal, Yarin},
  journal={arXiv preprint arXiv:1712.01038},
  year={2017}
}

@inproceedings{reddi2018adaptive,
  title={Adaptive methods for nonconvex optimization},
  author={Reddi, S and Zaheer, Manzil and Sachan, Devendra and Kale, Satyen and Kumar, Sanjiv},
  booktitle={Proceeding of 32nd Conference on Neural Information Processing Systems (NIPS 2018)},
  year={2018}
}

@inproceedings{babu2020performance,
  title={Performance Analysis of Cost and Accuracy for Whale Swarm and RMSprop Optimizer},
  author={Babu, D Vijendra and Karthikeyan, C and Kumar, Abhishek and others},
  booktitle={IOP Conference Series: Materials Science and Engineering},
  volume={993},
  number={1},
  pages={012080},
  year={2020},
  organization={IOP Publishing}
}
%% momentum
@article{yaqub2020state,
  title={State-of-the-Art CNN Optimizer for Brain Tumor Segmentation in Magnetic Resonance Images},
  author={Yaqub, Muhammad and Jinchao, Feng and Zia, M Sultan and Arshid, Kaleem and Jia, Kebin and Rehman, Zaka Ur and Mehmood, Atif},
  journal={Brain Sciences},
  volume={10},
  number={7},
  pages={427},
  year={2020},
  publisher={Multidisciplinary Digital Publishing Institute}
}

@article{ding2018adaptive,
  title={An adaptive control momentum method as an optimizer in the cloud},
  author={Ding, Jianhao and Han, Lansheng and Li, Dan},
  journal={Future Generation Computer Systems},
  volume={89},
  pages={192--200},
  year={2018},
  publisher={Elsevier}
}

@article{duda2019sgd,
  title={SGD momentum optimizer with step estimation by online parabola model},
  author={Duda, Jarek},
  journal={arXiv preprint arXiv:1907.07063},
  year={2019}
}

@electronic{website1,
 title = {CS231n Convolutional Neural Networks for Visual Recognition},
 url = {https://cs231n.github.io/neural-networks-3/#sgd},
 urldate = {01.02.2021},

}

@inproceedings{cutkosky2020momentum,
  title={Momentum improves normalized sgd},
  author={Cutkosky, Ashok and Mehta, Harsh},
  booktitle={International Conference on Machine Learning},
  pages={2260--2268},
  year={2020},
  organization={PMLR}
}

@inproceedings{yu2019linear,
  title={On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization},
  author={Yu, Hao and Jin, Rong and Yang, Sen},
  booktitle={International Conference on Machine Learning},
  pages={7184--7193},
  year={2019},
  organization={PMLR}
}

@article{goh2017momentum,
  title={Why momentum really works},
  author={Goh, Gabriel},
  journal={Distill},
  volume={2},
  number={4},
  pages={e6},
  year={2017}
}

@article{chen2019decaying,
  title={Decaying momentum helps neural network training},
  author={Chen, John and Kyrillidis, Anastasios},
  journal={arXiv preprint arXiv:1910.04952},
  year={2019}
}
%% SGD
@article{yang2018modified,
  title={Modified convolutional neural network based on dropout and the stochastic gradient descent optimizer},
  author={Yang, Jing and Yang, Guanci},
  journal={Algorithms},
  volume={11},
  number={3},
  pages={28},
  year={2018},
  publisher={Multidisciplinary Digital Publishing Institute}
}

@inproceedings{hardt2016train,
  title={Train faster, generalize better: Stability of stochastic gradient descent},
  author={Hardt, Moritz and Recht, Ben and Singer, Yoram},
  booktitle={International Conference on Machine Learning},
  pages={1225--1234},
  year={2016},
  organization={PMLR}
}
@article{ilboudo2020tadam,
  title={Tadam: A robust stochastic gradient optimizer},
  author={Ilboudo, Wendyam Eric Lionel and Kobayashi, Taisuke and Sugimoto, Kenji},
  journal={arXiv preprint arXiv:2003.00179},
  year={2020}
}

@article{sweke2020stochastic,
  title={Stochastic gradient descent for hybrid quantum-classical optimization},
  author={Sweke, Ryan and Wilde, Frederik and Meyer, Johannes Jakob and Schuld, Maria and F{\"a}hrmann, Paul K and Meynard-Piganeau, Barth{\'e}l{\'e}my and Eisert, Jens},
  journal={Quantum},
  volume={4},
  pages={314},
  year={2020},
  publisher={Verein zur F{\"o}rderung des Open Access Publizierens in den Quantenwissenschaften}
}


 \end{filecontents}




\renewenvironment{abstract}
{\par\noindent\textbf{\abstractname.}\ \ignorespaces}
{\par\medskip}



\begin{document}

\runningheads{Image reconstruction from incomplete convolution data}{Z. Shen, Z. Geng and J. Yang}

\title{Image reconstruction from incomplete convolution data via total variation regularization}
%\footnote[2]{This
%work was partly supported by Chinese NSF grant (NO. 10771162)}}

\author{Zhida Shen \affil{1},
   Zhe Geng \affil{2}, Junfeng Yang \affil{1}$^,$\corrauth
 }

\address{
\affilnum{1}Department of Mathematics, Nanjing University, China;
\affilnum{2}Department of Mathematics, Peking University, China
}

\corraddr{Junfeng Yang (Email: [email protected]). Department of Mathematics, Nanjing University. 22 Hankou Road, Nanjing, Jiangsu Province, China (210093).}



  

    

  %% \maketitle 
    

\begin{abstract}
\small
In this paper, we present an empirical comparison of several gradient descent variants used to solve global optimization problems over large search domains. The aim is to identify which of them is the most suitable for solving an optimization problem regardless of the features of the test function used. Five variants of gradient descent were implemented in the R language and tested on a benchmark of five test functions. We established the dependence between the choice of variant and the obtained performance using a chi-squared test on a sample of 120 experiments. The test functions vary in convexity and in the number of local minima, and are classified according to several criteria. We chose a range of values for each algorithm parameter. Results are compared in terms of accuracy and convergence speed. Based on the obtained results, we defined a priority of usage for these variants and contributed a new hybrid optimizer. The new optimizer is tested on a benchmark of well-known test functions, and two real applications are proposed. Except for the classical gradient descent algorithm, only stochastic versions of these variants are considered in this paper.

\end{abstract}

\textbf{Keywords}: global numerical optimization, mono-objective, gradient descent variants, analytic hierarchy process, hybrid optimization, random search


\section{Introduction}

\small
Optimization techniques find applications in fields such as differential calculus, regression models for prediction, shape optimization, and topological optimization, as well as in logistics and graph theory \cite{yang2018optimization}.
Optimization is mono-objective when it consists of finding the best solution for a single objective \cite{kelley1999iterative}. Multi-objective optimization, on the other hand, involves multiple contradictory criteria for making a decision \cite{cavazzuti2012optimization}.
Numerical methods can commonly provide practical and adaptable solutions for both cases. Since finding exact analytical solutions is a hard task, because of the dimensionality or the nature of the objective function, algorithms such as gradient descent are used to find acceptable solutions within an error margin \cite{grivet2012methodes}.
One of the main issues with gradient descent variants is how to select the appropriate algorithm according to the problem's features. When applying gradient descent variants to a real application, a practitioner will prefer to rely on a few criteria for making a quick decision, because not all variants have the same performance. The use of a decision technique helps save time, especially when running a simulation. For this, we compare the performance of gradient descent variants on a panel of test functions. We then apply a chi-squared test to help make decisions that match the researcher's goals or understanding of a problem.
The paper is organized as follows: In Section 2, we provide a review of the related work.
\bibliographystyle{ieeetr}
\bibliography{references}

\end{document}

This attempt fails to produce the author affiliation section!

The iapress class file used can be found here:

https://drive.google.com/file/d/1M3ZEjdt6PSXOuzN9d2Ex4CYdOjtBAGNI/view?usp=sharing

Thank you for your help!

Answer 1


Try this code:

\documentclass[times]{iapress}
\usepackage{moreverb}

%\usepackage[dvips,colorlinks,bookmarksopen,bookmarksnumbered,citecolor=red,urlcolor=red]{hyperref} % if run by LaTex (Shift + Ctrl + L)
\usepackage[colorlinks,bookmarksopen,bookmarksnumbered,citecolor=red,urlcolor=red]{hyperref} % if run by PDFLaTex


\def\volumeyear{202x}
\def\volumenumber{x}
\def\volumemonth{Month}
\setcounter{page}{00}

\usepackage{amsmath,amssymb}
\usepackage{graphicx}
\renewcommand{\baselinestretch}{1.01}   

\usepackage{colortbl}
\usepackage[margin=1cm]{caption}
\usepackage{multirow}
\usepackage{booktabs}
\usepackage{float}
\usepackage{graphicx}
\graphicspath{{figures/}}

\usepackage{longtable}

%\usepackage{algorithm}

\usepackage{algorithmic}

\usepackage{pdfpages}
\usepackage[ ruled,vlined]{algorithm2e}
\usepackage{ifoddpage}
\usepackage{blindtext}
%\usepackage{authblk} 
\usepackage{listings}
\usepackage{xcolor}
\usepackage{ragged2e}
\usepackage{lipsum}

%% added for affiliations   

%%
\makeatletter
\def\normaljustify{%
  \let\\\@centercr\rightskip\z@skip \leftskip\z@skip%
  \parfillskip=0pt plus 1fil}
\makeatother


%% attention here ! 
\renewcommand{\topfraction}{0.9}

\lstset{
basicstyle=\scriptsize\tt,
}
%%
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}

\lstdefinestyle{mystyle}{
    backgroundcolor=\color{backcolour},   
    commentstyle=\color{codegreen},
    keywordstyle=\color{magenta},
    numberstyle=\tiny\color{codegray},
    stringstyle=\color{codepurple},
    basicstyle=\ttfamily\footnotesize,
    breakatwhitespace=false,         
    breaklines=true,                 
    captionpos=b,                    
    keepspaces=true,                 
    numbers=left,                    
    numbersep=5pt,                  
    showspaces=false,                
    showstringspaces=false,
    showtabs=false,                  
    tabsize=2
}

%   

\usepackage{pdflscape}  

%\renewenvironment{abstract}
%{\par\noindent\textbf{\abstractname.}\ \ignorespaces}
%{\par\medskip}
    
\begin{document}

\runningheads{Image reconstruction from incomplete convolution data}{Z. Shen, Z. Geng and J. Yang}

\title{Image reconstruction from incomplete convolution data via total variation regularization}
%\footnote[2]{This
%work was partly supported by Chinese NSF grant (NO. 10771162)}}

\author{Zhida Shen \affil{1},
    Zhe Geng \affil{2}, Junfeng Yang \affil{1}$^,$\corrauth
}

\address{
    \affilnum{1}Department of Mathematics, Nanjing University, China;
    \affilnum{2}Department of Mathematics, Peking University, China
}

\corraddr{Junfeng Yang (Email: [email protected]). Department of Mathematics, Nanjing University. 22 Hankou Road, Nanjing, Jiangsu Province, China (210093).}

    

\begin{abstract}
\small
In this paper, we present an empirical comparison of several gradient descent variants used to solve global optimization problems over large search domains. The aim is to identify which of them is the most suitable for solving an optimization problem regardless of the features of the test function used. Five variants of gradient descent were implemented in the R language and tested on a benchmark of five test functions. We established the dependence between the choice of variant and the obtained performance using a chi-squared test on a sample of 120 experiments. The test functions vary in convexity and in the number of local minima, and are classified according to several criteria. We chose a range of values for each algorithm parameter. Results are compared in terms of accuracy and convergence speed. Based on the obtained results, we defined a priority of usage for these variants and contributed a new hybrid optimizer. The new optimizer is tested on a benchmark of well-known test functions, and two real applications are proposed. Except for the classical gradient descent algorithm, only stochastic versions of these variants are considered in this paper.
\end{abstract}



\keywords{Keywords: global numerical optimization, mono-objective, gradient descent variants, analytic hierarchy process, hybrid optimization, random search}

\maketitle

\section{Introduction}

\small
Optimization techniques find applications in fields such as differential calculus, regression models for prediction, shape optimization, and topological optimization, as well as in logistics and graph theory \cite{yang2018optimization}.
Optimization is mono-objective when it consists of finding the best solution for a single objective \cite{kelley1999iterative}. Multi-objective optimization, on the other hand, involves multiple contradictory criteria for making a decision \cite{cavazzuti2012optimization}.
Numerical methods can commonly provide practical and adaptable solutions for both cases. Since finding exact analytical solutions is a hard task, because of the dimensionality or the nature of the objective function, algorithms such as gradient descent are used to find acceptable solutions within an error margin \cite{grivet2012methodes}.
One of the main issues with gradient descent variants is how to select the appropriate algorithm according to the problem's features. When applying gradient descent variants to a real application, a practitioner will prefer to rely on a few criteria for making a quick decision, because not all variants have the same performance. The use of a decision technique helps save time, especially when running a simulation. For this, we compare the performance of gradient descent variants on a panel of test functions. We then apply a chi-squared test to help make decisions that match the researcher's goals or understanding of a problem.
The paper is organized as follows: In Section 2, we provide a review of the related work.
%\bibliographystyle{ieeetr}
%\bibliography{references.bib}

\end{document}
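
Compared with the original preamble, the main changes in the answer are: the dvips option is dropped from hyperref when compiling with pdflatex, graphicx is loaded, the \renewenvironment{abstract} redefinition is commented out so that the class's own abstract layout is used, the keywords are passed through the class's \keywords command, and \maketitle is issued after the abstract and keywords. That last call is what actually typesets the title block, and with it the authors' affiliations; in the original code it was commented out. As a minimal sketch of the required ordering (the author, affiliation, address, and email values below are placeholders, and the iapress commands are assumed to behave exactly as in the full example above):

\documentclass[times]{iapress}
\begin{document}

\runningheads{Short running title}{A. Author and B. Author}

\title{Full paper title}

\author{First Author \affil{1}, Second Author \affil{2}\corrauth}

\address{
    \affilnum{1}First Department, First University, Country;
    \affilnum{2}Second Department, Second University, Country
}

\corraddr{Second Author (Email: second.author@example.org). Postal address of the corresponding author.}

\begin{abstract}
One or two sentences of abstract text.
\end{abstract}

\keywords{Keywords: first keyword, second keyword}

\maketitle % without \maketitle the title block, and hence the affiliations, are never typeset

\section{Introduction}
Body text.

\end{document}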
