Bacula: running concurrent jobs on one Linux machine

Hi everyone, I've run into a strange problem. According to the Bacula documentation, if I set "Maximum Concurrent Jobs" to a value greater than 1, I should be able to run two or more jobs concurrently.

This works fine for jobs from different servers. But when two jobs run against the same Linux server, the second job waits for the first one to finish. The jobs have the same priority (10), and each job has its own pool, volumes, and storage device.

Bacula-dir and Bacula-sd run on different Linux servers.

OS: Ubuntu 14.04, Bacula version 5.2.6

Report from bconsole

Running Jobs:
Console connected at 03-Apr-16 09:12
 JobId Level   Name                       Status
======================================================================
  4094 Full    arkive03_Share.2016-04-02_22.00.00_06 is running
  4106 Full    BackupCatalog.2016-04-02_23.10.00_19 is waiting for higher priority jobs to finish
  4112 Full    arkive03EtcBackup.2016-04-03_06.00.00_25 is waiting on max Client jobs
====

bacula-dir.conf

Director {                            # define myself
  Name = bacula.tumo.lab-dir
  DIRport = 9101                # where we listen for UA connections
  QueryFile = "/etc/bacula/scripts/query.sql"
  WorkingDirectory = "/var/lib/bacula"
  PidDirectory = "/var/run/bacula"
  Maximum Concurrent Jobs = 10
  Password = "WDT0OAXCx57U"         # Console password
  Messages = Daemon
  DirAddress = bacula.tumo.lab
}

bacula-fd.conf

FileDaemon {                          # this is me
  Name = arkive03.tumo.lab-fd
  FDport = 9102                  # where we listen for the director
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run/bacula
  Maximum Concurrent Jobs = 20
  FDAddress = 10.44.20.137
}

bacula-sd.conf

Storage {                             # definition of myself
  Name = arkive03.tumo.lab-sd
  SDPort = 9103                  # Director's port      
  WorkingDirectory = "/var/lib/bacula"
  Pid Directory = "/var/run/bacula"
  Maximum Concurrent Jobs = 20
  SDAddress = 10.44.20.137
}

Device {
  Name = Arkive03_other               # device for arkive03EtcBackup
  Media Type = File
  Archive Device = /local/bacula/backup/other
  LabelMedia = yes;                   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Device {
  Name = Arkive03_Share               # device for arkive03_Share       
  Media Type = File
  Archive Device = /local/bacula/backup/Share
  LabelMedia = yes;                   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

I tried adding "Maximum Concurrent Jobs" to the Device section, but it did not help.
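
For reference, that attempt presumably looked something like the sketch below (the value 5 is only an example, not taken from the original configuration). The directive is valid in the SD's Device resource, but by itself it is not sufficient if other limits still apply, for example the default of 1 on the Director-side Storage resource, which the first answer below raises.

Device {
  Name = Arkive03_Share
  Media Type = File
  Archive Device = /local/bacula/backup/Share
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 5         # example value; added only on the SD side
}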

Pool configuration file

Pool {
  Name = File                         # pool for arkive03EtcBackup
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Action On Purge = Truncate
  Volume Retention = 21 days         # 21 days
  Maximum Volume Bytes = 10G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
  Label Format = "Vol-"
}

Pool {
  Name = ark_share                    # pool for arkive03_Share
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Action On Purge = Truncate
  Volume Retention = 21 days         # 21 days
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Maximum Volumes = 400               # Limit number of Volumes in Pool
  Label Format = "Ark_share-"
}

JobDefs configuration file

JobDefs {
  Name = "ark_Share"
  Type = Backup
  Level = Incremental
  Client = arkive03.tumo.lab-fd
  Storage = Arkive03_Share
  Messages = Standard
  Pool = ark_share
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/arkive03_share.bsr"
}

JobDefs {
  Name = "EtcBackup"
  Type = Backup
  Level = Incremental
  Schedule = "Dayly"
  Storage = Arkive03_other
  Messages = Standard
  Pool = File
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/etc.bsr"
}
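
The JobDefs above reference Storage = Arkive03_Share and Storage = Arkive03_other. These are Storage resources in the Director configuration, which the question does not show; presumably they look roughly like the sketch below (address and password are placeholders). Note that a Director Storage resource without an explicit Maximum Concurrent Jobs directive defaults to 1; this is exactly the directive the first answer adds.

Storage {
  Name = Arkive03_Share
  Address = arkive03.tumo.lab           # assumed; the real address is not shown in the question
  SDPort = 9103
  Password = "SomePassword"             # placeholder
  Device = Arkive03_Share
  Media Type = File
  # no Maximum Concurrent Jobs here, so the default of 1 applies
}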

Client configuration for arkive03

Client {
  Name = arkive03.tumo.lab-fd
  Address = 10.44.20.137
  FDPort = 9102
  Catalog = MyCatalog
  Password = "WDT0OAXCx57U"          # password for FileDaemon
  File Retention = 30 days            # 30 days
  Job Retention = 6 months            # six months
  AutoPrune = yes                     # Prune expired Jobs/Files
}

Job {
  Name = "arkive03_Share"
  Schedule = "arkbackup"
  FileSet = "Share"
  JobDefs = "ark_Share"
  Client = "arkive03.tumo.lab-fd"
}

Job {
  Name = "arkive03EtcBackup"
  JobDefs = "EtcBackup"
  FileSet = "etc"
  Client = "arkive03.tumo.lab-fd"
}

I don't know what to do. My "share" is 10 TB and "etc" is 4 MB, yet I have to wait for Bacula to finish the 10 TB backup before it starts backing up the 4 MB. That's crazy.

Answer 1

Adding "Maximum Concurrent Jobs" to the Storage definition in storages.conf on the Bacula Director and to the Device definition on bacula-sd solves this problem.

storages.conf on the Bacula Director

Storage {
  Name = Arkive03_other
  Address = arkive03.tumo.lab                # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "SomePassword"
  Device = Arkive03_other
  Media Type = File
  Maximum Concurrent Jobs = 5
}

bacula-sd.conf

Device {
  Name = Arkive03_other
  Media Type = File
  Archive Device = /local/bacula/backup/other
  LabelMedia = yes;                   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 5
}

Answer 2

I ran into a similar problem but could not solve it.

I have a machine where Maximum Concurrent Jobs is defined in the Device section and in the Storage sections on both the SD and the Director.

I just don't have it in the Job section or in the FD on the client, because the manual says: if you want different Jobs to run simultaneously, you do not need to change the Job resource; that is the normal case.

That is exactly what I expect.

What I want to achieve is that the backup server starts several jobs at the same time. But it only ever runs one job and backs up only one client at a time.

I do want each client to run its jobs sequentially, which is why I thought the Client and FD settings were unnecessary, but maybe I'm wrong?

I don't want concurrent jobs on a client; a single client should only run one job at a time.

But the Director should start jobs on several clients at once.
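
If that is the goal, a minimal sketch of the concurrency knobs in bacula-dir.conf would look roughly as follows (the directive names are the documented ones; the Storage and Client names here are made up, and everything except the concurrency directives is omitted). The idea: raise Maximum Concurrent Jobs in the Director resource and in the Storage resources (and on the SD side, as in the first answer), but leave it at its default of 1 in each Client resource, which keeps jobs sequential per client while still letting different clients run in parallel.

Director {
  Name = bacula.tumo.lab-dir
  Maximum Concurrent Jobs = 10        # the Director itself may run several jobs at once
  # ...remaining Director directives as usual...
}

Storage {
  Name = SomeStorage                  # hypothetical storage resource
  Maximum Concurrent Jobs = 5         # several jobs may write through this storage at once
  # ...Address, SDPort, Password, Device, Media Type as usual...
}

Client {
  Name = someclient-fd                # hypothetical client
  Maximum Concurrent Jobs = 1         # the default: at most one job per client at a time
  # ...Address, FDPort, Password, Catalog as usual...
}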
