OpenShift Master fails to start - assetConfig.servingInfo Invalid value ""

I installed OpenShift Enterprise 3.2 on a RHEL 7 system following the official advanced installation documentation, using this template. My setup currently consists of two machines: one master and one node. I ran the ansible playbook with the following command:

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

It completed successfully. However, when I try to open the OpenShift console in a browser, the page does not load, which led me to check the service status. The service fails repeatedly; the log is attached below. What makes me suspicious: could the "Invalid value" warnings at the beginning of the log be causing the failure?
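For reference, this is roughly how I inspected the service (a sketch; the unit name matches the systemd entries in the log below):

# check the unit state, then pull its recent journal entries in full
systemctl status atomic-openshift-master
journalctl -u atomic-openshift-master --no-pager -l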

To use a Let's Encrypt SSL certificate, I added the following lines to /etc/ansible/hosts:

openshift_master_cluster_public_hostname=myhost
openshift_master_overwrite_named_certificates=true
openshift_master_named_certificates=[{"certfile": "/etc/letsencrypt/archive/myhost/fullchain.pem", "keyfile": "/etc/letsencrypt/archive/myhost/privkey.pem", "names": ["myhost"]}]
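If I understand the installer correctly, these variables should be rendered into the servingInfo section of /etc/origin/master/master-config.yaml roughly like this (a sketch based on the inventory values above, not a copy of the actual file from my master; exact output may differ by installer version):

servingInfo:
  namedCertificates:
  # serve this cert/key pair for TLS requests whose SNI hostname matches "names"
  - certFile: /etc/letsencrypt/archive/myhost/fullchain.pem
    keyFile: /etc/letsencrypt/archive/myhost/privkey.pem
    names:
    - myhost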

Here is what I found in the master service log:

os-master systemd[1]: Starting Atomic OpenShift Master...
W0128 10:52:59.306231   19245 start_master.go:271] assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console
W0128 10:52:59.306308   19245 start_master.go:271] assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console
W0128 10:52:59.306334   19245 start_master.go:271] assetConfig.servingInfo: Invalid value: "\u003cnot displayed\u003e": changes to assetConfig certificate configuration are not used when colocated with master API
E0128 10:52:59.811565   19245 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.ClusterPolicy: error #0: dial tcp 192.168.0.235:4001: connection refused
E0128 10:52:59.811592   19245 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.ClusterPolicyBinding: error #0: dial tcp 192.168.0.235:4001: connection refused
E0128 10:52:59.811647   19245 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.PolicyBinding: error #0: dial tcp 192.168.0.235:4001: connection refused
E0128 10:52:59.811653   19245 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.Policy: error #0: dial tcp 192.168.0.235:4001: connection refused
E0128 10:52:59.811685   19245 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.Group: error #0: dial tcp 192.168.0.235:4001: connection refused
E0128 10:52:59.812919   19245 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.OAuthAccessToken: error #0: dial tcp 192.168.0.235:4001: connection refused
E0128 10:52:59.813097   19245 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.User: error #0: dial tcp 192.168.0.235:4001: connection refused
I0128 10:53:00.071073   19245 plugins.go:71] No cloud provider specified.
I0128 10:53:00.189257   19245 genericapiserver.go:82] Adding storage destination for group
I0128 10:53:00.189305   19245 genericapiserver.go:82] Adding storage destination for group extensions
I0128 10:53:00.189325   19245 start_master.go:384] Starting master on 0.0.0.0:8443 (v3.2.1.34-20-g6367d5d)
I0128 10:53:00.189331   19245 start_master.go:385] Public master address is https://cluster.test-env.local:8443
I0128 10:53:00.189350   19245 start_master.go:389] Using images from "openshift3/ose-<component>:v3.2.1.34"
I0128 10:53:00.309754   19245 server.go:64] etcd: peerTLS: cert = /etc/origin/master/etcd.server.crt, key = /etc/origin/master/etcd.server.key, ca = /etc/origin/master/ca-bundle.crt, trusted-ca = , client-cert-auth = false
I0128 10:53:00.435788   19245 server.go:75] etcd: listening for peers on https://0.0.0.0:7001
I0128 10:53:00.435831   19245 server.go:86] etcd: clientTLS: cert = /etc/origin/master/etcd.server.crt, key = /etc/origin/master/etcd.server.key, ca = /etc/origin/master/ca-bundle.crt, trusted-ca = , client-cert-auth = false
I0128 10:53:00.559343   19245 server.go:100] etcd: listening for client requests on https://0.0.0.0:4001
I0128 10:53:00.691274   19245 run.go:61] Started etcd at os-master:4001
I0128 10:53:00.850210   19245 run_components.go:204] Using default project node label selector:
I0128 10:53:00.977996   19245 master.go:79] Using the lease endpoint reconciler
W0128 10:53:01.292447   19245 lease_endpoint_reconciler.go:174] Resetting endpoints for master service "kubernetes" to [192.168.0.235]
I0128 10:53:01.584964   19245 master.go:264] Started Kubernetes API at 0.0.0.0:8443/api/v1
I0128 10:53:01.584996   19245 master.go:264] Started Kubernetes API Extensions at 0.0.0.0:8443/apis/extensions/v1beta1
I0128 10:53:01.585001   19245 master.go:264] Started Origin API at 0.0.0.0:8443/oapi/v1
I0128 10:53:01.585005   19245 master.go:264] Started OAuth2 API at 0.0.0.0:8443/oauth
I0128 10:53:01.585009   19245 master.go:264] Started Web Console 0.0.0.0:8443/console/
I0128 10:53:01.585013   19245 master.go:264] Started Swagger Schema API at 0.0.0.0:8443/swaggerapi/
I0128 10:53:01.905160   19245 ensure.go:231] Ignoring bootstrap policy file because cluster policy found
I0128 10:53:02.109520   19245 ensure.go:86] Added build-controller service accounts to the system:build-controller cluster role: <nil>
I0128 10:53:02.236048   19245 ensure.go:86] Added daemonset-controller service accounts to the system:daemonset-controller cluster role: <nil>
I0128 10:53:02.348228   19245 ensure.go:86] Added deployment-controller service accounts to the system:deployment-controller cluster role: <nil>
I0128 10:53:02.450625   19245 ensure.go:86] Added gc-controller service accounts to the system:gc-controller cluster role: <nil>
I0128 10:53:02.555825   19245 ensure.go:86] Added hpa-controller service accounts to the system:hpa-controller cluster role: <nil>
I0128 10:53:02.651823   19245 ensure.go:86] Added job-controller service accounts to the system:job-controller cluster role: <nil>
I0128 10:53:02.754298   19245 ensure.go:86] Added namespace-controller service accounts to the system:namespace-controller cluster role: <nil>
I0128 10:53:02.852278   19245 ensure.go:86] Added pv-binder-controller service accounts to the system:pv-binder-controller cluster role: <nil>
I0128 10:53:02.956611   19245 ensure.go:86] Added pv-provisioner-controller service accounts to the system:pv-provisioner-controller cluster role: <nil>
I0128 10:53:03.058727   19245 ensure.go:86] Added pv-recycler-controller service accounts to the system:pv-recycler-controller cluster role: <nil>
I0128 10:53:03.154841   19245 ensure.go:86] Added replication-controller service accounts to the system:replication-controller cluster role: <nil>
W0128 10:53:03.156189   19245 run_components.go:179] Binding DNS on port 8053 instead of 53 (you may need to run as root and update your config), using 0.0.0.0:8053 which will not resolve from all locations
I0128 10:53:03.357280   19245 run_components.go:199] DNS listening at 0.0.0.0:8053
os-master systemd[1]: Started Atomic OpenShift Master.
I0128 10:53:03.357370   19245 start_master.go:528] Controllers starting (*)
I0128 10:53:03.546822   19245 nodecontroller.go:143] Sending events to api server.
I0128 10:53:03.547298   19245 factory.go:155] Creating scheduler from configuration: {{ } [{PodFitsHostPorts <nil>} {PodFitsResources <nil>} {NoDiskConflict <nil>} {NoVolumeZoneConflict <nil>} {MatchNodeSelector <nil>} {MaxEBSVolumeCount <nil>} {MaxGCEPDVolumeCount <nil>} {Region 0xc20d4d5f40}] [{LeastRequestedPriority 1 <nil>} {BalancedResourceAllocation 1 <nil>} {SelectorSpreadPriority 1 <nil>} {NodeAffinityPriority 1 <nil>} {Zone 2 0xc20ccacbf0}] []}
I0128 10:53:03.547356   19245 factory.go:164] Registering predicate: PodFitsHostPorts
I0128 10:53:03.547371   19245 plugins.go:122] Predicate type PodFitsHostPorts already registered, reusing.
I0128 10:53:03.547394   19245 factory.go:164] Registering predicate: PodFitsResources
I0128 10:53:03.547402   19245 plugins.go:122] Predicate type PodFitsResources already registered, reusing.
I0128 10:53:03.547416   19245 factory.go:164] Registering predicate: NoDiskConflict
I0128 10:53:03.547423   19245 plugins.go:122] Predicate type NoDiskConflict already registered, reusing.
I0128 10:53:03.547436   19245 factory.go:164] Registering predicate: NoVolumeZoneConflict
I0128 10:53:03.547444   19245 plugins.go:122] Predicate type NoVolumeZoneConflict already registered, reusing.
I0128 10:53:03.547461   19245 factory.go:164] Registering predicate: MatchNodeSelector
I0128 10:53:03.547468   19245 plugins.go:122] Predicate type MatchNodeSelector already registered, reusing.
I0128 10:53:03.547483   19245 factory.go:164] Registering predicate: MaxEBSVolumeCount
I0128 10:53:03.547491   19245 plugins.go:122] Predicate type MaxEBSVolumeCount already registered, reusing.
I0128 10:53:03.547506   19245 factory.go:164] Registering predicate: MaxGCEPDVolumeCount
I0128 10:53:03.547514   19245 plugins.go:122] Predicate type MaxGCEPDVolumeCount already registered, reusing.
I0128 10:53:03.547529   19245 factory.go:164] Registering predicate: Region
I0128 10:53:03.547544   19245 factory.go:170] Registering priority: LeastRequestedPriority
I0128 10:53:03.547553   19245 plugins.go:191] Priority type LeastRequestedPriority already registered, reusing.
I0128 10:53:03.547574   19245 factory.go:170] Registering priority: BalancedResourceAllocation
I0128 10:53:03.547582   19245 plugins.go:191] Priority type BalancedResourceAllocation already registered, reusing.
I0128 10:53:03.547601   19245 factory.go:170] Registering priority: SelectorSpreadPriority
I0128 10:53:03.547610   19245 plugins.go:191] Priority type SelectorSpreadPriority already registered, reusing.
I0128 10:53:03.547629   19245 factory.go:170] Registering priority: NodeAffinityPriority
I0128 10:53:03.547638   19245 plugins.go:191] Priority type NodeAffinityPriority already registered, reusing.
I0128 10:53:03.547655   19245 factory.go:170] Registering priority: Zone
I0128 10:53:03.547669   19245 factory.go:190] creating scheduler with fit predicates 'map[MaxGCEPDVolumeCount:{} Region:{} PodFitsHostPorts:{} PodFitsResources:{} NoDiskConflict:{} NoVolumeZoneConflict:{} MatchNodeSelector:{} MaxEBSVolumeCount:{}]' and priority functions 'map[NodeAffinityPriority:{} Zone:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} SelectorSpreadPriority:{}]
I0128 10:53:03.548922   19245 replication_controller.go:208] Starting RC Manager
I0128 10:53:03.549637   19245 horizontal.go:120] Starting HPA Controller
I0128 10:53:03.549977   19245 controller.go:211] Starting Daemon Sets controller manager
I0128 10:53:03.553095   19245 nodecontroller.go:416] NodeController observed a new Node: api.Node{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"node01.test-env.local", GenerateName:"", Namespace:"", SelfLink:"/api/v1/nodes/node01.test-env.local", UID:"67be84ed-180c-11e9-83d6-ac1f6bb2846e", ResourceVersion:"279250", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63683074517, nsec:0, loc:(*time.Location)(0x59af580)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"kubernetes.io/hostname":"node01.test-env.local", "region":"primary", "zone":"worker"}, Annotations:map[string]string(nil)}, Spec:api.NodeSpec{PodCIDR:"", ExternalID:"node01.test-env.local", ProviderID:"", Unschedulable:false}, Status:api.NodeStatus{Capacity:api.ResourceList{"cpu":resource.Quantity{Amount:20.000, Format:"DecimalSI"}, "memory":resource.Quantity{Amount:33284198400.000, Format:"BinarySI"}, "pods":resource.Quantity{Amount:110.000, Format:"DecimalSI"}}, Allocatable:api.ResourceList{"memory":resource.Quantity{Amount:33284198400.000, Format:"BinarySI"}, "pods":resource.Quantity{Amount:110.000, Format:"DecimalSI"}, "cpu":resource.Quantity{Amount:20.000, Format:"DecimalSI"}}, Phase:"", Conditions:[]api.NodeCondition{api.NodeCondition{Type:"OutOfDisk", Status:"False", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63683931884, nsec:0, loc:(*time.Location)(0x59af580)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63683074517, nsec:0, loc:(*time.Location)(0x59af580)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, api.NodeCondition{Type:"Ready", Status:"True", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63683931884, nsec:0, loc:(*time.Location)(0x59af580)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63683844451, nsec:0, loc:(*time.Location)(0x59af580)}}, Reason:"KubeletReady", Message:"kubelet is posting ready status"}}, Addresses:[]api.NodeAddress{api.NodeAddress{Type:"LegacyHostIP", Address:"192.168.0.236"}, api.NodeAddress{Type:"InternalIP", Address:"192.168.0.236"}}, DaemonEndpoints:api.NodeDaemonEndpoints{KubeletEndpoint:api.DaemonEndpoint{Port:10250}}, NodeInfo:api.NodeSystemInfo{MachineID:"13c8c5e54f714391b2a53a5c94bcc5bf", SystemUUID:"00000000-0000-0000-0000-AC1F6BB20CA8", BootID:"308d0709-f19f-4086-a2e1-0eff144a87bf", KernelVersion:"3.10.0-957.1.3.el7.x86_64", OSImage:"Unknown", ContainerRuntimeVersion:"docker://1.10.3", KubeletVersion:"v1.2.0-36-g4a3f9c5", KubeProxyVersion:"v1.2.0-36-g4a3f9c5"}, Images:[]api.ContainerImage{api.ContainerImage{Names:[]string{"registry.access.redhat.com/openshift3/ose-deployer:v3.2.1.34"}, SizeBytes:484648699}, api.ContainerImage{Names:[]string{"registry.access.redhat.com/openshift3/ose-pod:v3.2.1.34"}, SizeBytes:216220653}}}}
I0128 10:53:03.553324   19245 nodecontroller.go:604] Recording Registered Node node01.test-env.local in NodeController event message for node node01.test-env.local
I0128 10:53:03.553350   19245 nodecontroller.go:416] NodeController observed a new Node: api.Node{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"os-master", GenerateName:"", Namespace:"", SelfLink:"/api/v1/nodes/os-master", UID:"3aaf8c61-1805-11e9-b470-ac1f6bb2846e", ResourceVersion:"279249", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63683071434, nsec:0, loc:(*time.Location)(0x59af580)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"region":"infra", "zone":"default", "kubernetes.io/hostname":"os-master"}, Annotations:map[string]string(nil)}, Spec:api.NodeSpec{PodCIDR:"", ExternalID:"os-master", ProviderID:"", Unschedulable:false}, Status:api.NodeStatus{Capacity:api.ResourceList{"cpu":resource.Quantity{Amount:20.000, Format:"DecimalSI"}, "memory":resource.Quantity{Amount:33284198400.000, Format:"BinarySI"}, "pods":resource.Quantity{Amount:110.000, Format:"DecimalSI"}}, Allocatable:api.ResourceList{"cpu":resource.Quantity{Amount:20.000, Format:"DecimalSI"}, "memory":resource.Quantity{Amount:33284198400.000, Format:"BinarySI"}, "pods":resource.Quantity{Amount:110.000, Format:"DecimalSI"}}, Phase:"", Conditions:[]api.NodeCondition{api.NodeCondition{Type:"OutOfDisk", Status:"False", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63683931882, nsec:0, loc:(*time.Location)(0x59af580)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63683071434, nsec:0, loc:(*time.Location)(0x59af580)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, api.NodeCondition{Type:"Ready", Status:"True", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63683931882, nsec:0, loc:(*time.Location)(0x59af580)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63683920546, nsec:0, loc:(*time.Location)(0x59af580)}}, Reason:"KubeletReady", Message:"kubelet is posting ready status"}}, Addresses:[]api.NodeAddress{api.NodeAddress{Type:"LegacyHostIP", Address:"192.168.0.235"}, api.NodeAddress{Type:"InternalIP", Address:"192.168.0.235"}}, DaemonEndpoints:api.NodeDaemonEndpoints{KubeletEndpoint:api.DaemonEndpoint{Port:10250}}, NodeInfo:api.NodeSystemInfo{MachineID:"c9cd124ae34b47e4ab2b47251d647876", SystemUUID:"00000000-0000-0000-0000-AC1F6BB2846E", BootID:"66b7397b-70d2-4089-848a-83f1c35772de", KernelVersion:"3.10.0-957.1.3.el7.x86_64", OSImage:"Unknown", ContainerRuntimeVersion:"docker://1.10.3", KubeletVersion:"v1.2.0-36-g4a3f9c5", KubeProxyVersion:"v1.2.0-36-g4a3f9c5"}, Images:[]api.ContainerImage{api.ContainerImage{Names:[]string{"registry.access.redhat.com/openshift3/ose-deployer:v3.2.1.34"}, SizeBytes:484648699}, api.ContainerImage{Names:[]string{"registry.access.redhat.com/openshift3/ose-docker-registry:v3.2.1.34"}, SizeBytes:531472384}, api.ContainerImage{Names:[]string{"registry.access.redhat.com/openshift3/ose-pod:v3.2.1.34"}, SizeBytes:216220653}, api.ContainerImage{Names:[]string{"registry.access.redhat.com/openshift3/ose-haproxy-router:v3.2.1.34"}, SizeBytes:488011184}}}}
I0128 10:53:03.553519   19245 nodecontroller.go:604] Recording Registered Node os-master in NodeController event message for node os-master
W0128 10:53:03.553540   19245 nodecontroller.go:671] Missing timestamp for Node node01.test-env.local. Assuming now as a timestamp.
W0128 10:53:03.553557   19245 nodecontroller.go:671] Missing timestamp for Node os-master. Assuming now as a timestamp.
I0128 10:53:03.553742   19245 event.go:211] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"node01.test-env.local", UID:"node01.test-env.local", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node01.test-env.local event: Registered Node node01.test-env.local in NodeController
I0128 10:53:03.553815   19245 event.go:211] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"os-master", UID:"os-master", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node os-master event: Registered Node os-master in NodeController
F0128 10:53:03.654255   19245 master.go:127] Failed to get supported resources from server: the server has asked for the client to provide credentials
I0128 10:53:03.745620   19245 endpoints_controller.go:283] Waiting for pods controller to sync, requeuing rc mih-testing/jenkins123
I0128 10:53:03.745663   19245 endpoints_controller.go:283] Waiting for pods controller to sync, requeuing rc default/docker-registry
os-master systemd[1]: atomic-openshift-master.service: main process exited, code=exited, status=255/n/a
os-master systemd[1]: Unit atomic-openshift-master.service entered failed state.
os-master systemd[1]: atomic-openshift-master.service failed.
