1.12.2 rebase#67
Merged
soltysh merged 53 commits into openshift:origin-4.0-kubernetes-1.12.2 on Nov 7, 2018
Origin-commit: 7331c6412a9ef1b23155d7fd928f4ddc6961a05b
Origin-commit: a5fade4cb1bb90919a356defa541a4f8ec7d5bb8
…m to allow multiple containers to union for swagger
:100644 100644 b32534e... 3e694fc... M pkg/controller/serviceaccount/tokens_controller.go
Doesn't offer enough use, complicates creation and setup.
…ore than one is present
Origin-commit: 495b8f4f7563cfee27824bb98bc73fabc4add064
Origin-commit: 33a71aff9bb4e204bf2e15af4cdfb5bd0525ce4e
Force-pushed from d6eaeb8 to c5e2820
Yes, looking at the other one...

After we land the rebase, we should consider reverting:

lgtm

And openshift/origin#20158 removed the need for
The feature gate is not yet enabled and may not be for several releases. Pod team owns allowing this to be used.
We are still keeping RBAC policies and service accounts installed. This is just to avoid duplicating that work in ansible for now. Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1602793 Origin-commit: f9d325875a67ddc89f20fd2597199859e864d2a6
Origin-commit: 557317b91e4032718f1b8d1f60a307f28403afe8
…ility Origin-commit: 183457c62577085c312f9f694a241bbe18aa0ae8
…njob e2e Origin-commit: fcc740f1c4a6b50b953921287259e6afc0b88f4a
openshift-io/node-selector if scheduler.alpha.kubernetes.io/node-selector is set. Origin-commit: f2d078606421a611f377038149dd161a6263e04f
Origin-commit: d6648903cd21a8fe333c2572dda003ac78760b12
Origin-commit: 8b9969bab1311082afd90a0018376b2731e758f1
Origin-commit: 653bec41e858a8086f9b3b45d1f9d7f9ce703a9a
Signed-off-by: Mrunal Patel <mpatel@redhat.com> Origin-commit: 075640e111e0316b775bb4a6d0dcae5d9fc8f389
We are seeing flakes where pod event isn't yet visible when we check for it leading to test failure. Signed-off-by: Mrunal Patel <mpatel@redhat.com> Origin-commit: 1f7577f947464fd386981f770437f9461aa1bee3
Origin-commit: 10523f8f8001565ac4de1c4c0b0fdb241ffe0b37
With CRI-O we've been hitting a lot of flakes with the following test:
[sig-apps] CronJob should remove from active list jobs that have been deleted
The events shown in the test failures in both kube and openshift were the following:
STEP: Found 13 events.
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:02 +0000 UTC - event for forbid: {cronjob-controller } SuccessfulCreate: Created job forbid-1540412040
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:02 +0000 UTC - event for forbid-1540412040: {job-controller } SuccessfulCreate: Created pod: forbid-1540412040-z7n7t
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:02 +0000 UTC - event for forbid-1540412040-z7n7t: {default-scheduler } Scheduled: Successfully assigned e2e-tests-cronjob-rjr2m/forbid-1540412040-z7n7t to 127.0.0.1
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:03 +0000 UTC - event for forbid-1540412040-z7n7t: {kubelet 127.0.0.1} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:03 +0000 UTC - event for forbid-1540412040-z7n7t: {kubelet 127.0.0.1} Created: Created container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:03 +0000 UTC - event for forbid-1540412040-z7n7t: {kubelet 127.0.0.1} Started: Started container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:12 +0000 UTC - event for forbid: {cronjob-controller } MissingJob: Active job went missing: forbid-1540412040
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:02 +0000 UTC - event for forbid: {cronjob-controller } SuccessfulCreate: Created job forbid-1540412100
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:02 +0000 UTC - event for forbid-1540412100: {job-controller } SuccessfulCreate: Created pod: forbid-1540412100-rq89l
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:02 +0000 UTC - event for forbid-1540412100-rq89l: {default-scheduler } Scheduled: Successfully assigned e2e-tests-cronjob-rjr2m/forbid-1540412100-rq89l to 127.0.0.1
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:06 +0000 UTC - event for forbid-1540412100-rq89l: {kubelet 127.0.0.1} Started: Started container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:06 +0000 UTC - event for forbid-1540412100-rq89l: {kubelet 127.0.0.1} Created: Created container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:06 +0000 UTC - event for forbid-1540412100-rq89l: {kubelet 127.0.0.1} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
The test code is racy because, even with the Forbid concurrency policy, the controller can still create a new job (and pod) for the cronjob once the previous one is deleted. CRI-O is fast at re-creating the pod, so by the time the test code reaches the check, the check fails. The test output is as follows:
[It] should remove from active list jobs that have been deleted
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:192
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Deleting the job
STEP: deleting Job.batch forbid-1540412040 in namespace e2e-tests-cronjob-rjr2m, will wait for the garbage collector to delete the pods
Oct 24 20:14:02.533: INFO: Deleting Job.batch forbid-1540412040 took: 2.699182ms
Oct 24 20:14:02.634: INFO: Terminating Job.batch forbid-1540412040 pods took: 100.223228ms
STEP: Ensuring job was deleted
STEP: Ensuring there are no active jobs in the cronjob
[AfterEach] [sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
It looks clear that by the time we're ensuring that there are no more active jobs, a new job may _already_ have been spun up, making the test flake.
This PR fixes the above by checking only that the _deleted_ job is no longer in the Active list; another job may already be running with a different UID, which is fine for the purpose of the test.
Signed-off-by: Antonio Murdaca <runcom@linux.com>
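The fixed check described above can be sketched roughly as follows. This is a simplified illustration of the idea, not the actual e2e code; the `activeJobRef` type and `deletedJobStillActive` function are hypothetical stand-ins for the CronJob status types used in `test/e2e/apps/cronjob.go`:

```go
package main

import "fmt"

// activeJobRef is a simplified stand-in for the object references kept
// in a CronJob's Status.Active list (hypothetical type for illustration).
type activeJobRef struct {
	Name string
	UID  string
}

// deletedJobStillActive reports whether the job we just deleted is still
// referenced in the cronjob's active list. Checking only for the deleted
// job (instead of requiring the whole list to be empty) tolerates a
// freshly spawned job with a different name/UID, which removes the race.
func deletedJobStillActive(active []activeJobRef, deletedName string) bool {
	for _, ref := range active {
		if ref.Name == deletedName {
			return true
		}
	}
	return false
}

func main() {
	// A new job (forbid-1540412100) may already be active; that is fine,
	// as long as the deleted one (forbid-1540412040) is gone.
	active := []activeJobRef{{Name: "forbid-1540412100", UID: "new-uid"}}
	fmt.Println(deletedJobStillActive(active, "forbid-1540412040"))
}
```

With the old "active list must be empty" assertion, the fresh job above would have failed the test; the name-scoped check does not.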
Perform bootstrapping in the background when client cert rotation is on, enabling static pods to start before a control plane is reachable.
Force-pushed from c5e2820 to c94ff00
Author
SGTM. I've fixed the unit tests, so this is currently green in units. I'll leave it open until I get reasonable proof it's working as it should in origin.
The following commits I dropped and they need careful examination:

- UPSTREAM: <carry>: patch in a non-standard location for apiservices @deads2k
- UPSTREAM: <drop>: make RootFsInfo error non-fatal on start @sjenning

Additionally, do we still need these commits:

- UPSTREAM: <drop>: hack to "fix" period problem. @sjenning @deads2k
- UPSTREAM: <carry>: coerce string->int, empty object -> slice for backwards compatibility @deads2k

The full pick list: https://docs.google.com/spreadsheets/d/1xi5SNL96wqBlIpuIB7d4vRlhBNawhoe2-MWDZMbL6Ig/edit?usp=sharing
@openshift/sig-master
@deads2k
@smarterclayton for c8921588f8068ca61e9703da7782fd5b95452080