add a clusterquota reconciliation controller #9658
openshift-bot merged 2 commits into openshift:master from
Conversation
pkg/quota/api/register.go
Outdated
    &ClusterResourceQuotaList{},
    &kapi.DeleteOptions{},
    &kapi.ListOptions{},
Why is ListOptions listed twice here?
Also shouldn't kapi objects be registered upstream?
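For context, here is a minimal sketch of the registration this hunk belongs to, written against the Kubernetes conventions of that era (the pre-apimachinery pkg/runtime and pkg/api/unversioned packages). The group/version value and helper name are illustrative, and ClusterResourceQuota/ClusterResourceQuotaList are assumed to be defined elsewhere in the package; this is not the PR's exact file.

```go
package api

import (
	kapi "k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/api/unversioned"
	"k8s.io/kubernetes/pkg/runtime"
)

// SchemeGroupVersion is the group/version the internal quota types register under.
var SchemeGroupVersion = unversioned.GroupVersion{Group: "", Version: runtime.APIVersionInternal}

// addKnownTypes registers each type exactly once. DeleteOptions and ListOptions
// are kapi types re-registered into this group so that list/delete parameters
// can be decoded; registering ListOptions twice is at best redundant.
func addKnownTypes(scheme *runtime.Scheme) {
	scheme.AddKnownTypes(SchemeGroupVersion,
		&ClusterResourceQuota{},
		&ClusterResourceQuotaList{},
		&kapi.DeleteOptions{},
		&kapi.ListOptions{},
	)
}
```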
Updated, test added, ready for review. [test]

#9701 re[test]

@liggitt ptal
pkg/client/clusteresourcequota.go
Outdated
    return
    }

    // Update updates an existing ClusterResourceQuota

    kapi.Kind("ReplicationController"),
    kapi.Kind("PersistentVolumeClaim"),
    kapi.Kind("Secret"),
    kapi.Kind("ConfigMap"),
Are these all we can replenish? Image streams? Deployment configs?

> Are these all we can replenish? Image streams? Deployment configs?

For now. There's an item on the card: https://trello.com/c/qJUVjQnj/713-13-multi-project-quota

I will add a utility method upstream to pull this list so we can pull from a common set?

> I will add a utility method upstream to pull this list so we can pull from a common set?

It's more structurally flawed than that. The choice should be based on GroupResource, and that set needs to be built from an intersection with the enabled resources. It doesn't actually have to be limited to what this server hosts, since it could observe changes in federated servers.
In addition, we need to extend our InformerFactory to transparently handle generic listener requests to properly feed this beast.
We should probably restructure the replenishment controller to work as a listener/notifier, and upstream the bucketer I built here so it can handle partial updates.
I haven't prioritized the list of TODOs, but it's a significant endeavor.
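As an illustration of the GroupResource-based approach described above, here is a hedged sketch (all names hypothetical, not the PR's code) of building the replenishment set as an intersection with whatever resources the server actually serves:

```go
package quota

import "k8s.io/kubernetes/pkg/api/unversioned"

// candidateReplenishmentResources is the candidate set, expressed as
// GroupResources rather than Kinds so it is not tied to a single API group.
var candidateReplenishmentResources = []unversioned.GroupResource{
	{Resource: "pods"},
	{Resource: "services"},
	{Resource: "replicationcontrollers"},
	{Resource: "persistentvolumeclaims"},
	{Resource: "secrets"},
	{Resource: "configmaps"},
}

// replenishmentResources intersects the candidate set with whatever the server
// (or an observed federated server) actually has enabled, so the controller
// never tries to watch a resource that cannot be served.
func replenishmentResources(enabled map[unversioned.GroupResource]bool) []unversioned.GroupResource {
	result := []unversioned.GroupResource{}
	for _, gr := range candidateReplenishmentResources {
		if enabled[gr] {
			result = append(result, gr)
		}
	}
	return result
}
```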
I left some comments, but @derekwaynecarr or @liggitt should have a look as well.
    }

    // MappingChangeListener is notified of changes to the mapping. It must not block.
    type MappingChangeListener interface {

The listener is new; the rest of this is a straight move. Don't fear.
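For readers skimming the move, here is a rough sketch of how such a listener tends to be wired up. The method set and the registry below are illustrative, not necessarily the interface body in this PR:

```go
package clusterquotamapping

import "sync"

// MappingChangeListener is notified of changes to the quota/namespace mapping.
// Implementations must not block: notifications are delivered synchronously
// from the mapper's update path.
type MappingChangeListener interface {
	AddMapping(quotaName, namespaceName string)
	RemoveMapping(quotaName, namespaceName string)
}

// listenerRegistry fans a mapping change out to every registered listener.
type listenerRegistry struct {
	lock      sync.RWMutex
	listeners []MappingChangeListener
}

func (r *listenerRegistry) AddListener(l MappingChangeListener) {
	r.lock.Lock()
	defer r.lock.Unlock()
	r.listeners = append(r.listeners, l)
}

func (r *listenerRegistry) notifyAdd(quotaName, namespaceName string) {
	r.lock.RLock()
	defer r.lock.RUnlock()
	for _, l := range r.listeners {
		// Listeners must return quickly; see the interface contract above.
		l.AddMapping(quotaName, namespaceName)
	}
}
```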
@derekwaynecarr @liggitt bump
    m.lock.RUnlock()

    if !selectorMatches(selector, selectorExists, quota) {
        return false, false, true

How do we know the namespace matches? Don't we need to evaluate labelsMatch?

> How do we know the namespace matches? Don't we need to evaluate labelsMatch?

Added.
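To make the point concrete, here is a small sketch (a hypothetical helper, not the PR's actual function) that checks both that a selector exists and that the namespace's labels actually satisfy it:

```go
package clusterquotamapping

import (
	kapi "k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/labels"
)

// namespaceMatches answers the review question directly: knowing that a
// selector exists is not enough; the namespace's current labels must actually
// satisfy it before the namespace is mapped to the quota.
func namespaceMatches(selector labels.Selector, selectorExists bool, ns *kapi.Namespace) bool {
	if !selectorExists || selector == nil {
		return false
	}
	return selector.Matches(labels.Set(ns.Labels))
}
```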
Comments addressed. Did you make it all the way through? This has upstream permission to [merge].
Evaluated for origin test up to cca0e89

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5881/)

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5881/) (Image: devenv-rhel7_4543)

Evaluated for origin merge up to cca0e89
    delete(e.dirtyWork, key)

    if len(work) != 0 {
        e.inProgress[key] = true

Need to set this unconditionally; otherwise any work enqueued before Done() is called will be put into work instead of dirtyWork, and lost when Done() is called and we do e.work[key] = e.dirtyWork[key].
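Here is a self-contained sketch of the failure mode and the fix, with simplified types and illustrative names rather than the PR's own queue: marking the key in progress unconditionally during Get ensures anything added before Done lands in dirtyWork and survives the handoff back into work.

```go
package bucket

import "sync"

// bucket accumulates work per key; while a key is in flight, new items go to
// dirtyWork so they are not clobbered when the in-flight batch completes.
type bucket struct {
	lock       sync.Mutex
	work       map[string][]string
	dirtyWork  map[string][]string
	inProgress map[string]bool
}

func newBucket() *bucket {
	return &bucket{
		work:       map[string][]string{},
		dirtyWork:  map[string][]string{},
		inProgress: map[string]bool{},
	}
}

func (b *bucket) Add(key, item string) {
	b.lock.Lock()
	defer b.lock.Unlock()
	if b.inProgress[key] {
		b.dirtyWork[key] = append(b.dirtyWork[key], item)
		return
	}
	b.work[key] = append(b.work[key], item)
}

// Get hands out the accumulated batch for a key. inProgress is set
// unconditionally, even for an empty batch; otherwise items added before
// Done() would go into work rather than dirtyWork, and a Done() that does
// work[key] = dirtyWork[key] (as in the reviewed code) would drop them.
func (b *bucket) Get(key string) []string {
	b.lock.Lock()
	defer b.lock.Unlock()
	batch := b.work[key]
	delete(b.work, key)
	b.inProgress[key] = true
	return batch
}

// Done moves anything that arrived while the key was in flight back into work.
func (b *bucket) Done(key string) {
	b.lock.Lock()
	defer b.lock.Unlock()
	if dirty := b.dirtyWork[key]; len(dirty) > 0 {
		b.work[key] = dirty
	}
	delete(b.dirtyWork, key)
	delete(b.inProgress, key)
}
```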
Great work, this LGTM.

This adds the controller to do the priming, handle namespaces coming and going, and replenishment.
It's still short on tests, but an algorithmic check is welcome.
@Kargakis
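To sketch what that description amounts to (purely conceptual, illustrative names, built on the plain k8s.io/kubernetes/pkg/util/workqueue of that era rather than the PR's own queue): priming, namespace add/remove, and replenishment all reduce to re-enqueueing the affected quota for recalculation.

```go
package clusterquotareconciliation

import "k8s.io/kubernetes/pkg/util/workqueue"

// reconciler re-totals one cluster quota per queue key.
type reconciler struct {
	queue     *workqueue.Type             // keyed by cluster quota name
	syncQuota func(quotaName string) error // recalculates usage for one quota
}

func newReconciler(sync func(string) error) *reconciler {
	return &reconciler{queue: workqueue.New(), syncQuota: sync}
}

// prime enqueues every known quota once at startup.
func (c *reconciler) prime(quotaNames []string) {
	for _, name := range quotaNames {
		c.queue.Add(name)
	}
}

// quotaAffected is called when a matching namespace appears or disappears, or
// when a replenishment event fires for a tracked resource.
func (c *reconciler) quotaAffected(quotaName string) {
	c.queue.Add(quotaName)
}

func (c *reconciler) worker() {
	for {
		key, quit := c.queue.Get()
		if quit {
			return
		}
		if err := c.syncQuota(key.(string)); err != nil {
			c.queue.Add(key) // requeue on failure
		}
		c.queue.Done(key)
	}
}
```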