{"id":80,"date":"2025-03-23T10:00:40","date_gmt":"2025-03-23T06:00:40","guid":{"rendered":"https:\/\/www.kerloys.com\/?p=80"},"modified":"2025-03-23T10:03:21","modified_gmt":"2025-03-23T06:03:21","slug":"openshift-api-server-understanding-openshift-4-x-api-server-exposure-on-bare-metalopenshift-api-server","status":"publish","type":"post","link":"https:\/\/www.kerloys.com\/index.php\/2025\/03\/23\/openshift-api-server-understanding-openshift-4-x-api-server-exposure-on-bare-metalopenshift-api-server\/","title":{"rendered":"Understanding OpenShift 4.x API Server Exposure on Bare Metal Openshift API server"},"content":{"rendered":"\n<p><\/p>\n\n\n\n<p>Running OpenShift 4.x on bare metal has a number of advantages: you get to maintain control of your own environment without being beholden to a cloud provider\u2019s networking or load-balancing solution. But with that control comes a bit of extra work, especially around how the OpenShift API server is exposed.<\/p>\n\n\n\n<p>In this post, we\u2019ll discuss:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>How the OpenShift API server is bound on each control-plane (master) node.<\/li>\n\n\n\n<li>Load-balancing options for the API server in a bare-metal environment.<\/li>\n\n\n\n<li>The difference between external load balancers, keepalived\/HAProxy, and MetalLB.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">1. How OpenShift 4.x Binds the API Server<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Static Pods with Host Networking<\/h3>\n\n\n\n<p>In Kubernetes, control-plane components like the API server can run as <em>static pods<\/em> on each control-plane node. 
In OpenShift 4.x, the <code>kube-apiserver<\/code> pods use <code>hostNetwork: true<\/code>, which means they bind directly to the host\u2019s network interface\u2014specifically on port <code>6443<\/code> by default.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Location of static pod manifests<\/strong>: These are managed by the cluster\u2019s kube-apiserver operator and typically live in <code>\/etc\/kubernetes\/manifests<\/code> on each master node.<\/li>\n\n\n\n<li><strong>Direct binding<\/strong>: Because these pods use host networking, port <code>6443<\/code> on the master node itself is used. This is not a standard Kubernetes <code>Service<\/code> or <code>NodePort<\/code>; it is bound at the OS level.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Implications<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>There is <em>no<\/em> <code>Service<\/code>, <code>Route<\/code>, or <code>Ingress<\/code> object for the control-plane API endpoint.<\/li>\n\n\n\n<li>The typical Service\/Route-based exposure flow doesn\u2019t apply to these system components; they live outside the usual Kubernetes networking model to ensure reliability and isolation.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">2. Load-Balancing the API Server<\/h2>\n\n\n\n<p>In a production environment, you typically want the API server to be highly available. You accomplish that by putting a load balancer in front of the master nodes, each of which listens on port <code>6443<\/code>. This helps ensure that if one node goes down, the others can still respond to API requests.<\/p>\n\n\n\n<p>Below are three common ways to achieve this on bare metal.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Option A: External Hardware\/Virtual Load Balancer (F5, etc.)<\/h3>\n\n\n\n<p><strong>Overview<\/strong><br>Many on-prem or private datacenter environments already have a load-balancing solution in place\u2014e.g., F5, A10, or Citrix NetScaler appliances. 
If that\u2019s the case, you can simply:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Configure a virtual server that listens on <code>api.&lt;cluster-domain&gt;:6443<\/code>.<\/li>\n\n\n\n<li>Point it to the IP addresses of your OpenShift master nodes on port <code>6443<\/code>.<\/li>\n<\/ol>\n\n\n\n<p><strong>Pros<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Extremely common in enterprise scenarios.<\/li>\n\n\n\n<li>Well-supported by OpenShift documentation and typical best practices.<\/li>\n\n\n\n<li>Often includes advanced features (SSL offloading, health checks, etc.). For the API endpoint itself, though, use plain TCP passthrough rather than TLS termination, since clients authenticate to the API server with certificates.<\/li>\n<\/ul>\n\n\n\n<p><strong>Cons<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>In some cases requires specialized hardware or a licensed VM\/appliance.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Option B: Keepalived + HAProxy on the Master Nodes<\/h3>\n\n\n\n<p><strong>Overview<\/strong><br>If you lack a dedicated external load balancer, you can run a keepalived\/HAProxy setup <em>within<\/em> your cluster\u2019s control-plane nodes themselves. 
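<\/p>\n\n\n\n<p>As a rough sketch (the interface name, IP addresses, and alternate HAProxy port below are illustrative placeholders, not values mandated by OpenShift), the keepalived side of such a setup might look like this:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>vrrp_script chk_haproxy {\n    script \"killall -0 haproxy\"   # verify the local HAProxy is alive\n    interval 2\n}\n\nvrrp_instance API_VIP {\n    state BACKUP\n    interface ens192              # primary NIC (placeholder)\n    virtual_router_id 51\n    priority 100\n    advert_int 1\n    virtual_ipaddress {\n        192.0.2.10\/24             # the VIP behind api.&lt;cluster-domain&gt;\n    }\n    track_script {\n        chk_haproxy               # fail the VIP over if HAProxy dies\n    }\n}<\/code><\/pre>\n\n\n\n<p>with a matching HAProxy configuration on each master, for example:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>frontend api\n    # Alternate port: traffic to the VIP on 6443 is redirected here,\n    # since the API server already owns 6443 on the host.\n    bind *:9443\n    mode tcp\n    default_backend masters\n\nbackend masters\n    mode tcp\n    balance roundrobin\n    server master-0 10.0.0.11:6443 check\n    server master-1 10.0.0.12:6443 check\n    server master-2 10.0.0.13:6443 check<\/code><\/pre>\n\n\n\n<p>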
Typically:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Keepalived manages a floating Virtual IP (VIP).<\/li>\n\n\n\n<li>HAProxy accepts traffic addressed to the VIP on port <code>6443<\/code> and forwards it to the local node or other master nodes. (Because the API server itself already binds <code>6443<\/code> on the host, HAProxy typically listens on an alternate local port, with VIP traffic redirected to it.)<\/li>\n<\/ul>\n\n\n\n<p><strong>Pros<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>No extra hardware or external appliances needed.<\/li>\n\n\n\n<li>Still provides a single endpoint (<code>api.&lt;cluster-domain&gt;:6443<\/code>) that floats among the masters.<\/li>\n<\/ul>\n\n\n\n<p><strong>Cons<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>More complex to configure and maintain.<\/li>\n\n\n\n<li>You\u2019re hosting the load-balancing solution on the same nodes as your control plane, so it\u2019s critical to ensure these components remain stable.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Option C: MetalLB for LoadBalancer Services<\/h3>\n\n\n\n<p><strong>Overview<\/strong><br>MetalLB is an open-source solution that brings \u201ccloud-style\u201d LoadBalancer services to bare-metal Kubernetes clusters. It typically works in Layer 2 (ARP) or BGP mode to announce addresses, allowing you to create a <code>Service<\/code> of <code>type: LoadBalancer<\/code> that obtains a routable IP.<\/p>\n\n\n\n<p><strong>Should You Use It for the API Server?<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>While MetalLB is great for <em>application workloads<\/em> requiring a LoadBalancer IP, it is generally <em>not<\/em> the recommended approach for the cluster\u2019s control-plane traffic in OpenShift 4.x.<\/li>\n\n\n\n<li>The API server is not declared as a standard \u201cservice\u201d in the cluster; instead, it\u2019s a static pod using host networking.<\/li>\n\n\n\n<li>You would need additional customizations to treat the API endpoint like a load-balancer service. 
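For illustration only (the Service name and IP addresses here are hypothetical, and this pattern is not recommended), such a customization would amount to a selector-less <code>Service<\/code> with hand-maintained <code>Endpoints<\/code> pointing at the masters:\n<pre class=\"wp-block-code\"><code>apiVersion: v1\nkind: Service\nmetadata:\n  name: api-external          # hypothetical name\nspec:\n  type: LoadBalancer          # MetalLB would assign the external IP\n  ports:\n  - port: 6443\n    targetPort: 6443\n---\napiVersion: v1\nkind: Endpoints\nmetadata:\n  name: api-external          # must match the Service name\nsubsets:\n- addresses:\n  - ip: 10.0.0.11             # master-0 (placeholder)\n  - ip: 10.0.0.12             # master-1 (placeholder)\n  - ip: 10.0.0.13             # master-2 (placeholder)\n  ports:\n  - port: 6443<\/code><\/pre>\n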
This is a non-standard pattern in OpenShift 4.x, and official documentation typically recommends either an external LB or keepalived\/HAProxy.<\/li>\n<\/ul>\n\n\n\n<p><strong>Pros<\/strong> (for application workloads)<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Provides a simple way to assign external IP addresses to your apps without external hardware.<\/li>\n\n\n\n<li>Lightweight solution that integrates neatly with typical Kubernetes workflows.<\/li>\n<\/ul>\n\n\n\n<p><strong>Cons<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Not officially supported for the API server\u2019s main endpoint.<\/li>\n\n\n\n<li>Missing advanced features you might find in dedicated appliances (SSL termination, advanced health checks, etc.).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">3. Recommended Approaches<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>If You Have an Existing Load Balancer<\/strong>\n<ul class=\"wp-block-list\">\n<li>Point it at your master nodes\u2019 IP addresses, forwarding <code>:6443<\/code> to each node\u2019s <code>:6443<\/code>.<\/li>\n\n\n\n<li>You\u2019ll typically have a DNS entry like <code>api.yourcluster.example.com<\/code> that resolves to the load balancer\u2019s VIP or IP.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>If You Don\u2019t Have One<\/strong>\n<ul class=\"wp-block-list\">\n<li>Consider deploying keepalived + HAProxy on the master nodes. You can designate one floating IP that is managed by keepalived. 
HAProxy on each node can route requests to local or other masters\u2019 API endpoints.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Use MetalLB for App Workloads, Not the Control Plane<\/strong>\n<ul class=\"wp-block-list\">\n<li>If you are on bare metal and need load-balancing for normal application services (e.g., front-end web apps), then MetalLB is a great choice.<\/li>\n\n\n\n<li>However, for the control-plane API, it\u2019s best to stick to the officially recommended approach of an external LB or keepalived\/HAProxy.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>The API server in OpenShift 4.x is bound at the host network level (port 6443) on each control-plane node via static pods, which is different from how typical workloads are exposed. To achieve high availability on bare metal, you need some form of load balancer\u2014commonly an external appliance or keepalived + HAProxy. MetalLB is excellent for exposing standard application workloads via <code>type: LoadBalancer<\/code>, but it isn\u2019t the typical path for OpenShift control-plane traffic.<\/p>\n\n\n\n<p>By understanding these different paths, you can tailor your OpenShift 4.x deployment strategy to match your on-prem infrastructure, making sure your cluster\u2019s API remains accessible, robust, and highly available.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Running OpenShift 4.x on bare metal has a number of advantages: you get to maintain control of your own environment without being beholden to a cloud provider\u2019s networking or load-balancing solution. But with that control comes a bit of extra work, especially around how the OpenShift API server is exposed. 
In this post, we\u2019ll discuss: &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.kerloys.com\/index.php\/2025\/03\/23\/openshift-api-server-understanding-openshift-4-x-api-server-exposure-on-bare-metalopenshift-api-server\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Understanding OpenShift 4.x API Server Exposure on Bare Metal Openshift API server&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21,5],"tags":[],"class_list":["post-80","post","type-post","status-publish","format-standard","hentry","category-openshift","category-technology-networking"],"_links":{"self":[{"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/posts\/80","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/comments?post=80"}],"version-history":[{"count":5,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/posts\/80\/revisions"}],"predecessor-version":[{"id":85,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/posts\/80\/revisions\/85"}],"wp:attachment":[{"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/media?parent=80"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/categories?post=80"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/tags?post=80"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}