{"id":86,"date":"2025-03-23T11:54:22","date_gmt":"2025-03-23T07:54:22","guid":{"rendered":"https:\/\/www.kerloys.com\/?p=86"},"modified":"2025-03-23T11:55:49","modified_gmt":"2025-03-23T07:55:49","slug":"understanding-how-the-openshift-console-is-exposed-on-bare-metal","status":"publish","type":"post","link":"https:\/\/www.kerloys.com\/index.php\/2025\/03\/23\/understanding-how-the-openshift-console-is-exposed-on-bare-metal\/","title":{"rendered":"Understanding How the OpenShift Console Is Exposed on Bare Metal"},"content":{"rendered":"\n<p><\/p>\n\n\n\n<p>If you\u2019ve ever used OpenShift, you\u2019re probably familiar with its feature-rich web console. It\u2019s a central hub for managing workloads, projects, security policies, and more. While the console is easy to access in a typical cloud environment, the mechanics behind exposing it on bare metal are equally interesting. In this article, we\u2019ll explore how OpenShift 4.x (including 4.16) serves and secures the console in a bare-metal setting.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. The Basics: Console vs. API Server<\/h2>\n\n\n\n<p>In OpenShift 4.x, there are two main entry points for cluster interactions:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>API server<\/strong>: Runs on port <code>6443<\/code>, usually exposed by external load balancers or keepalived\/HAProxy in bare-metal environments.<\/li>\n\n\n\n<li><strong>Web console<\/strong>: Typically accessed at port <code>443<\/code> via an OpenShift \u201croute,\u201d backed by the cluster\u2019s router\/ingress infrastructure.<\/li>\n<\/ol>\n\n\n\n<p>The API server uses a special out-of-band mechanism (static pods on master nodes). By contrast, the console takes a path much more familiar to standard Kubernetes applications: it\u2019s served by a deployment, a service, and ultimately a Route object in the <code>openshift-console<\/code> namespace. 
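<\/p>\n\n\n\n<p>If you want to poke at these objects yourself, you can list them with <code>oc<\/code>. The resource names below reflect a default installation and may vary slightly by version:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted has-white-color has-black-background-color has-text-color has-background has-small-font-size\">oc -n openshift-console get deployment,service,route<\/pre>\n\n\n\n<p>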
Let\u2019s focus on that Route-based exposure.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">2. How the Console Is Deployed<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Console Operator<\/h3>\n\n\n\n<p>The console itself is managed by the <strong>OpenShift Console Operator<\/strong>, which:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Deploys the console pods into the <code>openshift-console<\/code> namespace.<\/li>\n\n\n\n<li>Ensures they remain healthy and up-to-date.<\/li>\n\n\n\n<li>Creates the relevant Kubernetes resources (Deployment, Service, and Route) that expose the console to external users.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Where the Pods Run<\/h3>\n\n\n\n<p>By default, the console pods run on worker nodes (though in some topologies, you might have dedicated infrastructure nodes). The important point is that these pods are scheduled like normal Kubernetes workloads.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">3. How the Console Is Exposed<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">The OpenShift Router (Ingress Controller)<\/h3>\n\n\n\n<p>OpenShift comes with a built-in <strong>Ingress Controller<\/strong>\u2014often referred to as the \u201crouter.\u201d It\u2019s usually an HAProxy-based router deployed on worker (or infra) nodes. By default, it will listen on:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>HTTP port <code>80<\/code><\/strong><\/li>\n\n\n\n<li><strong>HTTPS port <code>443<\/code><\/strong><\/li>\n<\/ul>\n\n\n\n<p>When you create a Route, the router matches the host name in the incoming request and forwards traffic to the corresponding service. In the console\u2019s case, that route is typically named <code>console<\/code> in the <code>openshift-console<\/code> namespace.<\/p>\n\n\n\n<p><strong>Typical Hostname<\/strong><br>During installation, OpenShift configures the default \u201capps\u201d domain. 
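<\/p>\n\n\n\n<p>You can read that domain back from the cluster-wide Ingress config at any time (the resource name <code>cluster<\/code> is the default singleton):<\/p>\n\n\n\n<pre class=\"wp-block-preformatted has-white-color has-black-background-color has-text-color has-background has-small-font-size\">oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}'<\/pre>\n\n\n\n<p>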
For instance:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted has-white-color has-black-background-color has-text-color has-background has-small-font-size\">console-openshift-console.apps.&lt;cluster-domain&gt;<\/pre>\n\n\n\n<p>So when you browse to, say, <code>https:\/\/console-openshift-console.apps.mycluster.example.com<\/code>, your request hits the router, which looks for the matching route and then forwards you to the console service.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Route Object<\/h3>\n\n\n\n<p>OpenShift 4.x uses the Route resource to direct external traffic to an internal service. You can find the console route by running:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted has-white-color has-black-background-color has-text-color has-background has-small-font-size\">oc get route console -n openshift-console<\/pre>\n\n\n\n<p>You\u2019ll usually see something like:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted has-white-color has-black-background-color has-text-color has-background\">NAME      HOST\/PORT                                                   PATH   SERVICES    PORT   TERMINATION          WILDCARD<br>console   console-openshift-console.apps.mycluster.example.com               console     https  reencrypt\/Redirect   None<br><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Service<\/strong>: The route points to the <code>console<\/code> service in the <code>openshift-console<\/code> namespace.<\/li>\n\n\n\n<li><strong>Reencrypt Termination<\/strong>: The router terminates the external TLS session and re-encrypts to the console pod\u2019s own serving certificate, so traffic stays encrypted inside the cluster as well (the <code>Redirect<\/code> policy bounces plain HTTP to HTTPS).<\/li>\n\n\n\n<li><strong>Host<\/strong>: The domain you\u2019ll use to access the console externally.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">4. 
Traffic Flow on Bare Metal<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">External Access<\/h3>\n\n\n\n<p>On bare metal, you typically have one of the following configurations:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Direct Node Access<\/strong>: If each worker node has a publicly (or at least internally routable) IP, you create a wildcard DNS record (or direct DNS records) that point to those node IPs (or to a load balancer fronting them).<\/li>\n\n\n\n<li><strong>External Load Balancer<\/strong>: You can place an external L4 or L7 load balancer in front of the worker nodes\u2019 port 443, distributing traffic across the router pods. This approach mirrors the cloud LB approach but uses an on-prem solution (F5, Netscaler, etc.).<\/li>\n<\/ol>\n\n\n\n<p>Either way, the router\u2019s service IP on each node is listening on port 443. By default, the <strong>Ingress Operator<\/strong> ensures that all router pods share a common DNS domain like <code>*.apps.&lt;cluster-domain&gt;<\/code>. This means that any Route you create automatically becomes externally accessible, assuming your DNS points to the router\u2019s IP or load balancer VIP.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">TLS Certificates<\/h3>\n\n\n\n<p>By default, the console route has a certificate created and managed by the cluster. You can optionally configure a custom TLS certificate for the router if you want to serve the console (and all other routes) with your own wildcard certificate.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">5. Customizing the Console Domain or Certificate<\/h2>\n\n\n\n<p>You might want to customize how users access the console\u2014maybe you don\u2019t like the default subdomain or you want to serve it at a corporate domain. 
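<\/p>\n\n\n\n<p>For the router-level certificate swap mentioned in the previous section, the usual sequence looks roughly like this. This is a sketch: the secret name <code>custom-apps-default<\/code> is arbitrary, and <code>wildcard.crt<\/code>\/<code>wildcard.key<\/code> are assumed to be your own wildcard certificate and key files:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted has-white-color has-black-background-color has-text-color has-background has-small-font-size\">oc -n openshift-ingress create secret tls custom-apps-default --cert=wildcard.crt --key=wildcard.key<br>oc -n openshift-ingress-operator patch ingresscontroller\/default --type=merge --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-apps-default\"}}}'<\/pre>\n\n\n\n<p>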
There are a couple of ways:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Change the <code>apps<\/code> domain<\/strong>: During installation, you can specify a custom domain.<\/li>\n\n\n\n<li><strong>Edit the Console Route<\/strong>: You can change the route\u2019s host name, but you must ensure DNS for that host name points to your router\u2019s public IP.<\/li>\n\n\n\n<li><strong>Configure a Custom Cert<\/strong>: If you have a wildcard certificate for <code>mycompany.com<\/code>, you can apply it at the router level, so the console route and all other routes share the same certificate authority.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">6. Scaling and Availability<\/h2>\n\n\n\n<p>Since the console runs as a standard Deployment, you can scale it up (e.g., set <code>replicas: 3<\/code>) if you anticipate heavy usage. The router itself is typically deployed on multiple nodes for high availability\u2014ensuring that even if one node goes down, the router remains functional, and your console remains accessible.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">7. How This Differs From the API Server<\/h2>\n\n\n\n<p>One point of confusion is that both the API server and the console run in the cluster\u2014so why is the API server not also behind a Route?<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>API Server<\/strong>: Runs as static pods with <code>hostNetwork: true<\/code> on each master node, typically exposed on port <code>6443<\/code>. It\u2019s <em>not<\/em> a normal deployment and doesn\u2019t rely on the cluster\u2019s router. 
Instead, it usually sits behind a separate load balancer (external or keepalived\/HAProxy).<\/li>\n\n\n\n<li><strong>Console<\/strong>: A normal deployment plus a Route, served by the ingress router on port <code>443<\/code>.<\/li>\n<\/ul>\n\n\n\n<p>So while the console takes advantage of standard Kubernetes networking patterns, the API server intentionally bypasses them for isolation, reliability, and the ability to run even if cluster networking is partially down.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">8. Frequently Asked Questions<\/h2>\n\n\n\n<p><strong>Q: Can I use MetalLB to expose the console on a LoadBalancer-type service?<\/strong><br>A: You technically <em>could<\/em> set up a LoadBalancer service if you had MetalLB. However, the standard approach in OpenShift is to rely on the built-in router for console traffic. The console route is automatically configured, and the router takes care of HTTPS termination and routing.<\/p>\n\n\n\n<p><strong>Q: Do I need a separate load balancer for the console traffic?<\/strong><br>A: If your bare-metal nodes themselves are routable (for example, each worker node has a valid IP and your DNS points <code>console-openshift-console.apps.mycluster.example.com<\/code> to those nodes), then you may not need an additional LB. However, some organizations prefer to place a load balancer in front of all worker nodes for consistency, health checks, and easier SSL management.<\/p>\n\n\n\n<p><strong>Q: How do I get a custom domain to work with the console?<\/strong><br>A: You can edit the route\u2019s hostname or specify a custom domain in your Ingress configuration. Then, point DNS for that new domain (e.g. <code>console.internal.mycompany.com<\/code>) to the external IP(s) of your router or your load balancer. 
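<\/p>\n\n\n\n<p>On current 4.x releases, the supported way to do this declaratively is the <code>componentRoutes<\/code> stanza in the cluster Ingress config. A sketch, where the hostname and secret name are examples (the TLS secret must live in the <code>openshift-config<\/code> namespace):<\/p>\n\n\n\n<pre class=\"wp-block-preformatted has-white-color has-black-background-color has-text-color has-background has-small-font-size\">apiVersion: config.openshift.io\/v1<br>kind: Ingress<br>metadata:<br>  name: cluster<br>spec:<br>  componentRoutes:<br>    - name: console<br>      namespace: openshift-console<br>      hostname: console.internal.mycompany.com<br>      servingCertKeyPairSecret:<br>        name: console-custom-tls<\/pre>\n\n\n\n<p>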
Make sure TLS certificates match if you\u2019re providing your own certificate.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>In OpenShift 4.x, the <strong>web console<\/strong> is exposed via a standard Kubernetes Route and served by the built-in router on port 443. The <strong>Console Operator<\/strong> takes care of deploying and managing the console pods, while the <strong>Ingress Operator<\/strong> ensures a default router is up and running. On bare metal, the key to making the console accessible is to ensure your DNS points at the router\u2019s external interface\u2014whether that\u2019s a dedicated IP on each worker node or an external load balancer VIP.<\/p>\n\n\n\n<p>By understanding these mechanics, you can customize the console domain, certificate, and scaling strategy to best fit your environment. And once your console is online, you\u2019ll have the full power of the OpenShift UI at your fingertips\u2014no matter where your cluster happens to be running!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>If you\u2019ve ever used OpenShift, you\u2019re probably familiar with its feature-rich web console. It\u2019s a central hub for managing workloads, projects, security policies, and more. While the console is easy to access in a typical cloud environment, the mechanics behind exposing it on bare metal are equally interesting. 
In this article, we\u2019ll explore how OpenShift &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.kerloys.com\/index.php\/2025\/03\/23\/understanding-how-the-openshift-console-is-exposed-on-bare-metal\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Understanding How the OpenShift Console Is Exposed on Bare Metal&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[],"class_list":["post-86","post","type-post","status-publish","format-standard","hentry","category-openshift"],"_links":{"self":[{"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/posts\/86","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/comments?post=86"}],"version-history":[{"count":4,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/posts\/86\/revisions"}],"predecessor-version":[{"id":90,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/posts\/86\/revisions\/90"}],"wp:attachment":[{"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/media?parent=86"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/categories?post=86"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kerloys.com\/index.php\/wp-json\/wp\/v2\/tags?post=86"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}