
OpenShift End-to-End. Day 0, Day 1 & Day 2

OCP 4 Plan and Deploy

OCP 4 Overview


Three New Functionalities

  1. Self-Managing Platform
  2. Application Lifecycle Management (OLM):
    • OLM Operator:
      • Responsible for deploying applications defined by ClusterServiceVersion (CSV) manifest.
      • Not concerned with the creation of the required resources; users can choose to manually create these resources using the CLI, or users can choose to create these resources using the Catalog Operator.
    • Catalog Operator:
      • Responsible for resolving and installing CSVs and the required resources they specify. It is also responsible for watching CatalogSources for updates to packages in channels and upgrading them (optionally automatically) to the latest available versions.
      • A user that wishes to track a package in a channel creates a Subscription resource configuring the desired package, channel, and the CatalogSource from which to pull updates. When updates are found, an appropriate InstallPlan is written into the namespace on behalf of the user.
  3. Automated Infrastructure Management (Over-The-Air Updates)
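The Subscription flow described above can be sketched as a CR; the package, channel, and namespace names here are illustrative:

```yaml
# Hypothetical Subscription tracking the "stable" channel of a package.
# The Catalog Operator resolves it and writes an InstallPlan into the namespace.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-subscription
  namespace: operators
spec:
  name: etcd                            # package name in the catalog
  channel: stable                       # channel to track for updates
  source: community-operators           # CatalogSource to pull updates from
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic        # or Manual
```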


New Technical Components

  • New Installer:
  • Storage: Cloud-integrated storage capability, used by default via the OCS Operator (Red Hat)
  • Operators End-To-End!: responsible for reconciling the system to the desired state
    • Cluster configuration kept as API objects that ease its maintenance (“everything-as-code” approach):
      • Every component is configured with Custom Resources (CR) that are processed by operators.
      • No more painful upgrades and synchronization among multiple nodes and no more configuration drift.
    • List of operators that configure cluster components (API objects):
      • API server
      • Nodes via Machine API
      • Ingress
      • Internal DNS
      • Logging (EFK) and Monitoring (Prometheus)
      • Sample applications
      • Networking
      • Internal Registry
      • OAuth (and authentication in general)
      • etc
  • At the Node Level:
    • RHEL CoreOS is the result of merging CoreOS Container Linux with Red Hat Atomic Host functionality and is currently the only supported OS for hosting OpenShift 4.
    • Node provisioning with ignition, which came with CoreOS Container Linux
    • Atomic host updates with rpm-ostree
    • CRI-O as a container runtime
    • SELinux enabled by default
  • Machine API: Provisioning of nodes. Abstraction mechanism added (API objects to declaratively manage the cluster):
    • Based on Kubernetes Cluster API project
    • Provides a new set of machine resources:
      • Machine
      • Machine Deployment
      • MachineSet:
        • easily distributes your nodes among different Availability Zones
        • manages multiple node pools (e.g. a pool for testing, a pool for machine learning with GPUs attached, etc.)
  • Everything “just another pod”
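The MachineSet resource mentioned above can be sketched as follows; the providerSpec is cloud-specific and abbreviated here, and all names and AWS fields are illustrative:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: worker-us-east-1a            # hypothetical: one MachineSet per AZ
  namespace: openshift-machine-api
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: worker-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: worker-us-east-1a
    spec:
      providerSpec:
        value:                       # cloud-provider-specific settings (abbreviated)
          instanceType: m5.large
          placement:
            availabilityZone: us-east-1a
```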

Installation and Cluster Autoscaler

  • New installer openshift-install tool, replacement for the old Ansible scripts.
  • Roughly 40 minutes on AWS; the installer uses Terraform under the hood.
  • The whole process can be done in one command and requires minimal infrastructure knowledge (IPI): openshift-install create cluster




  • 2 installation patterns:
    1. Installer Provisioned Infrastructure (IPI): On supported platforms, the installer is capable of provisioning the underlying infrastructure for the cluster. The installer programmatically creates all portions of the networking, machines, and operating systems required to support the cluster. Think of it as best-practice reference architecture implemented in code.  It is recommended that most users make use of this functionality to avoid having to provision their own infrastructure.  The installer will create and destroy the infrastructure components it needs to be successful over the life of the cluster.
    2. User Provisioned Infrastructure (UPI): For other platforms or in scenarios where installer provisioned infrastructure would be incompatible, the installer can stop short of creating the infrastructure, and allow the platform administrator to provision their own using the cluster assets generated by the install tool. Once the infrastructure has been created, OpenShift 4 is installed, maintaining its ability to support automated operations and over-the-air platform updates.
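For IPI, the installer is driven by an install-config.yaml; a minimal sketch for AWS, where the domain, cluster name, and region are placeholders:

```yaml
apiVersion: v1
baseDomain: example.com              # placeholder base DNS domain
metadata:
  name: mycluster                    # placeholder cluster name
platform:
  aws:
    region: us-east-1
pullSecret: '<pull secret from cloud.redhat.com>'
sshKey: '<ssh public key>'
```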



Cluster Autoscaler Operator

  • Adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments
  • Increases the size of the cluster when there are pods that failed to schedule on any of the current nodes due to insufficient resources or when another node is necessary to meet deployment needs. The ClusterAutoscaler does not increase the cluster resources beyond the limits that you specify.
  • A huge improvement over the manual, error-prone scaling process used in the previous version of OpenShift with RHEL nodes.
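The limits mentioned above are declared on the ClusterAutoscaler resource; a minimal sketch, with illustrative limit values:

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                      # the operator watches this singleton resource
spec:
  resourceLimits:
    maxNodesTotal: 12                # never grow the cluster past this
    cores:
      min: 8
      max: 96
    memory:                          # GiB
      min: 32
      max: 256
  scaleDown:
    enabled: true                    # remove nodes when they are underutilized
```

In practice a MachineAutoscaler per MachineSet is also required, to tell the operator which node pools are allowed to scale.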




  • Core of the platform
  • The hierarchy of operators, with clusterversion at the top, is the single door for configuration changes and is responsible for reconciling the system to the desired state.
  • For example, if you break a critical cluster resource directly, the system automatically recovers itself. 
  • The same operator framework used for cluster maintenance is applied to applications: as a user, you get the SDK, OLM (the lifecycle manager for all operators and their associated services running across the cluster), and an embedded OperatorHub.
  • OLM Architecture
  • Adding Operators to a Cluster (They can be added via CatalogSource)
  • The supported method of using Helm charts with OpenShift is via the Helm Operator
  • View the list of Operators available to the cluster from the OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
NAME                   AGE
amq-streams            14h
packageserver          15h
couchbase-enterprise   14h
mongodb-enterprise     14h
etcd                   14h
myoperator             14h

OCP Operators


  • Developer Catalog
  • Installed Operators
  • OperatorHub (OLM)
  • Operator Management:
    • Operator Catalogs are groups of Operators you can make available on the cluster. They can be added via CatalogSource (i.e. “catalogsource.yaml”). Subscribe and grant a namespace access to use the installed Operators.
    • Operator Subscriptions keep your services up to date by tracking a channel in a package. The approval strategy determines either manual or automatic updates.
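A CatalogSource sketch along the lines of the catalogsource.yaml mentioned above; the catalog name and index image reference are placeholders:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog                   # hypothetical catalog name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example/my-operator-index:latest   # placeholder index image
  displayName: My Operator Catalog
```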

Operator Subscriptions

Certified Operators, OLM Operators and Red Hat Operators

  • Certified Operators, packaged by certified partners:
    • Not provided by Red Hat
    • Supported by Red Hat
    • Deployed via “Package Server” OLM Operator
  • OLM Operators:
    • Packaged by Red Hat
    • “Package Server” OLM Operator includes a CatalogSource provided by Red Hat
  • Red Hat Operators:
    • Packaged by Red Hat
    • Deployed via “Package Server” OLM Operator
  • Community Edition Operators:
    • Deployed by any means
    • Not supported by Red Hat

OCP Certified Operators

Deploy and bind enterprise-grade microservices with Kubernetes Operators

OpenShift Container Storage Operator (OCS)

OCS 3 (OpenShift 3)
  • OpenShift Container Storage based on GlusterFS technology.
  • Not OpenShift 4 compliant: migration tooling will be available to facilitate the move to OCS 4.x (OpenShift Gluster App Migration Tool).
OCS 4 (OpenShift 4)
  • OCS Operator, installed and managed via the Operator Lifecycle Manager (OLM).
  • Tech Stack:
    • Rook (not to be confused with the non-Red-Hat “Rook Ceph” project -> RH ref).
      • Replaces Heketi (OpenShift 3)
      • Uses Red Hat Ceph Storage and Noobaa.
    • Red Hat Ceph Storage
    • Noobaa:
      • Red Hat Multi Cloud Gateway (AWS, Azure, GCP, etc)
      • Asynchronous replication of data between a local Ceph cluster and a cloud provider
      • Deduplication
      • Compression
      • Encryption
  • Backups available in OpenShift 4.2+ (Snapshots + Restore of Volumes)
  • OCS Dashboard in OCS Operator
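A minimal StorageCluster sketch of the kind created by the OCS Operator; device counts, sizes, and the storage class are illustrative:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
  - name: ocs-deviceset              # hypothetical device set name
    count: 3                         # e.g. one OSD device per Availability Zone
    dataPVCTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 512Gi           # illustrative device size
        storageClassName: gp2        # illustrative cloud storage class
        volumeMode: Block
```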

OCS Dashboard

Cluster Network Operator (CNO) & Routers

oc describe clusteroperators/ingress
oc logs --namespace=openshift-ingress-operator deployments/ingress-operator

ServiceMesh Operator


Serverless Operator (Knative)

Monitoring and Observability


  • Integrated Grafana v5.4.3 (deployed by default):
  • Monitoring -> Dashboards
  • Project “openshift-monitoring”


Alerts and Silences

  • Integrated Alertmanager 0.16.2 (deployed by default):
    • Monitoring -> Alerts
    • Monitoring -> Silences
    • Silences temporarily mute alerts based on a set of conditions that you define. Notifications are not sent for alerts that meet the given conditions.
  • Project “openshift-monitoring”

Cluster Logging (EFK)

  • Log Management for Red Hat OpenShift
  • EFK: Elasticsearch + Fluentd + Kibana
  • Cluster Logging EFK not deployed by default
  • As an OpenShift Container Platform cluster administrator, you can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services.
  • The OpenShift Container Platform cluster logging solution requires that you install both the Cluster Logging Operator and the Elasticsearch Operator; there is no use case in OpenShift Container Platform for installing the operators individually. You must install the Elasticsearch Operator using the CLI following the directions below, and you can install the Cluster Logging Operator using the web console or the CLI. Deployment procedure based on CLI + web console:

OCP Release      Elasticsearch   Fluentd   Kibana   EFK deployed by default
OpenShift 3.11                   0.12.43   5.6.13   No
OpenShift 4.1    5.6.16          ?         5.6.16   No
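Once both operators are installed, logging is deployed by creating a ClusterLogging CR; a minimal sketch, with illustrative node counts and replica settings:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                     # the operator expects this exact name
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                   # illustrative; 3 nodes give an HA quorum
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
```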

Build Images. Next-Generation Container Image Building Tools

  • Redesign of how images are built on the platform.
  • Instead of relying on a daemon on the host to manage containers, image creation, and image pushing, we are leveraging Buildah running inside our build pods.
  • This aligns with the general OpenShift 4 theme of making everything “just another pod”
  • A simplified set of build workflows, not dependent on the node host having a specific container runtime available. 
  • Dockerfile builds that worked under OpenShift 3.x will continue to build under OpenShift 4.x, and S2I builds will continue to function as well.
  • The actual BuildConfig API is unchanged, so a BuildConfig from a v3.x cluster can be imported into a v4.x cluster and work without modification.
  • Podman & Buildah for docker users
  • Openshift ImageStreams
  • Openshift 4 image builds
  • Custom image builds with Buildah
  • Rootless podman and NFS
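Since the BuildConfig API is unchanged, a v3.x-style definition like this sketch still works on 4.x; the repository and image names are placeholders:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    git:
      uri: https://github.com/example/myapp.git   # placeholder repository
  strategy:
    sourceStrategy:                  # S2I build; Buildah runs inside the build pod
      from:
        kind: ImageStreamTag
        name: nodejs:latest
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```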


OpenShift Registry and Quay Registry

Local Development Environment

  • For version 3 we have the Container Development Kit (or its open source equivalent for OKD, Minishift), which launches a single-node VM with OpenShift in a few minutes. It is also perfect for testing as part of a CI/CD pipeline.
  • OpenShift 4 on your laptop: there is a working solution for a single-node OpenShift cluster, provided by a new project called CodeReady Containers.
  • Procedure:
crc setup                  # prepare the host (virtualization, networking)
crc start                  # boot the single-node cluster
eval $(crc oc-env)         # set environment variables so oc is on the PATH
oc login -u kubeadmin https://api.crc.testing:6443

OpenShift on Azure

OpenShift Youtube

OpenShift 4 Training

OpenShift 4 Roadmap

Kubevirt Virtual Machine Management on Kubernetes

Networking and Network Policy in OCP4. SDN/CNI plug-ins

ocp4 cni arch

Multiple Networks with SDN/CNI plug-ins. Usage scenarios for an additional network

  • Understanding multiple networks In Kubernetes, container networking is delegated to networking plug-ins that implement the Container Network Interface (CNI). OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. During cluster installation, you configure your default Pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your Pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure Pods that deliver network functionality, such as switching or routing.
  • You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons:
    • Performance: You can send traffic on two different planes in order to manage how much traffic is along each plane.
    • Security: You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers.
  • All of the Pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every Pod has an eth0 interface that is attached to the cluster-wide Pod network. You can view the interfaces for a Pod by using the oc exec -it <pod> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1, net2, ..., netN.
  • To attach additional network interfaces to a Pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a Custom Resource (CR) that has a NetworkAttachmentDefinition type. A CNI configuration inside each of these CRs defines how that interface is created.
  • Demystifying Multus 🌟
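A NetworkAttachmentDefinition sketch for a secondary macvlan interface; the interface name and addresses are illustrative:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "static",
        "addresses": [{ "address": "192.168.10.10/24" }]
      }
    }
```

A Pod requests the extra interface with the annotation `k8s.v1.cni.cncf.io/networks: macvlan-net`, and it then appears in the Pod as net1.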

Istio CNI plug-in

  • Istio CNI plug-in 🌟 Red Hat OpenShift Service Mesh includes CNI plug-in, which provides you with an alternate way to configure application pod networking. The CNI plug-in replaces the init-container network configuration eliminating the need to grant service accounts and projects access to Security Context Constraints (SCCs) with elevated privileges.
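With Red Hat OpenShift Service Mesh, workloads opt into the mesh explicitly via an annotation on the pod template; a sketch, where the Deployment and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        sidecar.istio.io/inject: "true"       # opt this workload into the mesh
    spec:
      containers:
      - name: myapp
        image: quay.io/example/myapp:latest   # placeholder image
```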

Calico CNI Plug-in

Third Party Network Operators with OpenShift

Ingress Controllers in OpenShift using IPI

Storage in OCP 4. OpenShift Container Storage (OCS)

Red Hat Advanced Cluster Management for Kubernetes

OpenShift Kubernetes Engine (OKE)

openshift4 architecture

Red Hat CodeReady Containers. OpenShift 4 on your laptop

OpenShift Hive: Cluster-as-a-Service. Easily provision new PaaS environments for developers

OpenShift 4 Master API Protection in Public Cloud

Backup and Migrate to OpenShift 4

OKD4. OpenShift 4 without enterprise-level support

OpenShift Serverless with Knative

Helm Charts and OpenShift 4

Red Hat Marketplace

Kubestone. Benchmarking Operator for K8s and OpenShift

OpenShift Cost Management

Operators in OCP 4

Quay Container Registry

  • Red Hat Introduces open source Project Quay container registry
  • Red Hat Quay
  • GitHub Quay (OSS)
  • Introducing Red Hat Quay
  • Keep Your Applications Secure With Automatic Rebuilds 🌟
    • OpenShift Container Platform historically has addressed this challenge by using Image Streams. An image stream is an abstraction for referencing container images from within OpenShift, while the referenced images live in an image registry such as the OpenShift internal registry, Quay, or other external registries. Image streams are capable of defining triggers which allow your builds and deployments to be automatically invoked when a new version of an image is available in the backing image registry. This in effect enables rebuilding all images that are based on a particular base image as soon as a new version of the base image is available in the Red Hat container catalog, and therefore updates all images with the latest bug, CVE, and vulnerability fixes delivered in the latest base image. The challenge, however, is that this capability is limited to BuildConfigs in OpenShift and does not allow more complex workflows to be triggered when images are updated in the Red Hat container catalog. Furthermore, it is also limited to the scope of a single cluster and its internal OpenShift registry.
    • Fortunately, though, using Red Hat Quay as a central registry in combination with OpenShift Pipelines enables infinite possibilities in designing sophisticated workflows for ensuring a secure software supply chain and automatically performing any set of actions whenever images are pushed, updated, or security vulnerabilities are discovered in the Red Hat container catalog.
    • In this blog post, we will highlight how Red Hat Quay can be integrated with Tekton pipelines to trigger application rebuilds when images are updated in the Red Hat container catalog. At a high level, the flow will look like this:
  • medium: Securing Containers with Red Hat Quay and Clair — Part I
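The image stream triggers described above are a fragment of a BuildConfig spec; a sketch, with an illustrative base image name:

```yaml
# Rebuild automatically whenever the tracked base image tag is updated
triggers:
- type: ImageChange
  imageChange:
    from:
      kind: ImageStreamTag
      name: ubi8:latest              # illustrative base image
```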

Application Migration Toolkit

Developer Sandbox

OpenShift Topology View

OpenBuilt Platform for the Construction Industry


