OpenShift

What is OpenShift?

OpenShift is a Kubernetes-based enterprise container platform developed by Red Hat.
It provides everything you need to build, deploy, manage, and scale applications using containers.

Think of OpenShift as:

  • Kubernetes + more automation + built-in security + developer tools + enterprise support.


Key Components of OpenShift

1. Kubernetes (Core Orchestrator)

OpenShift is built on top of Kubernetes and includes:

  • Pod & container orchestration

  • Service discovery

  • Auto-scaling

  • Load balancing

2. OpenShift API Server

  • Central control point

  • Manages all cluster operations

3. OpenShift Container Registry (OCR)

  • Built-in Docker-compatible registry

  • Stores images internally in a secure manner

4. OpenShift Router

  • Based on HAProxy

  • Handles external traffic

  • Supports routes, TLS, sticky sessions, etc.

5. Operators

  • Automate lifecycle of apps & infrastructure (install → upgrade → manage)

  • OpenShift uses OperatorHub to provide many certified operators

6. Developer Tools

  • Source-to-Image (S2I): build apps from source code

  • Dev Spaces (formerly CodeReady Workspaces)

  • Pipelines (Tekton-based CI/CD)

  • GitOps (ArgoCD)


Types of OpenShift Platforms

1. OpenShift Container Platform (OCP)

  • Self-managed version

  • Install on your own infrastructure (on-prem, private cloud, bare-metal)

2. OpenShift Online

  • Fully managed SaaS offering by Red Hat

3. OpenShift Dedicated

  • Customer-specific cluster

  • Managed by Red Hat on AWS or GCP

4. ROSA (Red Hat OpenShift on AWS)

  • Joint Red Hat–AWS managed service

5. ARO (Azure Red Hat OpenShift)

  • Joint Red Hat–Microsoft managed platform


Architecture Overview

OpenShift has 3 major layers:

1. Master/Control Plane

  • API Server

  • etcd

  • Scheduler

  • Controller Manager

  • Machine API

2. Worker Nodes

  • Run containers/pods

  • Include:

    • CRI-O or Docker runtime

    • Kubelet

    • Node services

3. Services Layer

  • Monitoring (Prometheus)

  • Logging / EFK stack

  • Image registry

  • Networking (OpenShift SDN / OVN-Kubernetes)


Key Features of OpenShift

Enterprise Security

  • Role-Based Access Control (RBAC)

  • Image scanning (Clair)

  • Network policies

  • Security Context Constraints (SCC)

Built-in CI/CD

  • Tekton pipelines

  • ArgoCD for GitOps

Developer-Friendly

  • Web console + dashboard

  • S2I build system

  • UI-based deployment

  • Integrated logging & monitoring

Autoscaling

  • Horizontal Pod Autoscaler

  • Cluster autoscaler

  • Machine autoscaler

Multi-Cloud + Hybrid Support

  • Deploy anywhere

  • Consistent experience across environments


How Deployment Works in OpenShift

  1. Developer pushes code to Git repo

  2. OpenShift pipeline triggers build

  3. Build creates Docker image / S2I image

  4. Image is stored in internal registry

  5. DeploymentConfig or Deployment creates pods

  6. Router exposes app through a public route
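As a small illustration of step 6, the router assigns each exposed service a default hostname following the pattern <service>-<project>.<apps-domain>. A sketch (the service and project names are illustrative, not from a real cluster; the actual hostname for your app appears in the output of oc get routes):

```shell
# Illustrative only: how OpenShift composes a default route hostname
SERVICE="myapp"
PROJECT="demo"
APPS_DOMAIN="apps-crc.testing"   # CRC's default apps domain
echo "http://${SERVICE}-${PROJECT}.${APPS_DOMAIN}"
```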


Command Line Tools

1. oc CLI (OpenShift Client)

Used to interact with OpenShift, similar to kubectl but with extra features:

oc login
oc new-project
oc new-app
oc get pods
oc logs
oc expose service/myapp

2. kubectl

Also works because OpenShift is Kubernetes under the hood.


OpenShift vs Kubernetes

Feature         | Kubernetes           | OpenShift
Installation    | Complex              | Automated installer
Security        | Basic                | Strict, enterprise-grade
UI Dashboard    | Basic                | Advanced web console
Built-in CI/CD  | No                   | Yes (Tekton + ArgoCD)
Image registry  | External             | Built-in
Multi-tenancy   | Limited              | Strong security & isolation
Developer tools | Minimal              | Strong developer experience


Security Enhancements in OpenShift

  • SCCs (Security Context Constraints)

  • Enforces non-root containers

  • Audit logs

  • TLS everywhere

  • Vulnerability scanning


Use Cases of OpenShift

  • Microservices architecture

  • Modernizing legacy applications

  • Hybrid cloud deployments

  • Banking & FinTech applications

  • Telecom workloads

  • Enterprise CI/CD pipelines


Advantages

  • Enterprise-ready

  • Highly secure

  • Great for large teams

  • Rich developer ecosystem


Disadvantages

  • Cost is high

  • Complex for small projects

  • Steeper learning curve than basic Kubernetes



What Is OpenShift Local?

OpenShift Local is a single-node OpenShift cluster (control plane and worker combined inside one VM).
It is used for:

  • Local learning

  • Testing

  • Small development

  • Proof-of-concept






Setup OpenShift


Enable Hyper-V on Windows 10/11 Home Edition

Option 1: Using PowerShell (Admin Mode)
Run this in PowerShell (as Administrator):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

Restart your PC.

Option 2: Enable using reference script

Reference: https://mylemans.online/posts/EnableHyperv-Windows11-Home/

Cross-check: Go to → Control Panel → Programs → Turn Windows features on or off → Ensure Hyper-V is checked.


Step 1 : Create Red Hat Developer Account

Go to: https://developers.redhat.com/register
Sign up (free) or log in with an existing Red Hat account.

You’ll need this account to download OpenShift Local and your Pull Secret.


Step 2 : Download and Install OpenShift Local (CRC)

Visit: https://developers.redhat.com/products/openshift-local/overview

Download: crc-windows-amd64.msi and install it.

Verify installation:

crc version

You should see version output.

Download Pull Secret

Go to: https://cloud.redhat.com/openshift/install/pull-secret
Save it somewhere: D:\Software\crc\pull-secret.txt


Step 3 : Setup the OpenShift Cluster

Run in CMD:

crc setup

This step:

  • Prepares virtualization & networking

  • Validates system

  • Downloads required drivers

  • Sets up image cache


Step 4 : Start the OpenShift Cluster

Run:

crc start --pull-secret-file D:\Software\crc\pull-secret.txt

You’ll see output like:

👉 Save these credentials safely.

This step will:

  • Start a VM in Hyper-V

  • Create the OpenShift cluster

  • Configure kubeadmin login

  • Set up networking

Time required: 10–20 minutes.


Step 5 : Login to OpenShift Cluster

CLI Login

Run:

crc oc-env

👉 Copy the output and run it (it sets up the oc CLI path).

Login:

oc login -u kubeadmin -p <password>

(Password was shown during crc start.)


Web Console Login

Open browser →
https://console-openshift-console.apps-crc.testing/

Login:

  • Username: kubeadmin

  • Password: shown during crc start

You now have a running local OpenShift cluster.



OpenShift Local Is Now Ready!

You can:

  • Create projects

  • Deploy apps

  • Use S2I

  • Test routes

  • Try pipelines

  • Explore Operators


How to fully reset your CRC/OpenShift Local cluster

1. Stop the CRC VM: crc stop

2. Delete the CRC VM completely: crc delete
👉 This clears the entire OpenShift cluster.

3. Delete the CRC cache + configuration (optional but recommended) by removing the following folders manually:

  • C:\Users\<your-username>\.crc\
  • C:\Users\<your-username>\.kube\

4. Start CRC fresh: crc start --pull-secret-file D:\Software\crc\pull-secret.txt
👉 This creates a brand-new OpenShift cluster.

5. Log in again (fresh credentials):
Get credentials: crc console --credentials
Then log in: oc login -u kubeadmin -p <password>

6. Verify the API server: oc whoami
You should see kubeadmin as the output.
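The reset steps above can be sketched as one script. This is a hedged sketch: it assumes crc and oc are on the PATH, uses the pull-secret path from earlier in this guide, and is guarded so it does nothing on machines without CRC installed.

```shell
# Sketch of the full reset sequence (steps 1-4 above); no-op without crc
PULL_SECRET="D:\Software\crc\pull-secret.txt"
if command -v crc >/dev/null 2>&1; then
  crc stop
  crc delete --force                    # wipes the entire cluster VM
  rm -rf "$HOME/.crc" "$HOME/.kube"     # optional cache/config cleanup
  crc start --pull-secret-file "$PULL_SECRET"
  crc console --credentials             # shows the fresh kubeadmin password
fi
echo "reset sequence finished"
```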






Deploy a Spring Boot Application 

(Podman Containerized Application)

on OpenShift


Step 1 : Prepare Your App & Build Podman Image

  1. Build the Spring Boot JAR:

        mvn clean install

  2. Update your application.properties or application.yml:

spring.data.mongodb.uri=mongodb://10.107.6.110:27017/truck_lease_service_db

(Use your system IP from ipconfig.)

  3. Build the Podman image inside the project folder (where your Containerfile is):

        podman build -t springboot-mongodb-example .

  4. Verify the image:

        podman images
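If you don't yet have a Containerfile in the project folder, a minimal sketch is below. The JAR filename is an assumption; match it to whatever mvn clean install produces under target/.

```dockerfile
# Minimal Containerfile sketch (JAR name is illustrative)
FROM openjdk:17-jdk-slim
WORKDIR /app
COPY target/truck-lease-service.jar app.jar
EXPOSE 8080
CMD ["java", "-jar", "app.jar"]
```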


Step 2 : Login Podman to OpenShift Internal Registry

Make sure the CRC bin directory (e.g. C:\Users\<your-username>\.crc\bin) is on your PATH.

In PowerShell, get an OpenShift token:

$TOKEN = oc whoami -t

Login Podman:

podman login --tls-verify=false -u kubeadmin -p $TOKEN default-route-openshift-image-registry.apps-crc.testing


Step 3 : Tag and Push Your Image to OpenShift

Create project:

oc new-project springboot-demo

Tag your image:

podman tag springboot-mongodb-example:latest default-route-openshift-image-registry.apps-crc.testing:443/springboot-demo/springboot-mongodb-example:latest

👉 Note: Due to firewall restrictions on my local system, I was unable to push the image to the local OpenShift cluster’s image registry using the following step. As an alternative, the Podman image can also be pushed to Docker Hub and then deployed and run in the local OpenShift cluster.

Push the image:

podman push --tls-verify=false default-route-openshift-image-registry.apps-crc.testing:443/springboot-demo/springboot-mongodb-example:latest
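The long reference used in the tag and push commands is just four parts concatenated: the registry route, the project (namespace), the image name, and the tag. A sketch:

```shell
# How the internal-registry image reference above is composed
REGISTRY="default-route-openshift-image-registry.apps-crc.testing:443"
PROJECT="springboot-demo"
IMAGE="springboot-mongodb-example"
TAG="latest"
echo "${REGISTRY}/${PROJECT}/${IMAGE}:${TAG}"
```

Pushing to a different project only changes the PROJECT segment; the registry route stays the same for the whole cluster.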

Step 4 : Deploy Application in OpenShift

Deploy:

oc new-app springboot-mongodb-example --image-stream=springboot-mongodb-example:latest
oc expose svc/springboot-mongodb-example

Check the route:

oc get routes

Output example:

springboot-mongodb-example springboot-mongodb-example-springboot-demo.apps-crc.testing

Open:

http://springboot-mongodb-example-springboot-demo.apps-crc.testing/api/trucks

Your Spring Boot app is now deployed and accessible!


Step 5 : Verify Logs & Connectivity

Logs:

oc logs -f deployment/springboot-mongodb-example

Verify MongoDB connectivity:

oc rsh deployment/springboot-mongodb-example
nc -zv 10.107.6.110 27017

If you see Connection succeeded! → MongoDB is reachable.


Step 6 : Access Application Locally

Browser or CLI:

curl http://springboot-mongodb-example-springboot-demo.apps-crc.testing/api/trucks

You should receive your API output.


Summary

Step | Action | Command/URL
1 | Register Red Hat account | https://developers.redhat.com/register
2 | Download and install CRC | https://developers.redhat.com/products/openshift-local/overview
3 | Download pull secret | https://cloud.redhat.com/openshift/install/pull-secret
4 | Set up CRC | crc setup
5 | Start CRC | crc start --pull-secret-file D:\Software\crc\pull-secret.txt
6 | Log in to console | https://console-openshift-console.apps-crc.testing
7 | Log in via CLI | oc login -u kubeadmin -p <password>
8 | Build image (Podman) | podman build -t springboot-mongodb-example .
9 | Push to OpenShift | podman push default-route-openshift-image-registry.apps-crc.testing/springboot-demo/springboot-mongodb-example
10 | Deploy | oc new-app + oc expose svc/...






Deploy a Spring Boot Application

(without containerization)

on OpenShift using S2I

  👉 No containerization needed — no Podman or Docker required

This approach allows OpenShift to automatically build your app from your GitHub repository using Source-to-Image (S2I).

Step 1: Build Your Spring Boot Project

mvn clean package

Push your project code to GitHub or GitLab.

👉 You can skip the local build (mvn clean package); it is NOT required for OpenShift S2I deployment and is only for local testing. Pushing your code to GitHub or GitLab is still required, since S2I builds from the repository.

Step 2: Login to OpenShift

Login using the OpenShift CLI:

oc login -u kubeadmin -p <password>

Note: You can get credentials with command crc console --credentials

Create a new OpenShift project:

oc new-project springboot-helloworld-project

Step 3: Deploy the Application Using S2I

Deploy your application using the Red Hat UBI 8 OpenJDK 21 S2I builder image:

oc new-app --strategy=source registry.access.redhat.com/ubi8/openjdk-21~https://github.com/SirajChaudhary/springboot-helloworld-service.git --name=springboot-helloworld-app

OpenShift S2I will automatically:
  • Clone your Git repository

  • Build the Spring Boot JAR using Maven (inside OpenShift)

  • Create an image via S2I

  • Deploy the application pod

After completing this step, wait for 5–10 minutes to allow the build to finish within the OpenShift cluster.

Step 4: Monitor Build and Deployment

Track the build logs (make sure the build completes before running the next commands):

oc logs -f buildconfig/springboot-helloworld-app

Check pod status:
oc get pods

Step 5: Expose the Service (Create a Route)

Expose your application so it can be accessed externally:

oc expose service/springboot-helloworld-app

Retrieve the route URL:

oc get route

Open the route URL in a browser. Your Spring Boot Hello World application will now be running on OpenShift!


Step 6: Access OpenShift deployed microservice API

curl http://springboot-helloworld-app-springboot-helloworld-project.apps-crc.testing/hello


Podman


What is Podman 

  • Podman (short for Pod Manager) is an open-source, daemonless container engine for developing, managing, and running OCI-compliant containers on Linux, macOS, and Windows. 

  • Developed by Red Hat as part of the libpod project.
  • Introduced in 2018 to address:
    • Security concerns of running containers as root.
    • Dependency on the centralized Docker daemon (dockerd).
  • Key objective: Deliver tooling that aligns with Kubernetes architecture and follows OCI (Open Container Initiative) standards for containers and images.
  • Core components:
    • Podman → Manages and runs containers and pods.
    • Buildah → Builds container images efficiently without a daemon.
    • Skopeo → Handles image transfers between registries.
  • Rootless operation:
    • Users can create and manage containers without admin privileges, enhancing security.
  • Pod support:
    • Follows the Kubernetes pod concept, allowing multiple containers to share the same network and namespace.
  • User interfaces available:
    • Podman CLI – Command-line tool for developers and admins.
    • Podman Desktop – Graphical interface for easy container management.


Architecture Overview

  • Daemonless operation:
    • No central background service (unlike Docker’s dockerd).
    • Each container is a separate process, reducing complexity and improving reliability.
  • Fork/Exec model:
    • Each container is started as its own process.
    • Avoids single points of failure.
  • Rootless containers:
    • Containers can be run as non-root users, preventing privilege escalation.
  • Systemd integration:
    • Containers and pods can be managed as systemd services for automatic startup, monitoring, and lifecycle management.
  • Modular architecture:
    • Based on the libpod library, enabling flexibility and enhanced troubleshooting.
  • Cross-platform support:
    • Provides a consistent experience across different operating systems (Linux, macOS, Windows).


Core Building Blocks

  • Image → A template or blueprint used to create containers.
  • Container → A running instance of an image that encapsulates an application and its dependencies.
  • Network → Provides communication pathways between containers.
  • Volume → Enables persistent data storage beyond the container’s lifecycle.
  • Pod → A group of containers sharing the same network and namespaces.
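A guarded sketch exercising these building blocks end to end (the volume, network, pod, and container names are illustrative; the block is a no-op on machines without Podman installed):

```shell
# Create each building block, then clean up; skipped where podman is absent
if command -v podman >/dev/null 2>&1; then
  podman volume create demo-vol                 # volume: persistent storage
  podman network create demo-net                # network: communication path
  podman pod create --name demo-pod             # pod: shared-namespace group
  podman run -d --name demo-ctr \
    -v demo-vol:/data --network demo-net \
    docker.io/library/alpine:latest sleep 30    # container: running image instance
  podman rm -f demo-ctr
  podman pod rm -f demo-pod
  podman network rm demo-net
  podman volume rm demo-vol
fi
MSG="building-blocks demo finished"
echo "$MSG"
```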


Podman Setup (Windows)

Podman offers two main options for managing containers — Podman Desktop (Graphical Interface) and Podman CLI (Command-Line Interface).

Both are used to build, run, and manage OCI-compliant containers, but each serves different user preferences:

  • Podman Desktop is designed for users who prefer a visual, user-friendly interface.
  • Podman CLI is suited for developers who prefer automation and command-line operations.

1. Podman Desktop (Graphical User Interface)

  • Built on top of the Podman engine to provide a graphical way of managing containers.
  • Allows you to pull images, run containers, create pods, and manage registries without using command-line commands.
  • Integrates seamlessly with Kubernetes and Docker Hub, simplifying deployment and management tasks.
  • Includes Podman CLI, kubectl, and Docker Compose tools by default for hybrid use (GUI + terminal).
  • Ideal for developers who want a simplified container management experience with minimal manual setup.

Installation Steps (Windows)

  1. Enable Virtualization and WSL
    • Press Win + R → type optionalfeatures → Enter.
    • Check the boxes for Virtual Machine Platform and Windows Subsystem for Linux (WSL).
    • Click OK and restart your PC.
    • After restarting, open PowerShell and verify WSL installation: wsl -l -v
      Ensure it shows WSL 2 installed.
  2. Download and Install Podman Desktop
    • Visit https://podman.io → select Download Podman Desktop for Windows.
    • Run the installer and keep all default options enabled:
      • Podman CLI
      • kubectl
      • Docker Compose
    • Complete the setup following the on-screen instructions.
  3. Launch and Configure
    • Open Podman Desktop from the Start Menu.
    • The first launch automatically configures your local Podman environment using WSL2.
    • You can now visually pull images, run containers, and manage pods or registries directly from the interface.

2. Podman CLI (Command-Line Interface)

Note: The Podman CLI is installed automatically by the Podman Desktop setup above. Check with podman --version; if it is not installed, follow the instructions below.
  • A daemonless container engine for building, running, and managing containers directly from the terminal.
  • Fully Docker-compatible, allowing easy migration from Docker to Podman.
  • Offers complete flexibility for automation, scripting, and DevOps pipelines.
  • Works independently or alongside Podman Desktop.

Installation Steps (Windows)

Option A: Install via Winget (Recommended)

    1. Open PowerShell as Administrator.
    2. Run the command: winget install -e --id RedHat.Podman
    3. Verify the installation: podman --version

Option B: Download Installer Manually

    1. Go to https://podman.io → click Get Podman → choose Windows.
    2. Download the latest Podman Windows Installer (.msi).
    3. Run the installer with default options and complete the setup.
    4. Verify installation: podman info

(Optional) Initialize Podman Machine

Podman on Windows runs inside a lightweight WSL2-based virtual machine. Initialize it once using:

podman machine init
podman machine start


Common Podman CLI Commands

Command | Description
podman --version | Check Podman version
podman info | Display system and environment details
podman images | List all downloaded container images
podman pull <image> | Pull an image from a registry
podman ps | List currently running containers
podman ps -a | List all containers, including stopped ones
podman run -d -p 8080:80 <image> | Run a container in detached mode with port mapping
podman exec -it <container> bash | Open an interactive shell inside a running container
podman stop <container> | Stop a running container
podman rm <container> | Remove a stopped container
podman rmi <image> | Remove an image from the local system
podman logs <container> | View logs for a specific container
podman inspect <container> | Display detailed container information
podman build -t myapp . | Build an image from a Dockerfile in the current directory
podman pod create | Create a new pod for grouping related containers
podman network ls | List all container networks
podman volume ls | List all container volumes




Containerfile / Dockerfile 

A Dockerfile (or Containerfile in Podman) is a text file that contains a series of instructions to build a container image.

Each instruction creates a new layer in the image and defines how the final container should behave.

FROM
Defines the base image from which the build process starts. Every Dockerfile must start with a FROM instruction (except multi-stage builds, where multiple FROM statements are used).
Example: FROM openjdk:17-jdk

WORKDIR
Sets the working directory inside the image for subsequent instructions (RUN, CMD, ENTRYPOINT, COPY, etc.). If the directory doesn't exist, it is created automatically; multiple WORKDIR instructions can be used and will stack paths.
Example: WORKDIR /app

COPY
Copies files or directories from the host (build context) into the image's filesystem. Typically used to include application code, libraries, or configuration files.
Example: COPY target/app.jar /app/app.jar

ADD
Similar to COPY, but with extra features: it can extract local tar archives automatically and supports remote URLs. Use COPY when possible, as ADD is less explicit.
Example: ADD https://example.com/config.tar.gz /tmp/

RUN
Executes a command during the image build (in a new layer). Commonly used to install packages, set permissions, or perform setup tasks.
Example: RUN apt update && apt install -y curl

CMD
Defines the default command to run when the container starts. It can be overridden by specifying a command in podman run or docker run. Only one CMD takes effect per Dockerfile (the last one).
Example: CMD ["java", "-jar", "app.jar"]

ENTRYPOINT
Defines the main executable for the container. Unlike CMD, it cannot be overridden easily at runtime (unless using --entrypoint). Often combined with CMD to provide default arguments.
Example: ENTRYPOINT ["java", "-jar", "app.jar"]

ENV
Sets environment variables inside the container. These persist during the build and at runtime unless overridden.
Example: ENV JAVA_HOME=/usr/lib/jvm/java-17

EXPOSE
Declares the port number(s) the container will listen on at runtime. It doesn't actually publish the ports but acts as documentation or a hint to orchestration tools.
Example: EXPOSE 8080

ARG
Defines a build-time variable that can be passed using the --build-arg flag during image creation. Unlike ENV, it is not available at runtime unless explicitly exported.
Example: ARG version=1.0

LABEL
Adds metadata to the image, such as maintainer info, version, or description. Useful for tracking and automation.
Example: LABEL maintainer="siraj@avk.com" version="1.0"

MAINTAINER
Specifies the author or maintainer of the image. This instruction is deprecated; prefer LABEL instead.
Example: MAINTAINER Siraj <siraj@avk.com>

ONBUILD
Adds a trigger instruction that executes when the image is used as a base for another image. Useful for defining automatic setup steps in base images.
Example: ONBUILD COPY . /app

USER
Specifies the user or UID under which the container's processes run. Helps improve security by avoiding running as root.
Example: USER appuser

VOLUME
Creates a mount point for persistent or shared storage. It allows containers to store data outside their writable layer.
Example: VOLUME /data

Additional Notes 

  • RUN vs CMD: RUN executes during build time, while CMD runs when the container starts.
  • ENTRYPOINT + CMD: Often used together — ENTRYPOINT defines the executable, and CMD defines default arguments.
    Example:
    ENTRYPOINT ["python3"]
    CMD ["app.py"]
    This runs python3 app.py by default.
  • COPY vs ADD: Prefer COPY for predictable behavior unless you specifically need the features of ADD.
  • Layer Optimization: Combine multiple RUN instructions into one to reduce image size and layer count.
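As an illustration of the layer-optimization note, a sketch that installs packages and cleans up within a single RUN layer (the base image and package names are illustrative):

```dockerfile
FROM ubuntu:22.04
# One RUN = one layer: update, install, and clean up together so the
# apt package cache never persists into the final image
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*
```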


Podman Networking

Podman provides flexible networking for containers, allowing communication between containers, the host, and external systems. It supports two main modes:

  • Rootful Mode (Netavark):
    Uses the Netavark backend with full networking control (custom subnets, routing, DNS).

  • Rootless Mode (slirp4netns):
    Runs without root privileges using a simplified, user-space network stack.

Common Networking Commands

Command | Description | Example
podman network ls | List all networks | podman network ls
podman network create <name> | Create a custom network | podman network create my-net
podman network inspect <name> | Show details of a network | podman network inspect my-net
podman network rm <name> | Remove a network | podman network rm my-net
podman network prune | Remove unused networks | podman network prune

To see all options:

podman network --help

Example: Custom Network and Container Communication

Step 1: Create a custom network

podman network create my-podman-network
podman network ls

Step 2: Inspect details

podman network inspect my-podman-network

Step 3: Run two containers on the same network

podman run -d --name=test1 -p 8085:80 --net=my-podman-network httpd:latest
podman run -d --name=test2 -p 8086:80 --net=my-podman-network httpd:latest

Step 4: Connect from one container to another

podman exec -it test1 /bin/bash
apt update && apt install curl -y
curl test2:80

Both containers can communicate using their names (test1, test2) as hostnames within the custom network.

Summary:

  • Rootful uses Netavark; Rootless uses slirp4netns.
  • Custom networks allow container-to-container communication.
  • Containers on the same network can access each other by name.


Podman Volumes

Podman volumes are used to persist data beyond a container’s lifecycle and to share files between containers or with the host system. Without volumes, all data inside a container is deleted when the container is removed.

Types of Volumes


1. Named Volumes

  • Managed by Podman and stored in its internal storage.
  • Ideal for sharing data across containers.
    Example:

podman volume create myvol

podman run -d --name web --mount type=volume,src=myvol,target=/usr/local/apache2/htdocs httpd

2. Bind Mounts

  • Directly mount a host directory into a container.
  • Great for development or real-time file sync.
    Example:

podman run -d --name web --mount type=bind,src=/data,target=/usr/local/apache2/htdocs httpd

3. Anonymous Volumes

  • Auto-created when no name is specified.
  • Removed when the container is deleted.
    Example:

podman run -d --name temp-app --mount type=volume,target=/app/data httpd

Common Volume Commands

Command | Description | Example
podman volume ls | List volumes | podman volume ls
podman volume create <name> | Create a volume | podman volume create myvol
podman volume inspect <name> | Show volume details | podman volume inspect myvol
podman volume rm <name> | Remove a volume | podman volume rm myvol
podman volume prune | Remove unused volumes | podman volume prune


Example: Persisting Data
# Create volume
podman volume create webdata

# Run container using volume
podman run -d --name myweb --mount type=volume,src=webdata,target=/usr/local/apache2/htdocs httpd:latest

# Add file inside container
podman exec -it myweb bash
echo "Hello from Podman Volume!" > /usr/local/apache2/htdocs/index.html
exit

# Remove and recreate container
podman rm -f myweb
podman run -d --name myweb --mount type=volume,src=webdata,target=/usr/local/apache2/htdocs httpd:latest

Data inside webdata remains intact even after the container is removed.

Summary:

  • Named volumes persist data.
  • Bind mounts share host files.
  • Anonymous volumes are temporary.
  • Use podman volume commands to create and manage persistent storage.


Pods in Podman

A pod in Podman is a group of one or more containers that share the same network and namespace resources, similar to a Kubernetes Pod. Containers inside a pod can communicate via localhost, share ports, and be managed together as a single unit.

Each pod includes an infra container (for shared namespaces) and one or more application containers.

Common Pod Commands

Command | Description | Example
podman pod create --name <name> | Create a new pod | podman pod create --name mypod
podman pod ps | List active pods | podman pod ps
podman ps -a --pod | List containers with pod info | podman ps -a --pod
podman run -d --name <container> --pod <pod> <image> | Run a container inside a pod | podman run -d --name web --pod mypod httpd
podman pod inspect <pod> | View pod details | podman pod inspect mypod
podman pod start/stop/rm <pod> | Start, stop, or remove a pod | podman pod stop mypod

For more options:

podman pod --help

Example: Create and Run Containers in a Pod

Step 1: Create and view the pod

podman pod create --name mypod
podman pod ps

Step 2: Run two containers inside the pod

podman run -itd --name my-httpd --pod mypod httpd:latest
podman run -itd --name my-redis --pod mypod redis:latest
podman pod ps
podman ps -a --pod

Step 3: Stop and remove the pod

podman pod stop mypod
podman pod rm mypod

Summary:

  • Pods let containers share the same network, IPC, and namespace.
  • Managed together via podman pod commands.
  • Ideal for running and testing multi-container (Kubernetes-style) applications locally.


Namespaces and Cgroups

Podman uses Linux Namespaces to isolate containers and Cgroups (Control Groups) to control system resource usage. These two features together provide process separation, security, and resource management.

1. Namespaces

Namespaces isolate different parts of the Linux system so that containers run independently from each other and the host.

Namespace | Purpose | Example
pid | Isolates process IDs | Each container has its own process list.
net | Isolates networking | Containers get their own IP and network stack.
ipc | Isolates inter-process communication | Shared memory is private per container.
mnt | Isolates filesystem mounts | Each container sees its own filesystem.
uts | Isolates hostnames | Containers can have unique hostnames.
user | Isolates user IDs | Enables rootless containers.

Example:

podman run -it --name c1 alpine hostname
podman run -it --name c2 alpine hostname

Each container reports a different hostname, showing UTS namespace isolation.

2. Cgroups (Control Groups)

Cgroups manage how much CPU, memory, and I/O a container can use, preventing any single container from consuming all resources.

Option | Description | Example
--memory | Limit container memory | podman run --memory=512m nginx
--cpus | Limit CPU usage | podman run --cpus=1 nginx

Example:

podman run -d --name limited --memory=512m --cpus=1 nginx

This container can use up to 512 MB RAM and 1 CPU core.

Summary:
Namespaces isolate containers; Cgroups limit their resource usage — ensuring containers remain secure, efficient, and lightweight.



Container Registries

A container registry is a storage and distribution system for container images. It allows users to push, pull, and manage container images for deployment. Registries can be public (shared globally) or private (restricted to an organization or team).

Types of Registries

1. Public Registries
These are openly accessible and widely used for sharing container images.

  • Docker Hub: Default and most common public registry.
  • Quay.io: Red Hat’s secure image registry.
  • GitHub Packages: Used for storing and managing container images alongside code repositories.

Example:

podman pull docker.io/library/nginx:latest
podman push quay.io/username/myapp:1.0
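The pull reference above breaks down into registry host, namespace, repository, and tag; a sketch:

```shell
# Anatomy of docker.io/library/nginx:latest
REGISTRY="docker.io"      # registry host
NAMESPACE="library"       # org/user namespace (Docker Hub's default)
REPO="nginx"              # repository (image name)
TAG="latest"              # version tag
echo "${REGISTRY}/${NAMESPACE}/${REPO}:${TAG}"
```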

2. Private Registries
Used within organizations to store internal or sensitive images securely.

  • Harbor: Open-source enterprise registry with authentication and image scanning.
  • JFrog Artifactory: Provides version control and advanced access management for images.

Example:

podman login myregistry.example.com
podman push myregistry.example.com/myteam/app:latest

Configuring Default Registries

You can define which registries Podman uses to search or push images.

  • In Podman Desktop:
    Go to Settings → Registries to add, remove, or reorder default registries.

  • In Podman CLI:
    Edit the configuration file:

    /etc/containers/registries.conf

    Example section:

    [registries.search]
    registries = ['docker.io', 'quay.io', 'ghcr.io']

Summary

  • Public registries (e.g., Docker Hub, Quay.io) are open and widely used.
  • Private registries (e.g., Harbor, Artifactory) offer secure internal image storage.
  • Default registry sources can be configured in Podman Desktop settings or /etc/containers/registries.conf for the CLI.


Containerizing a Spring Boot Application

Containerizing your Spring Boot application with Podman allows you to package and run your app as an isolated container, ensuring consistent behavior across environments.

Example : 👉 Find here step-by-step instructions to containerize a Spring Boot application using Podman. 

Containerfile 

# Use an official OpenJDK base image
FROM openjdk:17-jdk-slim

# Set working directory inside the container
WORKDIR /app

# Copy the built JAR file into the container
COPY target/springboot-app.jar app.jar

# Expose the application port
EXPOSE 8080

# Command to run the Spring Boot application
CMD ["java", "-jar", "app.jar"]

Build and Run the Container

1. Build the image

podman build -t springboot-app .

2. Run the container

podman run -d -p 8080:8080 springboot-app

3. Verify the running container

podman ps

Summary

  • The Containerfile defines how the Spring Boot JAR is packaged.
  • Podman builds the image using podman build.
  • The application runs in a lightweight, isolated container accessible on port 8080.


Docker vs Podman 

Feature | Docker | Podman
Architecture | Daemon-based | Daemonless
Rootless Mode | Added later | Native feature
Pod Support | Not available | Yes (Kubernetes-style pods)
Security | Daemon runs as root | More secure rootless operation
CLI Compatibility | docker | alias docker=podman
Performance | Slightly slower startup | Faster container startup
Image Building | Uses daemon | Lightweight via Buildah
Systemd Integration | Limited | Native
Networking | Daemon-managed | Netavark (modern, modular)
OS Compatibility | Linux, macOS, Windows | Linux, macOS, Windows (via WSL2)