<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Untitled Publication]]></title><description><![CDATA[Untitled Publication]]></description><link>https://blog.joerismissaert.dev</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 17:58:05 GMT</lastBuildDate><atom:link href="https://blog.joerismissaert.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Kubernetes 101: Building Scalable Applications - ConfigMaps & Secrets]]></title><description><![CDATA[
Providing Variables to Kubernetes Applications
While we shouldn't run naked Pods, we've already seen we can pass environment variables when creating a Pod:kubectl run mydb --image=mysql --env="MYSQL_ROOT_PASSWORD=password"
When creating a Deploy...]]></description><link>https://blog.joerismissaert.dev/kubernetes-101-building-scalable-applications-configmaps-and-secrets</link><guid isPermaLink="true">https://blog.joerismissaert.dev/kubernetes-101-building-scalable-applications-configmaps-and-secrets</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate><content:encoded><![CDATA[
<h2 id="heading-providing-variables-to-kubernetes-applications">Providing Variables to Kubernetes Applications</h2>
<p>While we shouldn't run naked Pods, we've already seen we can pass environment variables when creating a Pod:<br /><code>kubectl run mydb --image=mysql --env="MYSQL_ROOT_PASSWORD=password"</code></p>
<p>When creating a Deployment, however, there's no command-line option to provide variables. We'll need to create the Deployment first, then set the environment variables:</p>
<ul>
<li><code>kubectl create deploy mydb --image=mysql</code></li>
<li><code>kubectl set env deploy mydb MYSQL_ROOT_PASSWORD=password</code></li>
</ul>
<p>Alternatively, you could generate the Deployment YAML file first and add your variables to it before creating the Deployment.</p>
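<p>As a sketch (not generated from the session below), the variable would end up under <code>spec.template.spec.containers</code> in the Deployment manifest:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb
    spec:
      containers:
      - name: mariadb
        image: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
</code></pre>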
<pre><code>student@minikube:~$ kubectl create deployment mydb --image=mariadb
deployment.apps/mydb created

student@minikube:~$ kubectl get pods
NAME                   READY   STATUS   RESTARTS   AGE
mydb-fb7ff4d78-kqbvj   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     <span class="hljs-built_in">Error</span>    <span class="hljs-number">0</span>          <span class="hljs-number">40</span>s

student@minikube:~$ kubectl logs mydb-fb7ff4d78-kqbvj
<span class="hljs-number">2022</span><span class="hljs-number">-04</span><span class="hljs-number">-16</span> <span class="hljs-number">07</span>:<span class="hljs-number">41</span>:<span class="hljs-number">41</span>+<span class="hljs-number">00</span>:<span class="hljs-number">00</span> [Note] [Entrypoint]: Entrypoint script <span class="hljs-keyword">for</span> MariaDB Server <span class="hljs-number">1</span>:<span class="hljs-number">10.7</span><span class="hljs-number">.3</span>+maria~focal started.
<span class="hljs-number">2022</span><span class="hljs-number">-04</span><span class="hljs-number">-16</span> <span class="hljs-number">07</span>:<span class="hljs-number">41</span>:<span class="hljs-number">41</span>+<span class="hljs-number">00</span>:<span class="hljs-number">00</span> [Note] [Entrypoint]: Switching to dedicated user <span class="hljs-string">'mysql'</span>
<span class="hljs-number">2022</span><span class="hljs-number">-04</span><span class="hljs-number">-16</span> <span class="hljs-number">07</span>:<span class="hljs-number">41</span>:<span class="hljs-number">41</span>+<span class="hljs-number">00</span>:<span class="hljs-number">00</span> [Note] [Entrypoint]: Entrypoint script <span class="hljs-keyword">for</span> MariaDB Server <span class="hljs-number">1</span>:<span class="hljs-number">10.7</span><span class="hljs-number">.3</span>+maria~focal started.
<span class="hljs-number">2022</span><span class="hljs-number">-04</span><span class="hljs-number">-16</span> <span class="hljs-number">07</span>:<span class="hljs-number">41</span>:<span class="hljs-number">41</span>+<span class="hljs-number">00</span>:<span class="hljs-number">00</span> [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
    You need to specify one <span class="hljs-keyword">of</span> MARIADB_ROOT_PASSWORD, MARIADB_ALLOW_EMPTY_ROOT_PASSWORD and MARIADB_RANDOM_ROOT_PASSWORD

student@minikube:~$ kubectl set env deploy mydb MYSQL_ROOT_PASSWORD=password
deployment.apps/mydb env updated

student@minikube:~$ kubectl get pods
NAME                    READY   STATUS              RESTARTS      AGE
mydb<span class="hljs-number">-6</span>df85bcdbb-thm2h   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>             <span class="hljs-number">5</span>s
mydb-fb7ff4d78-kqbvj    <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     <span class="hljs-built_in">Error</span>               <span class="hljs-number">3</span> (<span class="hljs-number">47</span>s ago)   <span class="hljs-number">108</span>s

student@minikube:~$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
mydb<span class="hljs-number">-6</span>df85bcdbb-thm2h   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">13</span>s

student@minikube:~$ kubectl get deploy mydb -o yaml &gt; mydb.yml
...
</code></pre><pre><code>student@minikube:~$ kubectl create deploy mynewdb --image=mariadb --dry-run=client -o yaml &gt; mynewdb.yaml
student@minikube:~$ kubectl create -f mynewdb.yaml 
deployment.apps/mynewdb created

student@minikube:~$ kubectl set env deploy mynewdb MYSQL_ROOT_PASSWORD=password --dry-run=client -o yaml &gt; mynewdb.yaml 
student@minikube:~$ grep -i password mynewdb.yaml 
        - name: MYSQL_ROOT_PASSWORD
          <span class="hljs-attr">value</span>: password

student@minikube:~$ kubectl describe deploy mynewdb | grep -i password
student@minikube:~$ kubectl apply -f mynewdb.yaml 
deployment.apps/mynewdb configured

student@minikube:~$ kubectl describe deploy mynewdb | grep -i password
      <span class="hljs-attr">MYSQL_ROOT_PASSWORD</span>:  password
</code></pre><h2 id="heading-configmaps">ConfigMaps</h2>
<p>Code should be static so that it stays portable and can be used in other environments. To achieve this, site-specific information, like environment variables, needs to be separated from the code and should not be hardcoded in the Deployment configuration.</p>
<p>ConfigMaps are the solution to this issue: we define variables in a ConfigMap and have our Deployment point to it.
ConfigMaps are created in different ways depending on what they will be used for:</p>
<ul>
<li>Variables</li>
<li>Configuration Files</li>
<li>Command line arguments</li>
</ul>
<h3 id="heading-providing-variables-with-configmaps">Providing Variables with ConfigMaps</h3>
<p>We can create a ConfigMap for variables in two ways:</p>
<ul>
<li>By passing a file that contains the variables in a <code>key=value</code> format:<br /><code>kubectl create cm mycm --from-env-file=myfile</code></li>
<li>By passing the variables directly:<br /><code>kubectl create cm mycm --from-literal=MYSQL_ROOT_PASSWORD=password</code></li>
</ul>
<p>Once you have the ConfigMap, you can update your deployment so that it points to the ConfigMap:
<code>kubectl set env --from=configmap/mycm deploy/mydeployment</code></p>
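<p>Under the hood, <code>kubectl set env --from=configmap/...</code> adds entries like the following to the container spec (a sketch assuming a ConfigMap named <code>mycm</code> containing a <code>MYSQL_ROOT_PASSWORD</code> key):</p>
<pre><code>env:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    configMapKeyRef:
      name: mycm
      key: MYSQL_ROOT_PASSWORD
</code></pre>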
<pre><code>student@minikube:~$ cat dbvarsfile 
MYSQL_ROOT_PASSWORD=password
MYSQL_USER=joeri

student@minikube:~$ kubectl create cm mydbvars --<span class="hljs-keyword">from</span>-env-file=dbvarsfile 
configmap/mydbvars created

student@minikube:~$ kubectl create deploy mydb --image=mariadb
deployment.apps/mydb created

student@minikube:~$ kubectl set env deploy mydb --<span class="hljs-keyword">from</span>=configmap/mydbvars
deployment.apps/mydb env updated

student@minikube:~$ kubectl describe deploy mydb | grep MYSQL_
      <span class="hljs-attr">MYSQL_ROOT_PASSWORD</span>:  &lt;set to the key 'MYSQL_ROOT_PASSWORD' of config map 'mydbvars'&gt;  Optional: false
      MYSQL_USER:           &lt;set to the key 'MYSQL_USER' of config map 'mydbvars'&gt;           Optional: false

student@minikube:~$ kubectl get deploy mydb -o yaml &gt; mydb.yaml
...
</code></pre><h3 id="heading-providing-configuration-files-with-configmaps">Providing Configuration Files with ConfigMaps</h3>
<p>In addition to providing variables, we can provide configuration files to our application by making use of ConfigMaps:<br /><code>kubectl create cm myconf --from-file=/my/file.conf</code></p>
<p>If a ConfigMap is created from a directory instead of a file, all files in that directory will be included in the ConfigMap.
When using a ConfigMap for configuration files, the ConfigMap must be mounted in the application; it behaves similarly to a Volume.</p>
<p>From a high level, we need to:</p>
<ul>
<li>Generate the base YAML code, then add the ConfigMap mount to it later</li>
<li>Define a Volume of the ConfigMap type in the application manifest</li>
<li>Mount this volume on a specific directory, the configuration file will appear inside that directory. </li>
</ul>
<p>In the below example we'll provide an <code>index.html</code> file to Nginx via a ConfigMap:</p>
<pre><code>student@minikube:~$ echo <span class="hljs-string">"Hello World!"</span> &gt; index.html
student@minikube:~$ kubectl create cm myindex --<span class="hljs-keyword">from</span>-file=index.html
configmap/myindex created

student@minikube:~$ kubectl describe cm myindex
<span class="hljs-attr">Name</span>:         myindex
<span class="hljs-attr">Namespace</span>:    <span class="hljs-keyword">default</span>
<span class="hljs-attr">Labels</span>:       &lt;none&gt;
Annotations:  &lt;none&gt;

Data
====
index.html:
----
Hello World!


BinaryData
====

Events:  &lt;none&gt;

student@minikube:~$ kubectl create deploy myweb --image=nginx
deployment.apps/myweb created
</code></pre><p>We'll edit the deployment and add <code>volumes</code> and <code>volumeMounts</code> to <code>spec.template.spec</code>:</p>
<pre><code>student@minikube:~$ kubectl edit deploy myweb
...
    spec:
      volumes:
      - name: cmvol
        configMap:
          name: myindex
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cmvol
...
</code></pre><p>Let's verify our changes:</p>
<pre><code>student@minikube:~$ kubectl describe deploy myweb
Pod Template:
  Labels:  app=myweb
  <span class="hljs-attr">Containers</span>:
   nginx:
    Image:        nginx
    <span class="hljs-attr">Port</span>:         &lt;none&gt;
    Host Port:    &lt;none&gt;
    Environment:  &lt;none&gt;
    Mounts:
      /usr/share/nginx/html from cmvol (rw)
  Volumes:
   cmvol:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      myindex
...

student@minikube:~$ kubectl get all --selector app=myweb
NAME                        READY   STATUS    RESTARTS   AGE
pod/myweb-ff8bf9988-287n2   1/1     Running   0          13m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myweb   1/1     1            1           19m

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/myweb-8764bf4c8   0         0         0       19m
replicaset.apps/myweb-ff8bf9988   1         1         1       13m

student@minikube:~$ kubectl exec pod/myweb-ff8bf9988-287n2 -- cat /usr/share/nginx/html/index.html
Hello World!
</code></pre><h2 id="heading-understanding-secrets">Understanding Secrets</h2>
<p>Secrets allow you to store sensitive data such as passwords, authentication tokens and SSH keys outside of a Pod to reduce the risk of accidental exposure. Some Secrets are created automatically by Kubernetes, while others can be created by the user. System-created Secrets allow Kubernetes resources to connect to other cluster resources.</p>
<blockquote>
<p>Secrets are Base64 encoded and not encrypted.</p>
</blockquote>
<p>Three Secret types are offered:</p>
<ul>
<li><code>docker-registry</code>: Used for connecting to a Docker registry.</li>
<li><code>tls</code>: Used to store TLS key material.</li>
<li><code>generic</code>: Creates a Secret from a local file, directory or literal value.</li>
</ul>
<p>You need to specify the type when defining the Secret:
<code>kubectl create secret generic ...</code></p>
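<p>The same can be done declaratively; a minimal sketch of a <code>generic</code> (Opaque) Secret, using <code>stringData</code> so the value can be written in plain text (Kubernetes stores it Base64 encoded under <code>data</code>):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:
  password: verysecret
</code></pre>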
<h3 id="heading-how-kubernetes-uses-secrets">How Kubernetes Uses Secrets</h3>
<p>All Kubernetes resources need access to TLS keys in order to access the Kubernetes API. These keys are provided by Secrets and used through ServiceAccounts. By using the ServiceAccount, the application has access to its Secret.</p>
<p>Let's inspect one of the secrets Kubernetes uses.
As mentioned previously, Secrets are used through ServiceAccounts, so we need to find out the ServiceAccount first before we can inspect the details of the Secret:</p>
<pre><code>student@minikube:~$ kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS        AGE
coredns<span class="hljs-number">-64897985</span>d-lhqq6            <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">1</span> (<span class="hljs-number">6</span>m49s ago)   <span class="hljs-number">25</span>m
etcd-minikube                      <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">1</span> (<span class="hljs-number">6</span>m49s ago)   <span class="hljs-number">25</span>m
kube-apiserver-minikube            <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">1</span> (<span class="hljs-number">6</span>m49s ago)   <span class="hljs-number">25</span>m
kube-controller-manager-minikube   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">1</span> (<span class="hljs-number">6</span>m49s ago)   <span class="hljs-number">25</span>m
kube-proxy-khgjl                   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">1</span> (<span class="hljs-number">6</span>m49s ago)   <span class="hljs-number">25</span>m
kube-scheduler-minikube            <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">1</span> (<span class="hljs-number">6</span>m49s ago)   <span class="hljs-number">25</span>m
storage-provisioner                <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">2</span> (<span class="hljs-number">6</span>m49s ago)   <span class="hljs-number">25</span>m

student@minikube:~$ kubectl get pods -n kube-system coredns<span class="hljs-number">-64897985</span>d-lhqq6 -o yaml | grep serviceAccount
  <span class="hljs-attr">serviceAccount</span>: coredns
  <span class="hljs-attr">serviceAccountName</span>: coredns
      - serviceAccountToken:

student@minikube:~$ kubectl get sa -n kube-system coredns -o yaml
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">kind</span>: ServiceAccount
<span class="hljs-attr">metadata</span>:
  creationTimestamp: <span class="hljs-string">"2022-02-01T15:48:54Z"</span>
  <span class="hljs-attr">name</span>: coredns
  <span class="hljs-attr">namespace</span>: kube-system
  <span class="hljs-attr">resourceVersion</span>: <span class="hljs-string">"299"</span>
  <span class="hljs-attr">uid</span>: <span class="hljs-number">519</span>a806e<span class="hljs-number">-35</span>c0<span class="hljs-number">-45</span>be-a5a0<span class="hljs-number">-495</span>d9f7c7586
<span class="hljs-attr">secrets</span>:
- name: coredns-token-j6qdj

student@minikube:~$ kubectl get secret -n kube-system coredns-token-j6qdj -o yaml
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">data</span>:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCakNDQWU2Z0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwdGFXNXAKYTNWaVpVTkJNQjRYRFRJeU1ETXdOekUxTlRrd05Wb1hEVE15TURNd05URTFOVGt3TlZvd0ZURVRNQkVHQTFVRQpBeE1LYldsdWFXdDFZbVZEUVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS0pDCnhPam5XYXRKSW9tdUcrSGtsM3J0aFhGV0NwUGhic3FXTkhsbGlqeTlWWVRvYlYwdHVrUW1sRkRvZGc2N1RULzQKWmlYNVFvUXdyV0NOSTYrYmtPMGpGMUhPRXJQNUF2S3ZJMEpabzliSTZzN1NPVmVsNHJsRGtRUGFScjBWajhrZwpGZTZNb2tUZGswQlBmQ1l5c2hhNmNBUGNaaHl1Wjl3clJRYi83dnZkS3BzZ2tLZ1ZOMmVEQnNqRzNGWFc1M2JvCkx6azJsT1NORHRxNndVSTdlZzIrNjR2UEQ5YkdWU09IU3JraVNMTVdtU3ZWL0d3SlV3dFd6YVhtZWhJZ1NLRVAKY3ZxMWtRN0dvUEVzTUF6TUtMb2F4bXdpZlUxQ0xISE93akhWTlZvVXcvVmNOQlZCOGlnRGd4cmJMSjg3bU9pOQpqbzJpck1BNTZqZExPVk1rUFlzQ0F3RUFBYU5oTUY4d0RnWURWUjBQQVFIL0JBUURBZ0trTUIwR0ExVWRKUVFXCk1CUUdDQ3NHQVFVRkJ3TUNCZ2dyQmdFRkJRY0RBVEFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVcKQkJTODl5UHdEYzJxZG13VGFlbWxZcndvclRqVTdqQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFlOTdNNjV3WQpxUU5nR2NzT3A4Tm4rbzdGdXQ0cWMyWldjdll5bEZKUnFURjFIVjhwZDIzTFR0V3VoRkQraVk5SDJuLzFNdzdwCnVFcHdVUjAzVHpIUUVpL1JjTUJPV0JBakFGVzJHck5RelhVbzdyOE03a3FHdEN3MVd4WXduQVBhNGJ1SG41SWcKT0lhQTA4V25udW4rcFFRMW5WL25aU04yV2xwRzRrblhGcHAzcjhTQ21uVkd1L296VjV3bGZ3WU9Ea3prZExSMgp4bjA5SHhTWkJsclpDdFZqWUxDaVRYbkN3Q3pTVXZSNjhYWkNZVWRWTHF5ZzZyZXBYb0dsSkJzY0ZMZURtKzZrCnVZR1ZvY0Ezd0FpWktoazJPV2EzcGdnTFJod2xTdTRqaFZZNk82WFpvQXMzOHcvYzFTeWY1WDBqcFB1OVdPRW8KSDdsTEpCOTdhamhYdVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  namespace: a3ViZS1zeXN0ZW0=
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklsWlplWEJ4UkdoUVNUQnJaM1JTTjJGNVdUQTRlakJZUjBWS2VVaHdNRlJSYzFoQ1JFOXJPWGRGYmxFaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpyZFdKbExYTjVjM1JsYlNJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZqY21WMExtNWhiV1VpT2lKamIzSmxaRzV6TFhSdmEyVnVMV28yY1dScUlpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVibUZ0WlNJNkltTnZjbVZrYm5NaUxDSnJkV0psY201bGRHVnpMbWx2TDNObGNuWnBZMlZoWTJOdmRXNTBMM05sY25acFkyVXRZV05qYjNWdWRDNTFhV1FpT2lJMU1UbGhPREEyWlMwek5XTXdMVFExWW1VdFlUVmhNQzAwT1RWa09XWTNZemMxT0RZaUxDSnpkV0lpT2lKemVYTjBaVzA2YzJWeWRtbGpaV0ZqWTI5MWJuUTZhM1ZpWlMxemVYTjBaVzA2WTI5eVpXUnVjeUo5LmN6cGp6SUM5NG9jSV81N21vZU5wZ2xXLWtVZnpHdUlUUktfa09qbkw3M0xuN1p2M2tLMWU2TjNqbUpPSW95d2RMcms5NWNwZC1pT1VjQWdpQVcxN3dJRUZ1THR4WnVkbmsyNnBwWU1sdDNLWHpBMkJycjdkYzZGM0xjdG9RNTdPMEY0MnEybXpJS0dnVDBVYkhmYTNwTjd4ZDY0Zk04RVFpZUc2bEZBSlNuYjlBTGVqSjd6X1JjeWdkLU1SOE9Qc2gtd05KMW1RSlVrUktzenVwTHdZcERKSXVCSGx6a093Rm04YXJ5ODZ3Y0pGdzNSbm5mcFo4ZTF0aWwtWUVSTmV3aDdMdzhvTGRrSzJNUnVVSnBKZmtGZ1kteWhWejdwa3MtNW53U05BUWVpTEk2RG9oUFBqd3BYa3hzWWVCbUhZVWo1b3JMNlZ5NG9Xb1ZZTnJIU0JvUQ==
kind: Secret
<span class="hljs-attr">metadata</span>:
  annotations:
    kubernetes.io/service-account.name: coredns
    kubernetes.io/service-account.uid: <span class="hljs-number">519</span>a806e<span class="hljs-number">-35</span>c0<span class="hljs-number">-45</span>be-a5a0<span class="hljs-number">-495</span>d9f7c7586
  <span class="hljs-attr">creationTimestamp</span>: <span class="hljs-string">"2022-02-01T15:48:55Z"</span>
  <span class="hljs-attr">name</span>: coredns-token-j6qdj
  <span class="hljs-attr">namespace</span>: kube-system
  <span class="hljs-attr">resourceVersion</span>: <span class="hljs-string">"294"</span>
  <span class="hljs-attr">uid</span>: d5b84f10-e6fa<span class="hljs-number">-46</span>d0<span class="hljs-number">-92</span>ec<span class="hljs-number">-2</span>b7895a799eb
<span class="hljs-attr">type</span>: kubernetes.io/service-account-token
</code></pre><p>Notice how the values in the Secret YAML output above are Base64 encoded, e.g. the <code>namespace</code>:</p>
<pre><code>student@minikube:~$ echo a3ViZS1zeXN0ZW0= | base64 -d
kube-system
</code></pre><h3 id="heading-configuring-applications-to-use-secrets">Configuring Applications to Use Secrets</h3>
<p>There are different use cases for using Secrets in applications:</p>
<ul>
<li>Providing TLS keys to the application:<br /><code>kubectl create secret tls my-tls-keys --cert=pathto/my.crt --key=pathto/my.key</code></li>
<li>Providing a password securely:<br /><code>kubectl create secret generic my-secret-pw --from-literal=password=verysecret</code></li>
<li>Providing access to an SSH private key:<br /><code>kubectl create secret generic my-ssh-key --from-file=ssh-private-key=.ssh/id_rsa</code></li>
<li>Providing access to sensitive files, mounted in the application with root-only access:<br /><code>kubectl create secret generic my-secret-file --from-file=/my/secretfile</code></li>
</ul>
<p>Secrets are used in a similar way to using ConfigMaps in applications:</p>
<ul>
<li>If your Secret contains variables (like a password), use <code>kubectl set env</code>.</li>
<li>If it contains files (like keys), mount the Secret. Consider using <code>defaultMode: 0400</code> permissions when mounting the Secret in the Pod spec.</li>
</ul>
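<p>A sketch of such a mount in a Pod spec, assuming a Secret named <code>my-ssh-key</code> (all names here are illustrative):</p>
<pre><code>spec:
  volumes:
  - name: ssh-key
    secret:
      secretName: my-ssh-key
      defaultMode: 0400
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: ssh-key
      mountPath: /etc/ssh-key
      readOnly: true
</code></pre>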
<blockquote>
<p>Mounted Secrets are automatically updated in the application when the Secret is updated.</p>
</blockquote>
<p>Let's demonstrate this:</p>
<pre><code>student@minikube:~$ kubectl create secret generic dbpw --<span class="hljs-keyword">from</span>-literal=ROOT_PASSWORD=password
secret/dbpw created

student@minikube:~$ kubectl describe secret dbpw
<span class="hljs-attr">Name</span>:         dbpw
<span class="hljs-attr">Namespace</span>:    <span class="hljs-keyword">default</span>
<span class="hljs-attr">Labels</span>:       &lt;none&gt;
Annotations:  &lt;none&gt;

Type:  Opaque

Data
====
ROOT_PASSWORD:  8 bytes

student@minikube:~$ kubectl get secret dbpw -o yaml
apiVersion: v1
data:
  ROOT_PASSWORD: cGFzc3dvcmQ=
kind: Secret
metadata:
  creationTimestamp: "2022-02-01T16:30:07Z"
  name: dbpw
  namespace: default
  resourceVersion: "1661"
  uid: 6aff6adf-73e1-4ffd-b99a-fd036a034c6b
type: Opaque

student@minikube:~$ echo cGFzc3dvcmQ= | base64 -d
password
</code></pre><p>Now, let's deploy our Secret to an app:</p>
<pre><code>student@minikube:~$ kubectl create deployment mynewdb --image=mariadb
deployment.apps/mynewdb created
</code></pre><p>Remember that <code>mariadb</code> expects at the very least a <code>MYSQL_ROOT_PASSWORD</code> environment variable. Since we created our Secret with the key <code>ROOT_PASSWORD</code> instead of <code>MYSQL_ROOT_PASSWORD</code>, we need to set a prefix when attaching the Secret to the application. This comes in handy when other applications could reuse the same Secret.</p>
<pre><code>student@minikube:~$ kubectl set env deployment mynewdb --<span class="hljs-keyword">from</span>=secret/dbpw --prefix=MYSQL_
deployment.apps/mynewdb env updated
</code></pre><p>Now, while the password is Base64 encoded in the Secret, inside the Pod it appears in clear text:</p>
<pre><code>student@minikube:~$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
mynewdb<span class="hljs-number">-7</span>cc5fb9c55<span class="hljs-number">-58</span>wkz   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">13</span>m

student@minikube:~$ kubectl exec mynewdb<span class="hljs-number">-7</span>cc5fb9c55<span class="hljs-number">-58</span>wkz -- env
PATH=<span class="hljs-regexp">/usr/</span>local/sbin:<span class="hljs-regexp">/usr/</span>local/bin:<span class="hljs-regexp">/usr/</span>sbin:<span class="hljs-regexp">/usr/</span>bin:<span class="hljs-regexp">/sbin:/</span>bin
HOSTNAME=mynewdb<span class="hljs-number">-7</span>cc5fb9c55<span class="hljs-number">-58</span>wkz
MYSQL_ROOT_PASSWORD=password
</code></pre><h3 id="heading-configuring-docker-registry-access-secret">Configuring Docker Registry Access Secret</h3>
<p>The <code>docker-registry</code> Secret type stores container registry (Docker Hub, Quay.io, self-hosted, ...) authentication credentials. While you don't <em>need</em> to authenticate, it's recommended in order to avoid pull rate-limit errors on a busy cluster.</p>
<p>There are two ways to create the Secret: either by passing the credentials directly, or by passing an existing Docker config file that contains them:</p>
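<p>Once such a Secret exists, a Pod references it through <code>imagePullSecrets</code>; a minimal sketch assuming a Secret named <code>my-secret</code> and a hypothetical private registry image:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  imagePullSecrets:
  - name: my-secret
  containers:
  - name: app
    image: myregistry.example.com/private/app:latest
</code></pre>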
<pre><code>student@minikube:~$ kubectl create secret docker-registry -h
<span class="hljs-attr">Examples</span>:
  # If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:
  kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER
--docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL

  # Create a new secret named my-secret from ~/.docker/config.json
  kubectl create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json
</code></pre>]]></content:encoded></item><item><title><![CDATA[Kubernetes 101: Building Scalable Applications - Storage]]></title><description><![CDATA[
Storage Options
Files stored in a container will only live as long as the container itself: they are ephemeral. To solve this problem we can use Pod Volumes, they outlive containers and stay available during the Pod lifetime. The Pod Volume is a...]]></description><link>https://blog.joerismissaert.dev/kubernetes-101-building-scalable-applications-storage</link><guid isPermaLink="true">https://blog.joerismissaert.dev/kubernetes-101-building-scalable-applications-storage</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Mon, 03 Jan 2022 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{</p>}}<p></p>
<h2 id="heading-storage-options">Storage Options</h2>
<p>Files stored in a container will only live as long as the container itself: they are ephemeral. To solve this problem we can use <code>Pod Volumes</code>, which outlive containers and stay available for the duration of the Pod's lifetime. The Pod Volume is a property of the Pod, not the container.</p>
<p>Pod Volumes can directly bind to any specific <a target="_blank" href="https://kubernetes.io/docs/concepts/storage/volumes/#volume-types">storage type</a>, e.g. CephFS, emptyDir, Fibre Channel, NFS, ... By using <code>Persistent Volume Claims</code>, you can decouple the Pod from site-specific storage: the Pod specification becomes more portable because it doesn't configure the site-specific storage itself but only describes what's needed from it, such as size and access mode.</p>
<p>The Persistent Volume Claim connects to a <code>Persistent Volume</code>, which in turn defines access to external storage available in the cluster. A site administrator must make sure such a Persistent Volume exists. When a Persistent Volume Claim is created, it searches for an available Persistent Volume that matches the requirements of its storage request. If no match is found, a <code>StorageClass</code> can automatically create and allocate the storage.</p>
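<p>As a sketch, a Persistent Volume Claim requests dynamically provisioned storage simply by naming a StorageClass (the class name <code>standard</code> is minikube's default and is an assumption here):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
</code></pre>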
<p>This abstraction allows a developer to create and distribute generic Pod manifest files and leave the storage up to the site where it's being deployed.
We'll go over examples of this to make the concept more clear.</p>
<h2 id="heading-configuring-volume-storage">Configuring Volume Storage</h2>
<p>Pod local volumes are defined in <code>pod.spec.volumes</code>, they point to a specific volume type but for testing purposes <a target="_blank" href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir">emptyDir</a> and <a target="_blank" href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath">hostPath</a> are common. This volume is mounted through <code>pod.spec.containers.volumeMounts</code>.</p>
<pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>: 
  name: volpod
<span class="hljs-attr">spec</span>:
  volumes: 
    - name: test
      <span class="hljs-attr">emptyDir</span>: {}
  <span class="hljs-attr">containers</span>:
  - name: centos1
    <span class="hljs-attr">image</span>: centos:<span class="hljs-number">7</span>
    <span class="hljs-attr">command</span>:
      - sleep
      - <span class="hljs-string">"3600"</span> 
    <span class="hljs-attr">volumeMounts</span>:
      - mountPath: /centos1
        <span class="hljs-attr">name</span>: test
  - name: centos2
    <span class="hljs-attr">image</span>: centos:<span class="hljs-number">7</span>
    <span class="hljs-attr">command</span>:
      - sleep
      - <span class="hljs-string">"3600"</span>
    <span class="hljs-attr">volumeMounts</span>:
      - mountPath: /centos2
        <span class="hljs-attr">name</span>: test
</code></pre><p>In the above Pod Spec, we've defined a volume named <code>test</code> with the volume type <code>emptyDir</code>. This volume is mounted in two containers on the <code>/centos1</code> and <code>/centos2</code> path inside the container. Both containers can share data via this volume:</p>
<pre><code>student@minikube:~$ kubectl create -f volpod.yaml 
pod/volpod created
...
student@minikube:~$ kubectl exec -it volpod -c centos1 -- bash -c <span class="hljs-string">'echo "Hi there!" &gt; /centos1/hello'</span>
student@minikube:~$ kubectl exec -it volpod -c centos2 -- cat /centos2/hello
Hi there!
</code></pre><h2 id="heading-persistent-volume-storage">Persistent Volume Storage</h2>
<p>A Persistent Volume is a resource that exists independently of any Pod; it ensures that data is kept across container or Pod restarts. We'll use a Persistent Volume Claim to connect to a Persistent Volume. The Persistent Volume Claim is what actually talks to the backend storage provider: it searches for available volumes that match the requested capacity and access mode. </p>
<pre><code>kind: PersistentVolume
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">metadata</span>:
  name: pv-volume
  <span class="hljs-attr">labels</span>:
      type: local
<span class="hljs-attr">spec</span>:
  capacity:
    storage: <span class="hljs-number">2</span>Gi
  <span class="hljs-attr">accessModes</span>:
    - ReadWriteOnce
  <span class="hljs-attr">hostPath</span>:
    path: <span class="hljs-string">"/mydata"</span>
</code></pre><p>In the above <code>PersistentVolume</code> we've created a Persistent Volume resource named <code>pv-volume</code> with a capacity of 2Gi, an accessMode of <code>ReadWriteOnce</code> and a hostPath of <code>/mydata</code>. The hostPath is created on the worker node where the Pod that uses this PersistentVolume runs. <code>ReadWriteOnce</code> ensures that the volume can be mounted read-write by only one node at a time; <code>ReadWriteMany</code> or <code>ReadOnlyMany</code> can be used as well. </p>
<pre><code>student@minikube:~$ kubectl create -f pv.yaml 
persistentvolume/pv-volume created

student@minikube:~$ kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-volume   <span class="hljs-number">2</span>Gi        RWO            Retain           Available                                   <span class="hljs-number">70</span>s

student@minikube:~$ kubectl describe pv pv-volume
<span class="hljs-attr">Name</span>:            pv-volume
<span class="hljs-attr">Labels</span>:          type=local
<span class="hljs-attr">Annotations</span>:     &lt;none&gt;
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    
Status:          Available
Claim:           
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        2Gi
Node Affinity:   &lt;none&gt;
Message:         
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /mydata
    HostPathType:  
Events:            &lt;none&gt;
</code></pre><h2 id="heading-configuring-persistent-volume-claims">Configuring Persistent Volume Claims</h2>
<p>To use a Persistent Volume, we need a Persistent Volume Claim, which requests access to a Persistent Volume. The Pod Volume spec uses the name of the Persistent Volume Claim, and in turn the PVC accesses the Persistent Volume. After connecting to a Persistent Volume, the Persistent Volume Claim shows as bound. The bind is exclusive: the Persistent Volume cannot be used by another Persistent Volume Claim.</p>
<pre><code>kind: PersistentVolumeClaim
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">metadata</span>:
  name: pv-claim
<span class="hljs-attr">spec</span>:
  accessModes:
    - ReadWriteOnce
  <span class="hljs-attr">resources</span>:
    requests:
      storage: <span class="hljs-number">1</span>Gi
</code></pre><p>Notice that in the above spec, the PVC does not connect to a specific Persistent Volume. The only thing it states is that we need a volume with a capacity of at least 1Gi and ReadWriteOnce access.</p>
<pre><code>student@minikube:~$ kubectl create -f pvc.yaml 
persistentvolumeclaim/pv-claim created

student@minikube:~$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-claim   Bound    pvc-d5f02edb<span class="hljs-number">-3</span>d71<span class="hljs-number">-4</span>a69-b977<span class="hljs-number">-71</span>fd5bfa020e   <span class="hljs-number">1</span>Gi        RWO            standard       <span class="hljs-number">22</span>s

student@minikube:~$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
pv-volume                                  <span class="hljs-number">2</span>Gi        RWO            Retain           Available                                              <span class="hljs-number">11</span>m
pvc-d5f02edb<span class="hljs-number">-3</span>d71<span class="hljs-number">-4</span>a69-b977<span class="hljs-number">-71</span>fd5bfa020e   <span class="hljs-number">1</span>Gi        RWO            Delete           Bound       <span class="hljs-keyword">default</span>/pv-claim   standard                <span class="hljs-number">39</span>s
</code></pre><p>Our <code>pv-volume</code> Persistent Volume was not used; instead a new Persistent Volume was dynamically provisioned. Because the PVC did not name a StorageClass, the cluster's default StorageClass (<code>standard</code>) handled the 1Gi request rather than binding the claim to our pre-provisioned volume.</p>
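<p>If we wanted the claim to bind to our pre-created <code>pv-volume</code> instead, we could disable dynamic provisioning by explicitly requesting an empty StorageClass. The following is a sketch of that approach; the name <code>pv-claim-static</code> is illustrative:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-static
spec:
  storageClassName: ""   # empty string: skip dynamic provisioning
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>With <code>storageClassName: ""</code> the claim only considers pre-provisioned volumes, so it can bind to <code>pv-volume</code> even though that volume offers more capacity than requested.</p>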
<h2 id="heading-pod-storage-with-pv-and-pvc">Pod Storage with PV and PVC</h2>
<p>The purpose of configuring a Pod with a Persistent Volume Claim is to decouple it from site-specific information: when distributing a Pod spec with a PVC spec, we do not need to know anything about site-specific storage. The PVC will find the necessary Persistent Volume storage to bind to:</p>
<pre><code>---
kind: PersistentVolumeClaim
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">metadata</span>:
  name: nginx-pvc
<span class="hljs-attr">spec</span>:
  accessModes:
    - ReadWriteMany
  <span class="hljs-attr">resources</span>:
    requests:
      storage: <span class="hljs-number">2</span>Gi
---
kind: Pod
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">metadata</span>:
   name: nginx-pvc-pod
<span class="hljs-attr">spec</span>:
  volumes:
    - name: site-storage
      <span class="hljs-attr">persistentVolumeClaim</span>:
        claimName: nginx-pvc
  <span class="hljs-attr">containers</span>:
    - name: pv-container
      <span class="hljs-attr">image</span>: nginx
      <span class="hljs-attr">ports</span>:
        - containerPort: <span class="hljs-number">80</span>
          <span class="hljs-attr">name</span>: webserver
      <span class="hljs-attr">volumeMounts</span>:
        - mountPath: <span class="hljs-string">"/usr/share/nginx/html"</span>
          <span class="hljs-attr">name</span>: site-storage
</code></pre><pre><code>student@minikube:~$ kubectl create -f ckad/pvc-pod.yaml 
persistentvolumeclaim/nginx-pvc created
pod/nginx-pvc-pod created

student@minikube:~$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-b7327501-ff4c<span class="hljs-number">-4</span>f6d<span class="hljs-number">-9</span>c79-d10c6ce771e8   <span class="hljs-number">2</span>Gi        RWX            standard       <span class="hljs-number">13</span>s

student@minikube:~$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-b7327501-ff4c<span class="hljs-number">-4</span>f6d<span class="hljs-number">-9</span>c79-d10c6ce771e8   <span class="hljs-number">2</span>Gi        RWX            Delete           Bound    <span class="hljs-keyword">default</span>/nginx-pvc   standard                <span class="hljs-number">9</span>s

student@minikube:~$ kubectl describe pv pvc-b7
<span class="hljs-attr">Name</span>:            pvc-b7327501-ff4c<span class="hljs-number">-4</span>f6d<span class="hljs-number">-9</span>c79-d10c6ce771e8
<span class="hljs-attr">Labels</span>:          &lt;none&gt;
Annotations:     hostPathProvisionerIdentity: 3d1fa9ec-a297-4851-8782-97e7eb238447
                 pv.kubernetes.io/provisioned-by: k8s.io/minikube-hostpath
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Bound
Claim:           default/nginx-pvc
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        2Gi
Node Affinity:   &lt;none&gt;
Message:         
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/hostpath-provisioner/default/nginx-pvc
    HostPathType:  
Events:            &lt;none&gt;

student@minikube:~$ kubectl exec -it nginx-pvc-pod -- touch /usr/share/nginx/html/testfile
student@minikube:~$ minikube ssh
docker@minikube:~$ ls /tmp/hostpath-provisioner/default/nginx-pvc/
testfile
</code></pre><h2 id="heading-storageclass">StorageClass</h2>
<p>Kubernetes StorageClass allows for automatic provisioning of Persistent Volumes when a Persistent Volume Claim request comes in. This must be backed by a Storage Provisioner, which ultimately takes care of the volume configuration.<br />A StorageClass can also be used as a selector by matching the <code>storageClassName</code> field. Normally, PVC-to-PV binding is done on best match. </p>
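<p>A StorageClass is itself an API resource that names a provisioner. As a sketch, this resembles the <code>standard</code> class Minikube ships (its provisioner appeared earlier in the <code>provisioned-by</code> annotation; exact fields may differ per cluster):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: k8s.io/minikube-hostpath   # creates hostPath volumes on demand
reclaimPolicy: Delete
volumeBindingMode: Immediate
</code></pre>
<p>The manual example below instead sets <code>storageClassName: manual</code> on both the PV and the PVC, so they bind to each other rather than triggering dynamic provisioning.</p>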
<pre><code>---
apiVersion: v1
<span class="hljs-attr">kind</span>: PersistentVolume
<span class="hljs-attr">metadata</span>:
  name: task-pv-volume
  <span class="hljs-attr">labels</span>:
    type: local
<span class="hljs-attr">spec</span>:
  storageClassName: manual
  <span class="hljs-attr">capacity</span>:
    storage: <span class="hljs-number">2</span>Gi
  <span class="hljs-attr">accessModes</span>:
    - ReadWriteMany
  <span class="hljs-attr">hostPath</span>:
    path: <span class="hljs-string">"/mnt/data"</span>
---
apiVersion: v1
<span class="hljs-attr">kind</span>: PersistentVolumeClaim
<span class="hljs-attr">metadata</span>:
  name: task-pv-claim
<span class="hljs-attr">spec</span>:
  storageClassName: manual
  <span class="hljs-attr">accessModes</span>:
    - ReadWriteMany
  <span class="hljs-attr">resources</span>:
    requests:
      storage: <span class="hljs-number">2</span>Gi
---
apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: task-pv-pod
<span class="hljs-attr">spec</span>:
  volumes:
    - name: task-pv-storage
      <span class="hljs-attr">persistentVolumeClaim</span>:
        claimName: task-pv-claim
  <span class="hljs-attr">containers</span>:
    - name: task-pv-container
      <span class="hljs-attr">image</span>: httpd
      <span class="hljs-attr">ports</span>:
        - containerPort: <span class="hljs-number">80</span>
          <span class="hljs-attr">name</span>: <span class="hljs-string">"httpd-server"</span>
      <span class="hljs-attr">volumeMounts</span>:
        - mountPath: <span class="hljs-string">"/var/www/html"</span>
          <span class="hljs-attr">name</span>: task-pv-storage
</code></pre>]]></content:encoded></item><item><title><![CDATA[Kubernetes 101: Building Scalable Applications - Networking]]></title><description><![CDATA[
The Kubernetes network model dictates that:

Every Pod has its own IP address
Containers within a Pod share the Pod IP address and can communicate with each other using a loopback interface (localhost).
Pods can communicate with all other Pods i...]]></description><link>https://blog.joerismissaert.dev/kubernetes-101-building-scalable-applications-networking</link><guid isPermaLink="true">https://blog.joerismissaert.dev/kubernetes-101-building-scalable-applications-networking</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Mon, 06 Dec 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[
<p>The Kubernetes network model dictates that:</p>
<ul>
<li>Every Pod has its own IP address</li>
<li>Containers within a Pod share the Pod IP address and can communicate with each other using a loopback interface (<code>localhost</code>).</li>
<li>Pods can communicate with all other Pods <em>in the cluster</em> using the Pod IP addresses and <strong>without</strong> using NAT.</li>
<li>Isolation is defined by using network policies.</li>
</ul>
<p>Pod-to-Pod communication is the foundation of Kubernetes.
You can think of a Pod as you would a VM: the VM has a unique IP address, and the containers within the Pod are like processes running within that VM; they run in the same network namespace and share an IP address.</p>
<p>Basic network connectivity is built-in with <a target="_blank" href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet">kubenet</a> but can be extended by using third-party network implementations that plug into Kubernetes using the Container Network Interface API.</p>
<p>The Kubernetes networking model relies heavily on IP addresses. Services, Pods, containers, and nodes communicate using IP addresses and ports:</p>
<ul>
<li>ClusterIP: The IP address assigned to a Service. This address is stable for the lifetime of the Service.</li>
<li>Pod IP: The IP address assigned to a given Pod. This is ephemeral.</li>
<li>Node IP: The IP address assigned to a given node.</li>
</ul>
<h1 id="heading-services">Services</h1>
<p>A <a target="_blank" href="https://kubernetes.io/docs/concepts/services-networking/service/">Service</a> is an API resource that is used to expose a logical set of Pods, determined by a label selector, to the network by load balancing the traffic it forwards across them. <code>kube-controller-manager</code> will continuously scan for Pods that match the selector and include those in the Service. Adding or removing Pods immediately impacts the Service.</p>
<p>Services exist independently from the applications or Pods they provide access to, e.g. removing a Deployment will not remove a Service.  This means that one Service can provide access to Pods in multiple Deployments, Kubernetes will automatically load balance between these Pods.  </p>
<p><code>kube-proxy</code> on the nodes watches the Kubernetes API for new Services and endpoints (connected Pods). It opens random ports and listens for traffic to the Service port on the Cluster IP address, then redirects traffic to a Pod that is specified as an endpoint. It typically doesn't require any configuration.</p>
<p>There are different Service Types:</p>
<ul>
<li><code>ClusterIP</code>: The default type which exposes the Service on an internal cluster IP address.</li>
<li><code>NodePort</code>: Opens a specific port on the node that forwards to the Service cluster IP address.</li>
<li><code>LoadBalancer</code>: Used on public cloud, it will provision a load balancer in the cloud for the Service.</li>
<li><code>ExternalName</code>: Works with DNS names.</li>
</ul>
<p>We will focus on <code>ClusterIP</code> and <code>NodePort</code>.  </p>
<h2 id="heading-creating-services">Creating Services</h2>
<p><code>kubectl expose</code> can be used to create Services, providing access to Deployments, ReplicaSets, Pods or other resources. In most cases it exposes a Deployment, which in turn allocates its Pods as the Service endpoints. If you inspect the Service, you'll see it doesn't actually connect to the Deployment but to the Pods in the Deployment by using the selector label. The <code>--port</code> argument is required to specify the port that the Service should use. </p>
<p>There are different types of ports in Services:  </p>
<ul>
<li><code>port</code>: The port on which the Service is accessible.</li>
<li><code>targetPort</code>: The port on the application that the Service addresses. If not specified, it defaults to the same value as <code>port</code>. </li>
<li><code>nodePort</code>: The port that is exposed externally when using the NodePort Service type. Required for the NodePort Service type, but it is set automatically if omitted.</li>
</ul>
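<p>The three port fields come together in a Service manifest. A minimal NodePort sketch (names and values are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web            # Pods carrying this label become endpoints
  ports:
    - port: 80          # port the Service listens on (cluster IP)
      targetPort: 8080  # port the application container listens on
      nodePort: 32000   # port exposed externally on every node
</code></pre>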
<p>Let's expose a simple Nginx application.</p>
<pre><code>student@minikube:~$ kubectl create deploy nginx-app --image=nginx:latest --replicas=<span class="hljs-number">3</span>
deployment.apps/nginx-app created

student@minikube:~$ kubectl expose deploy nginx-app --port=<span class="hljs-number">80</span>
service/nginx-app exposed

student@minikube:~$ kubectl get service nginx-app
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
nginx-app   ClusterIP   <span class="hljs-number">10.97</span><span class="hljs-number">.84</span><span class="hljs-number">.163</span>   &lt;none&gt;        <span class="hljs-number">80</span>/TCP    <span class="hljs-number">8</span>s
</code></pre><p>We've created a service of the type <code>ClusterIP</code> which is available on the <em>internal</em> IP address <code>10.97.84.163</code>. The IP address is internal from the point of view of the Kubernetes cluster. Remember, we're not working inside the cluster:</p>
<pre><code>student@minikube:~$ docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED       STATUS       PORTS                                                                                                                                  NAMES
<span class="hljs-number">4680</span>f20c93ff   gcr.io/k8s-minikube/kicbase:v0<span class="hljs-number">.0</span><span class="hljs-number">.30</span>   <span class="hljs-string">"/usr/local/bin/entr…"</span>   <span class="hljs-number">2</span> hours ago   Up <span class="hljs-number">2</span> hours   <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>:<span class="hljs-number">49157</span>-&gt;<span class="hljs-number">22</span>/tcp, <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>:<span class="hljs-number">49156</span>-&gt;<span class="hljs-number">2376</span>/tcp, <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>:<span class="hljs-number">49155</span>-&gt;<span class="hljs-number">5000</span>/tcp, <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>:<span class="hljs-number">49154</span>-&gt;<span class="hljs-number">8443</span>/tcp, <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>:<span class="hljs-number">49153</span>-&gt;<span class="hljs-number">32443</span>/tcp   minikube
</code></pre><p>On our <code>minikube</code> machine, we have a <code>minikube</code> Docker container running which runs the Kubernetes cluster and node inside.
This means that we cannot reach the ClusterIP address from outside of Docker.
In order to achieve that, we need to open a port on our Kubernetes Node using the <code>NodePort</code> service type. Edit the service, change the <code>type</code> and add the <code>nodePort</code> value:</p>
<pre><code>student@minikube:~$ kubectl edit service nginx-app

# Please edit the object below. Lines beginning <span class="hljs-keyword">with</span> a <span class="hljs-string">'#'</span> will be ignored,
# and an empty file will abort the edit. If an error occurs <span class="hljs-keyword">while</span> saving <span class="hljs-built_in">this</span> file will be
# reopened <span class="hljs-keyword">with</span> the relevant failures.
#
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">kind</span>: Service
<span class="hljs-attr">metadata</span>:
  creationTimestamp: <span class="hljs-literal">null</span>
  <span class="hljs-attr">labels</span>:
    app: nginx-app
  <span class="hljs-attr">name</span>: nginx-app
  <span class="hljs-attr">namespace</span>: <span class="hljs-keyword">default</span>
  <span class="hljs-attr">resourceVersion</span>: <span class="hljs-string">"6160"</span>
  <span class="hljs-attr">uid</span>: <span class="hljs-number">8</span>d2e2744<span class="hljs-number">-328</span>d<span class="hljs-number">-4e1</span>f-b8f8<span class="hljs-number">-96404515</span>faae
<span class="hljs-attr">spec</span>:
  clusterIP: <span class="hljs-number">10.97</span><span class="hljs-number">.84</span><span class="hljs-number">.163</span>
  <span class="hljs-attr">clusterIPs</span>:
  - <span class="hljs-number">10.97</span><span class="hljs-number">.84</span><span class="hljs-number">.163</span>
  <span class="hljs-attr">externalTrafficPolicy</span>: Cluster
  <span class="hljs-attr">internalTrafficPolicy</span>: Cluster
  <span class="hljs-attr">ipFamilies</span>:
  - IPv4
  <span class="hljs-attr">ipFamilyPolicy</span>: SingleStack
  <span class="hljs-attr">ports</span>:
  - nodePort: <span class="hljs-number">32000</span>
    <span class="hljs-attr">port</span>: <span class="hljs-number">80</span>
    <span class="hljs-attr">protocol</span>: TCP
    <span class="hljs-attr">targetPort</span>: <span class="hljs-number">80</span>
  <span class="hljs-attr">selector</span>:
    app: nginx-app
  <span class="hljs-attr">sessionAffinity</span>: None
  <span class="hljs-attr">type</span>: NodePort
<span class="hljs-attr">status</span>:
  loadBalancer: {}
</code></pre><p>Save your changes.  </p>
<pre><code>student@minikube:~$ kubectl get service nginx-app
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-app   NodePort   <span class="hljs-number">10.97</span><span class="hljs-number">.84</span><span class="hljs-number">.163</span>   &lt;none&gt;        <span class="hljs-number">80</span>:<span class="hljs-number">32000</span>/TCP   <span class="hljs-number">4</span>m8s
</code></pre><p>We see that our Service Type has changed and the Service is running on port 80, accessible through NodePort 32000:</p>
<pre><code>student@minikube:~$ curl http:<span class="hljs-comment">//$(minikube ip):32000</span>
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
&lt;title&gt;Welcome to nginx!&lt;/title&gt;
</code></pre><p>The <code>minikube ip</code> command shows what IP address your Kubernetes node is using, in the above example I applied <a target="_blank" href="https://joerismissaert.dev/introduction-to-bash-shell-scripting/">command substitution</a>.</p>
<p><code>kubectl create service</code> can be used as an alternative way to create Services. When creating a NodePort Service type, the <code>port</code> and <code>targetPort</code> are specified as a key:value pair in the <code>--tcp</code> argument:</p>
<pre><code>kubectl create service nodeport nginx-app --tcp=<span class="hljs-number">80</span>:<span class="hljs-number">80</span>
</code></pre><p>As opposed to <code>kubectl expose deployment</code>, here we are not targeting a Deployment; but because I'm naming the NodePort Service <code>nginx-app</code>, the Service will look for all Pods that carry the label <code>app=nginx-app</code>, which is all the Pods in our <code>nginx-app</code> Deployment. </p>
<pre><code>student@minikube:~$ kubectl create service nodeport nginx-app --tcp=<span class="hljs-number">80</span>:<span class="hljs-number">80</span>
service/nginx-app created

student@minikube:~$ kubectl describe service nginx-app
<span class="hljs-attr">Name</span>:                     nginx-app
<span class="hljs-attr">Namespace</span>:                <span class="hljs-keyword">default</span>
<span class="hljs-attr">Labels</span>:                   app=nginx-app
<span class="hljs-attr">Annotations</span>:              <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">none</span>&gt;</span>
Selector:                 app=nginx-app</span>
</code></pre><h2 id="heading-using-service-resources-in-microservices">Using Service Resources in Microservices</h2>
<p>In a microservices architecture, different frontend and backend Pods are used to provide the application:</p>
<ul>
<li>Frontend Pods (e.g. webservers) can be exposed for external access using the NodePort Service type.</li>
<li>Backend Pods (e.g. databases) can be exposed internally only using the clusterIP Service type.</li>
</ul>
<p>An example would be a frontend Deployment with WordPress and a backend Deployment with MariaDB. You don't want to expose MariaDB to external traffic, only the frontend Pods should be able to communicate with the database. They can do so using the Cluster IP address, or even without IP address by using a headless ClusterIP Service type. We'll cover that later on.  </p>
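<p>As a sketch of the backend half of that setup (names are illustrative), a ClusterIP Service keeps MariaDB reachable only from inside the cluster:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  type: ClusterIP        # default type; internal access only
  selector:
    app: mariadb         # matches the backend Deployment's Pods
  ports:
    - port: 3306
      targetPort: 3306
</code></pre>
<p>Frontend Pods could then reach the database as <code>mariadb:3306</code>, using the internal DNS covered in the next section.</p>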
<h2 id="heading-services-and-dns">Services and DNS</h2>
<p>Exposed Services automatically register with the Kubernetes internal DNS. The internal DNS consists of the <code>kube-dns</code> Service and the <code>coreDNS</code> Pod.<br />This allows all Pods to address Services using the Service name:</p>
<pre><code>student@minikube:~$ kubectl get service,pods -n kube-system
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   <span class="hljs-number">10.96</span><span class="hljs-number">.0</span><span class="hljs-number">.10</span>   &lt;none&gt;        <span class="hljs-number">53</span>/UDP,<span class="hljs-number">53</span>/TCP,<span class="hljs-number">9153</span>/TCP   <span class="hljs-number">3</span>h39m

NAME                                   READY   STATUS    RESTARTS        AGE
pod/coredns<span class="hljs-number">-64897985</span>d<span class="hljs-number">-2</span>fwlb            <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>               <span class="hljs-number">3</span>h39m
</code></pre><p>Notice the Cluster IP address of the <code>kube-dns</code> service above.
Now, let's run a Pod and have a look at its DNS settings:</p>
<pre><code>student@minikube:~$ kubectl run testpod --image=busybox -- sleep <span class="hljs-number">3600</span>
pod/testpod created

student@minikube:~$ kubectl exec -it testpod -- cat /etc/resolv.conf 
nameserver <span class="hljs-number">10.96</span><span class="hljs-number">.0</span><span class="hljs-number">.10</span>
search <span class="hljs-keyword">default</span>.svc.cluster.local svc.cluster.local cluster.local
options ndots:<span class="hljs-number">5</span>
</code></pre><p>The <code>nameserver</code> is set to the Cluster IP address of the <code>kube-dns</code> service.
Lookups are also done in the <code>default.svc.cluster.local</code> domain, where <code>default</code> is the name of the namespace:</p>
<pre><code>student@minikube:~$ kubectl exec -it testpod -- nslookup nginx-app
<span class="hljs-attr">Server</span>:        <span class="hljs-number">10.96</span><span class="hljs-number">.0</span><span class="hljs-number">.10</span>
<span class="hljs-attr">Address</span>:    <span class="hljs-number">10.96</span><span class="hljs-number">.0</span><span class="hljs-number">.10</span>:<span class="hljs-number">53</span>

<span class="hljs-attr">Name</span>:    nginx-app.default.svc.cluster.local
<span class="hljs-attr">Address</span>: <span class="hljs-number">10.96</span><span class="hljs-number">.165</span><span class="hljs-number">.179</span>

student@minikube:~$ kubectl get service nginx-app
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-app   NodePort   <span class="hljs-number">10.96</span><span class="hljs-number">.165</span><span class="hljs-number">.179</span>   &lt;none&gt;        <span class="hljs-number">80</span>:<span class="hljs-number">32000</span>/TCP   <span class="hljs-number">8</span>m35s
</code></pre><h1 id="heading-ingress">Ingress</h1>
<p><a target="_blank" href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Ingress</a> is a Kubernetes API resource used to provide external access, using DNS, to internal Kubernetes cluster Services by means of an Ingress-managed external load balancer, also known as an <a target="_blank" href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/">Ingress Controller</a>. Creating an Ingress resource without an Ingress Controller has no effect; you need both. The Ingress Controller can be anything you're already familiar with: HAProxy, Nginx, Apache, Traefik, Kong, ...</p>
<p>To  summarize, Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. Ingress can be configured to do the following:</p>
<ul>
<li>Give Services externally-reachable URLs</li>
<li>Terminate SSL/TLS</li>
<li>Load balance traffic</li>
<li>Offer name based virtual hosting</li>
</ul>
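<p>A rule-based Ingress can also be written as a manifest. A minimal sketch, assuming an <code>nginx-app</code> Service listening on port 80 (the resource name is illustrative):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-app-ingress
spec:
  rules:
    - http:
        paths:
          - path: /               # route root traffic
            pathType: Prefix
            backend:
              service:
                name: nginx-app   # target Service
                port:
                  number: 80
</code></pre>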
<h2 id="heading-configuring-the-minikube-ingress-controller">Configuring the Minikube Ingress Controller</h2>
<p>Minikube provides an easy Ingress integration using a Minikube addon:</p>
<pre><code>student@minikube:~$ minikube addons list
|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | third-party (ambassador)       |
| auto-pause                  | minikube | disabled     | google                         |
| csi-hostpath-driver         | minikube | disabled     | kubernetes                     |
| dashboard                   | minikube | disabled     | kubernetes                     |
| <span class="hljs-keyword">default</span>-storageclass        | minikube | enabled ✅   | kubernetes                     |
| efk                         | minikube | disabled     | third-party (elastic)          |
| freshpod                    | minikube | disabled     | google                         |
| gcp-auth                    | minikube | disabled     | google                         |
| gvisor                      | minikube | disabled     | google                         |
| helm-tiller                 | minikube | disabled     | third-party (helm)             |
| ingress                     | minikube | disabled     | unknown (third-party)          |
| ingress-dns                 | minikube | disabled     | google                         |
| istio                       | minikube | disabled     | third-party (istio)            |
| istio-provisioner           | minikube | disabled     | third-party (istio)            |
| kong                        | minikube | disabled     | third-party (Kong HQ)          |
| kubevirt                    | minikube | disabled     | third-party (kubevirt)         |
| logviewer                   | minikube | disabled     | unknown (third-party)          |
| metallb                     | minikube | disabled     | third-party (metallb)          |
| metrics-server              | minikube | disabled     | kubernetes                     |
| nvidia-driver-installer     | minikube | disabled     | google                         |
| nvidia-gpu-device-plugin    | minikube | disabled     | third-party (nvidia)           |
| olm                         | minikube | disabled     | third-party (operator          |
|                             |          |              | framework)                     |
| pod-security-policy         | minikube | disabled     | unknown (third-party)          |
| portainer                   | minikube | disabled     | portainer.io                   |
| registry                    | minikube | disabled     | google                         |
| registry-aliases            | minikube | disabled     | unknown (third-party)          |
| registry-creds              | minikube | disabled     | third-party (upmc enterprises) |
| storage-provisioner         | minikube | enabled ✅   | google                         |
| storage-provisioner-gluster | minikube | disabled     | unknown (third-party)          |
| volumesnapshots             | minikube | disabled     | kubernetes                     |
|-----------------------------|----------|--------------|--------------------------------|

student@minikube:~$ minikube addons enable ingress
🌟  The <span class="hljs-string">'ingress'</span> addon is enabled

student@minikube:~$ kubectl get ns
NAME              STATUS   AGE
<span class="hljs-keyword">default</span>           Active   <span class="hljs-number">73</span>m
ingress-nginx     Active   <span class="hljs-number">88</span>s
kube-node-lease   Active   <span class="hljs-number">73</span>m
kube-public       Active   <span class="hljs-number">73</span>m
kube-system       Active   <span class="hljs-number">73</span>m

student@minikube:~$ kubectl get all -n ingress-nginx
NAME                                           READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-qz4sr       <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Completed   <span class="hljs-number">0</span>          <span class="hljs-number">115</span>s
pod/ingress-nginx-admission-patch-tbzsw        <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Completed   <span class="hljs-number">1</span>          <span class="hljs-number">115</span>s
pod/ingress-nginx-controller-cc8496874-nrsq6   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running     <span class="hljs-number">0</span>          <span class="hljs-number">115</span>s

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    <span class="hljs-number">10.101</span><span class="hljs-number">.67</span><span class="hljs-number">.244</span>   &lt;none&gt;        <span class="hljs-number">80</span>:<span class="hljs-number">30708</span>/TCP,<span class="hljs-number">443</span>:<span class="hljs-number">31969</span>/TCP   <span class="hljs-number">116</span>s
service/ingress-nginx-controller-admission   ClusterIP   <span class="hljs-number">10.106</span><span class="hljs-number">.3</span><span class="hljs-number">.163</span>    &lt;none&gt;        <span class="hljs-number">443</span>/TCP                      <span class="hljs-number">116</span>s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-number">116</span>s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-cc8496874   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">1</span>       <span class="hljs-number">116</span>s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>           <span class="hljs-number">12</span>s        <span class="hljs-number">116</span>s
job.batch/ingress-nginx-admission-patch    <span class="hljs-number">1</span>/<span class="hljs-number">1</span>           <span class="hljs-number">13</span>s        <span class="hljs-number">116</span>s
</code></pre><h2 id="heading-using-ingress">Using Ingress</h2>
<p>The example below continues to build on the <code>nginx-app</code> Deployment and Service.</p>
<pre><code>student@minikube:~$ kubectl create ingress nginx-app-ingress --rule=<span class="hljs-string">"/=nginx-app:80"</span> --rule=<span class="hljs-string">"/hello=newdeploy:8080"</span>
ingress.networking.k8s.io/nginx-app-ingress created
</code></pre><p>We create a new Ingress resource with the name <code>nginx-app-ingress</code>:</p>
<ul>
<li>The first rule routes traffic from the root <code>/</code> to our <code>nginx-app</code> Service on port 80.</li>
<li>The second rule routes traffic from the URI <code>/hello</code> to a non-existent <code>newdeploy</code> Service on port 8080.</li>
</ul>
<pre><code>student@minikube:~$ kubectl describe ingress nginx-app-ingress
<span class="hljs-attr">Name</span>:             nginx-app-ingress
<span class="hljs-attr">Labels</span>:           &lt;none&gt;
Namespace:        default
Address:          192.168.49.2
Default backend:  default-http-backend:80 (&lt;error: endpoints "default-http-backend" not found&gt;)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /        nginx-app:80 (172.17.0.4:80,172.17.0.5:80,172.17.0.6:80 + 2 more...)
              /hello   newdeploy:8080 (&lt;error: endpoints "newdeploy" not found&gt;)
Annotations:  &lt;none&gt;
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    3m10s (x2 over 3m11s)  nginx-ingress-controller  Scheduled for sync
</code></pre><p>Notice that the backends or Pods for <code>newdeploy</code> are not found.</p>
<p>Before proceeding, update the <code>/etc/hosts</code> file to associate a domain with the IP address of our minikube container (which runs our K8s cluster). You can find the IP by running the <code>minikube ip</code> command,
e.g. <code>192.168.49.2    nginx-app.demo</code></p>
<p>Next, let's test our Ingress resource:</p>
<pre><code>student@minikube:~$ kubectl get ingress
NAME                CLASS   HOSTS   ADDRESS        PORTS   AGE
nginx-app-ingress   nginx   *       <span class="hljs-number">192.168</span><span class="hljs-number">.49</span><span class="hljs-number">.2</span>   <span class="hljs-number">80</span>      <span class="hljs-number">12</span>m

student@minikube:~$ curl nginx-app.demo
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
&lt;title&gt;Welcome to nginx!&lt;/title&gt;

student@minikube:~$ curl nginx-app.demo/hello
&lt;html&gt;
&lt;head&gt;&lt;title&gt;503 Service Temporarily Unavailable&lt;/title&gt;&lt;/head&gt;
</code></pre><p>We should fix the <code>/hello</code> URI by creating the <code>newdeploy</code> Deployment and Service:</p>
<pre><code>student@minikube:~$ kubectl create deployment newdeploy --image=gcr.io/google-samples/hello-app:<span class="hljs-number">2.0</span>
deployment.apps/newdeploy created

student@minikube:~$ kubectl expose deployment newdeploy --port=<span class="hljs-number">8080</span>
service/newdeploy exposed

student@minikube:~$ curl nginx-app.demo/hello
Hello, world!
Version: <span class="hljs-number">2.0</span><span class="hljs-number">.0</span>
<span class="hljs-attr">Hostname</span>: newdeploy<span class="hljs-number">-698574</span>c958-kvnbc
</code></pre><h2 id="heading-configuring-ingress-rules">Configuring Ingress Rules</h2>
<p>In the previous example, we configured the <code>nginx-app-ingress</code> Ingress resource with the rules <code>--rule="/=nginx-app:80" --rule="/hello=newdeploy:8080"</code>.
Each Ingress rule contains the following:</p>
<ul>
<li>An optional host. If no host is specified, the rule applies to all inbound HTTP traffic.</li>
<li>A list of paths, each with its own backend. Paths can be expressed as regular expressions.</li>
<li>The backend, which consists of either a service or a resource. You can configure a default backend for incoming traffic that doesn't match any of the defined backends. A service backend refers to a Service, while a resource backend refers to cloud-based object storage. We'll focus on service backends.</li>
</ul>
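<p>As a sketch, the <code>nginx-app-ingress</code> created earlier corresponds roughly to a manifest like the following (note that <code>kubectl create ingress</code> generates <code>pathType: Exact</code> for paths that don't end in <code>*</code>):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-app-ingress
spec:
  rules:
  - http:              # no host: applies to all inbound HTTP traffic
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: nginx-app
            port:
              number: 80
      - path: /hello
        pathType: Exact
        backend:
          service:
            name: newdeploy
            port:
              number: 8080
</code></pre>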
<p>The Ingress <code>pathType</code> specifies how to deal with path requests:</p>
<ul>
<li>The <code>Exact</code> pathType indicates that an exact match should occur: If the path is set to <code>/foo</code> and the request is <code>/foo/</code>, there is no match.</li>
<li>The <code>Prefix</code> pathType indicates that the requested path should start with the specified prefix:<ul>
<li>If the path is set to <code>/</code>, any requested path will match.</li>
<li>If the path is set to <code>/foo</code>, then <code>/foo</code> as well as <code>/foo/</code> and <code>/foo/bar</code> will match.</li>
</ul>
</li>
</ul>
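<p>In a manifest, the <code>pathType</code> sits next to each path. A minimal fragment illustrating both types (the <code>foo</code> and <code>bar</code> Service names are hypothetical):</p>
<pre><code>spec:
  rules:
  - http:
      paths:
      - path: /foo          # matches /foo only; /foo/ does not match
        pathType: Exact
        backend:
          service:
            name: foo
            port:
              number: 80
      - path: /bar          # matches /bar, /bar/ and /bar/baz
        pathType: Prefix
        backend:
          service:
            name: bar
            port:
              number: 80
</code></pre>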
<p>There are different Ingress Types:</p>
<ul>
<li>Single Service: <code>kubectl create ingress ingress-name --rule="/hello=hello-service:80"</code></li>
<li>Simple fanout: <code>kubectl create ingress ingress-name --rule="/hello=hello-service:80" --rule="/goodbye=goodbye-service:80"</code></li>
<li>Name-based Virtual Hosting: <code>kubectl create ingress ingress-name --rule="my.example.com/hello*=hello-service:80" --rule="my.example.org/goodbye*=goodbye-service:80"</code></li>
</ul>
<p>Let's cover this in an example:</p>
<pre><code>student@minikube:~$ kubectl create deploy foo --image=nginx
deployment.apps/foo created

student@minikube:~$ kubectl create deploy bar --image=httpd
deployment.apps/bar created

student@minikube:~$ kubectl expose deploy foo --port=<span class="hljs-number">80</span>
service/foo exposed

student@minikube:~$ kubectl expose deploy bar --port=<span class="hljs-number">80</span>
service/bar exposed

student@minikube:~$ kubectl create ingress multihost --rule=<span class="hljs-string">"foo.example.com/=foo:80"</span> --rule=<span class="hljs-string">"bar.example.com/=bar:80"</span>
ingress.networking.k8s.io/multihost created
</code></pre><p>Create the necessary <code>/etc/hosts</code> entries for <code>foo.example.com</code> and <code>bar.example.com</code>.
Edit the <code>multihost</code> Ingress resource and set the <code>pathType</code> to <code>Prefix</code> for both backends:</p>
<pre><code>student@minikube:~$ kubectl edit ingress multihost
ingress.networking.k8s.io/multihost edited

student@minikube:~$ kubectl get ingress multihost
NAME        CLASS   HOSTS                             ADDRESS        PORTS   AGE
multihost   nginx   foo.example.com,bar.example.com   <span class="hljs-number">192.168</span><span class="hljs-number">.49</span><span class="hljs-number">.2</span>   <span class="hljs-number">80</span>      <span class="hljs-number">75</span>s

student@minikube:~$ curl foo.example.com
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
&lt;title&gt;Welcome to nginx!&lt;/title&gt;

student@minikube:~$ curl foo.example.com/lololol
&lt;html&gt;
&lt;head&gt;&lt;title&gt;404 Not Found&lt;/title&gt;&lt;/head&gt;

student@minikube:~$ curl bar.example.com
&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt;
student@minikube:~$ curl bar.example.com/lololol
&lt;!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"&gt;
&lt;html&gt;&lt;head&gt;
&lt;title&gt;404 Not Found&lt;/title&gt;
</code></pre><h2 id="heading-network-policies">Network Policies</h2>
<p>By default, there are no restrictions on network traffic in Kubernetes: Pods can always communicate with each other, even if they're in other Namespaces.
We can limit this with Network Policies; however, this must be supported by the network plugin. Remember that by default Kubernetes only offers basic network connectivity, which can be expanded with third-party plugins.  </p>
<p>If you don't use a Network Policy, all traffic is allowed. If a Network Policy is in place and there's <em>no</em> match, traffic is denied.
Minikube doesn't automatically start with a network plugin, so let's restart minikube and configure it to use the <a target="_blank" href="https://www.tigera.io/blog/calico-networking-for-kubernetes/">Calico</a> network plugin:</p>
<pre><code>student@minikube:~$ minikube stop
✋  Stopping node <span class="hljs-string">"minikube"</span>  ...
🛑  Powering off <span class="hljs-string">"minikube"</span> via SSH ...
🛑  <span class="hljs-number">1</span> node stopped.

student@minikube:~$ minikube <span class="hljs-keyword">delete</span>
🔥  Deleting <span class="hljs-string">"minikube"</span> <span class="hljs-keyword">in</span> docker ...
🔥  Deleting container <span class="hljs-string">"minikube"</span> ...
🔥  Removing /home/student/.minikube/machines/minikube ...
💀  Removed all traces <span class="hljs-keyword">of</span> the <span class="hljs-string">"minikube"</span> cluster.

student@minikube:~$ minikube start --cni=calico
😄  minikube v1<span class="hljs-number">.25</span><span class="hljs-number">.2</span> on Ubuntu <span class="hljs-number">18.04</span> (amd64)
✨  Automatically selected the docker driver. Other choices: ssh, none
👍  Starting control plane node minikube <span class="hljs-keyword">in</span> cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=<span class="hljs-number">2</span>, Memory=<span class="hljs-number">2200</span>MB) ...
🐳  Preparing Kubernetes v1<span class="hljs-number">.23</span><span class="hljs-number">.3</span> on Docker <span class="hljs-number">20.10</span><span class="hljs-number">.12</span> ...
    ▪ kubelet.housekeeping-interval=<span class="hljs-number">5</span>m
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring Calico (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, <span class="hljs-keyword">default</span>-storageclass
💡  kubectl not found. If you need it, <span class="hljs-attr">try</span>: <span class="hljs-string">'minikube kubectl -- get pods -A'</span>
🏄  Done! kubectl is now configured to use <span class="hljs-string">"minikube"</span> cluster and <span class="hljs-string">"default"</span> namespace by <span class="hljs-keyword">default</span>

student@minikube:~$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers<span class="hljs-number">-8594699699</span>-r4rwl   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>             <span class="hljs-number">2</span>m7s
calico-node<span class="hljs-number">-8</span>qzhj                          <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>             <span class="hljs-number">2</span>m7s
</code></pre><p>As with other Kubernetes resources, when defining a Pod- or Namespace-based NetworkPolicy, a selector label is used to specify what traffic is allowed to and from the Pods that match the selector.</p>
<p>Three different NetworkPolicy Identifiers can be used to match network traffic:</p>
<ul>
<li><code>podSelector</code>: Allows access to a Pod with the corresponding selector label.</li>
<li><code>namespaceSelector</code>: Allows incoming traffic from namespaces with the matching selector label.</li>
<li><code>ipBlock</code>: Not to be confused with the verb <em>to block</em> - specifies a range of IP addresses that is allowed access.</li>
</ul>
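<p>To illustrate the three identifiers side by side, a hypothetical policy could combine them like this (the labels and CIDR are made up; peers listed under the same <code>from</code> are OR'ed, so traffic matching any one of them is allowed):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-selected
spec:
  podSelector:              # the Pods this policy applies to
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:          # Pods carrying this label
        matchLabels:
          access: "true"
    - namespaceSelector:    # all Pods in Namespaces carrying this label
        matchLabels:
          team: frontend
    - ipBlock:              # clients from this IP range
        cidr: 172.17.0.0/16
</code></pre>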
<p>Here's an example NetworkPolicy:</p>
<pre><code>apiVersion: networking.k8s.io/v1
<span class="hljs-attr">kind</span>: NetworkPolicy
<span class="hljs-attr">metadata</span>:
  name: access-nginx
<span class="hljs-attr">spec</span>:
  podSelector:
    matchLabels:
      app: nginx
  <span class="hljs-attr">ingress</span>:
  - <span class="hljs-keyword">from</span>:
    - podSelector:
        matchLabels:
          access: <span class="hljs-string">"true"</span>
...

---
apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: nginx
  <span class="hljs-attr">labels</span>: 
    app: nginx
<span class="hljs-attr">spec</span>:
  containers:
  - name: nwp-nginx
    <span class="hljs-attr">image</span>: nginx:<span class="hljs-number">1.17</span>
...

---
apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: busybox
  <span class="hljs-attr">labels</span>:
    app: sleepy
<span class="hljs-attr">spec</span>:
  containers:
  - name: nwp-busybox
    <span class="hljs-attr">image</span>: busybox
    <span class="hljs-attr">command</span>:
    - sleep
    - <span class="hljs-string">"3600"</span>
</code></pre><p>The above NetworkPolicy can be understood as follows:</p>
<ul>
<li>Apply the Network Policy to Pods that have the label <code>app: nginx</code></li>
<li>Allow incoming traffic from Pods that have the label <code>access: "true"</code></li>
</ul>
<p>In other words, our <code>nginx</code> Pod will only accept traffic from Pods that have the <code>access: "true"</code> label set:</p>
<pre><code>student@minikube:~$ kubectl create -f ckad/nwpolicy-complete-example.yaml 
networkpolicy.networking.k8s.io/access-nginx created
pod/nginx created
pod/busybox created

student@minikube:~$ kubectl get networkpolicy
NAME           POD-SELECTOR   AGE
access-nginx   app=nginx      <span class="hljs-number">2</span>m59s

student@minikube:~$ kubectl describe networkpolicy
<span class="hljs-attr">Name</span>:         access-nginx
<span class="hljs-attr">Namespace</span>:    <span class="hljs-keyword">default</span>
Created on:   <span class="hljs-number">2021</span><span class="hljs-number">-12</span><span class="hljs-number">-01</span> <span class="hljs-number">18</span>:<span class="hljs-number">12</span>:<span class="hljs-number">12</span> +<span class="hljs-number">0000</span> UTC
<span class="hljs-attr">Labels</span>:       &lt;none&gt;
Annotations:  &lt;none&gt;
Spec:
  PodSelector:     app=nginx
  Allowing ingress traffic:
    To Port: &lt;any&gt; (traffic allowed to all ports)
    From:
      PodSelector: access=true
  Not affecting egress traffic
  Policy Types: Ingress

student@minikube:~$ kubectl expose pod nginx --port=80
service/nginx exposed

student@minikube:~$ kubectl exec -it busybox -- wget --spider --timeout=1 nginx
Connecting to nginx (10.108.90.255:80)
wget: download timed out
command terminated with exit code 1

student@minikube:~$ kubectl label pod busybox access=true
pod/busybox labeled

student@minikube:~$ kubectl exec -it busybox -- wget --spider --timeout=1 nginx
Connecting to nginx (10.108.90.255:80)
remote file exists
student@minikube:~$
</code></pre>]]></content:encoded></item><item><title><![CDATA[Kubernetes 101: Building Scalable Applications - Deployments]]></title><description><![CDATA[{{}}
Deployments
Deployments are the standard for running applications in Kubernetes, it protects Pods and will automatically restart them if anything goes wrong. Additionally, it offer features that add to the scalability and reliability of the appl...]]></description><link>https://blog.joerismissaert.dev/kubernetes-101-building-scalable-applications-deployments</link><guid isPermaLink="true">https://blog.joerismissaert.dev/kubernetes-101-building-scalable-applications-deployments</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Wed, 03 Nov 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{</p>}}<p></p>
<h1 id="heading-deployments">Deployments</h1>
<p><a target="_blank" href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Deployments</a> are the standard for running applications in Kubernetes: a Deployment protects its Pods and automatically restarts them if anything goes wrong. Additionally, it offers features that add to the scalability and reliability of the application:</p>
<ul>
<li>Scalability: Scaling the number of application instances to meet the demand.</li>
<li>Updates and Update Strategy: Zero-downtime application updates.</li>
</ul>
<p>We use the <code>kubectl create deploy</code> command to create a Deployment:</p>
<pre><code>student@minikube:~$ kubectl create deployment myweb --image=nginx --replicas=<span class="hljs-number">3</span>
deployment.apps/myweb created

student@minikube:~$ kubectl describe deploy myweb
<span class="hljs-attr">Name</span>:                   myweb
<span class="hljs-attr">Namespace</span>:              <span class="hljs-keyword">default</span>
<span class="hljs-attr">CreationTimestamp</span>:      Mon, <span class="hljs-number">01</span> Nov <span class="hljs-number">2021</span> <span class="hljs-number">09</span>:<span class="hljs-number">08</span>:<span class="hljs-number">57</span> +<span class="hljs-number">0000</span>
<span class="hljs-attr">Labels</span>:                 app=myweb
<span class="hljs-attr">Annotations</span>:            deployment.kubernetes.io/revision: <span class="hljs-number">1</span>
<span class="hljs-attr">Selector</span>:               app=myweb
<span class="hljs-attr">Replicas</span>:               <span class="hljs-number">3</span> desired | <span class="hljs-number">3</span> updated | <span class="hljs-number">3</span> total | <span class="hljs-number">3</span> available | <span class="hljs-number">0</span> unavailable
<span class="hljs-attr">StrategyType</span>:           RollingUpdate
<span class="hljs-attr">MinReadySeconds</span>:        <span class="hljs-number">0</span>
<span class="hljs-attr">RollingUpdateStrategy</span>:  <span class="hljs-number">25</span>% max unavailable, <span class="hljs-number">25</span>% max surge
Pod Template:
  Labels:  app=myweb
  <span class="hljs-attr">Containers</span>:
   nginx:
    Image:        nginx
    <span class="hljs-attr">Port</span>:         &lt;none&gt;
    Host Port:    &lt;none&gt;
    Environment:  &lt;none&gt;
    Mounts:       &lt;none&gt;
  Volumes:        &lt;none&gt;
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  &lt;none&gt;
NewReplicaSet:   myweb-8764bf4c8 (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  2m29s  deployment-controller  Scaled up replica set myweb-8764bf4c8 to 3


student@minikube:~$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/myweb-8764bf4c8-6gxv8   1/1     Running   0          4m23s
pod/myweb-8764bf4c8-6mvn8   1/1     Running   0          4m23s
pod/myweb-8764bf4c8-q72nq   1/1     Running   0          4m23s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    &lt;none&gt;        443/TCP   36m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myweb   3/3     3            3           4m23s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/myweb-8764bf4c8   3         3         3       4m23s
</code></pre><p>We created the <code>myweb</code> deployment based on the <code>nginx</code> image with 3 replicas or desired Pods. Notice the <code>Labels</code> and <code>Selector</code> fields.<br />The Deployment created the <a target="_blank" href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> to ensure that a specified number of Pods are always running at any given time, and it created the Pods. Both the ReplicaSet and the Pods are managed by the Deployment.  </p>
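<p>For reference, the imperative command above produces an object roughly equivalent to this declarative manifest (a sketch; <code>kubectl create deployment</code> derives the container name from the image):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
  labels:
    app: myweb
spec:
  replicas: 3             # desired number of Pods
  selector:
    matchLabels:
      app: myweb          # must match the Pod template labels
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: nginx
        image: nginx
</code></pre>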
<p>You cannot manage Pods independently when they are part of a Deployment.
When trying to delete a Pod, the Deployment kicks in and uses the ReplicaSet to make sure we have 3 running Pods:</p>
<pre><code>student@minikube:~$ kubectl <span class="hljs-keyword">delete</span> pod myweb<span class="hljs-number">-8764</span>bf4c8<span class="hljs-number">-6</span>gxv8
pod <span class="hljs-string">"myweb-8764bf4c8-6gxv8"</span> deleted

student@minikube:~$ kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
myweb<span class="hljs-number">-8764</span>bf4c8<span class="hljs-number">-6</span>mvn8   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running             <span class="hljs-number">0</span>          <span class="hljs-number">14</span>m
myweb<span class="hljs-number">-8764</span>bf4c8-q72nq   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running             <span class="hljs-number">0</span>          <span class="hljs-number">14</span>m
myweb<span class="hljs-number">-8764</span>bf4c8-qf2vc   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">5</span>s
</code></pre><h3 id="heading-deployment-scalability">Deployment Scalability</h3>
<p>Before <code>Deployments</code> existed, <code>ReplicaSets</code> were used to manage scalability. In the previous section we saw that our Deployment created the necessary <code>ReplicaSet</code>. Manage <code>ReplicaSets</code> only through Deployments; there's no need to manage them individually.</p>
<p>We can use the <code>kubectl scale deployment</code> command to manually scale an existing deployment:<br /><code>kubectl scale deployment my-deployment --replicas=5</code></p>
<pre><code>student@minikube:~$ kubectl scale deployment myweb --replicas=<span class="hljs-number">5</span>
deployment.apps/myweb scaled

student@minikube:~$ kubectl describe deploy myweb | grep -i replicas
<span class="hljs-attr">Replicas</span>:               <span class="hljs-number">5</span> desired | <span class="hljs-number">5</span> updated | <span class="hljs-number">5</span> total | <span class="hljs-number">5</span> available | <span class="hljs-number">0</span> unavailable

student@minikube:~$ kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
myweb<span class="hljs-number">-8764</span>bf4c8<span class="hljs-number">-44</span>zxq   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">3</span>s
myweb<span class="hljs-number">-8764</span>bf4c8<span class="hljs-number">-6</span>mvn8   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running             <span class="hljs-number">0</span>          <span class="hljs-number">36</span>m
myweb<span class="hljs-number">-8764</span>bf4c8<span class="hljs-number">-7</span>dpnx   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">3</span>s
myweb<span class="hljs-number">-8764</span>bf4c8-q72nq   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running             <span class="hljs-number">0</span>          <span class="hljs-number">36</span>m
myweb<span class="hljs-number">-8764</span>bf4c8-qf2vc   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running             <span class="hljs-number">0</span>          <span class="hljs-number">22</span>m
</code></pre><p>Additionally, there's the <code>kubectl edit deployment</code> command, which opens a text editor for you, similar to <code>systemctl edit</code> for editing Systemd Unit files. This command, however, does not allow you to modify every setting of a Deployment.<br />In the example below, I changed the Deployment's namespace and replicas:</p>
<pre><code>student@minikube:~$ kubectl edit deploy myweb
A copy <span class="hljs-keyword">of</span> your changes has been stored to <span class="hljs-string">"/tmp/kubectl-edit-3283969971.yaml"</span>
<span class="hljs-attr">error</span>: the namespace <span class="hljs-keyword">from</span> the provided object <span class="hljs-string">"secret"</span> does not match the namespace <span class="hljs-string">"default"</span>. You must pass <span class="hljs-string">'--namespace=secret'</span> to perform <span class="hljs-built_in">this</span> operation.
</code></pre><p>As you can see, Kubernetes isn't happy about changing the namespace.</p>
<h3 id="heading-deployment-updates">Deployment Updates</h3>
<p>Deployments allow for zero-downtime application updates.<br />When an update is applied, a new ReplicaSet is created with the new properties, and Pods with the new properties are started in it. After updating, the old ReplicaSet is no longer used and may be deleted, or you can keep it around for rolling back. The <code>deployment.spec.revisionHistoryLimit</code> property defaults to keeping the last 10 ReplicaSets.</p>
<p>The <code>deployment.spec.strategy.type</code> property defines how to handle updates:</p>
<ul>
<li><code>RollingUpdate</code>: The default value. Replaces old Pods with new Pods in such a way to ensure the application remains available to users.</li>
<li><code>Recreate</code>: Kill all existing Pods before creating new ones. The application will be down.
More on this later...</li>
</ul>
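<p>In a Deployment manifest, these settings live under <code>spec</code>. A fragment showing the defaults discussed above:</p>
<pre><code>spec:
  revisionHistoryLimit: 10   # old ReplicaSets kept for roll-backs (default)
  strategy:
    type: RollingUpdate      # or: Recreate
    rollingUpdate:
      maxUnavailable: 25%    # Pods that may be down during the update
      maxSurge: 25%          # extra Pods allowed above the desired count
</code></pre>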
<p>Let's perform a rolling update of Nginx using the <code>kubectl set</code> command.</p>
<pre><code>
student@minikube:~$ kubectl create deploy mynginx --image=nginx:<span class="hljs-number">1.14</span>
deployment.apps/mynginx created

student@minikube:~$ kubectl describe deploy mynginx
<span class="hljs-attr">Name</span>:                   mynginx
<span class="hljs-attr">Namespace</span>:              <span class="hljs-keyword">default</span>
<span class="hljs-attr">CreationTimestamp</span>:      Sat, <span class="hljs-number">02</span> Apr <span class="hljs-number">2022</span> <span class="hljs-number">10</span>:<span class="hljs-number">29</span>:<span class="hljs-number">52</span> +<span class="hljs-number">0000</span>
<span class="hljs-attr">Labels</span>:                 app=mynginx
<span class="hljs-attr">Annotations</span>:            deployment.kubernetes.io/revision: <span class="hljs-number">1</span>
<span class="hljs-attr">Selector</span>:               app=mynginx
<span class="hljs-attr">Replicas</span>:               <span class="hljs-number">1</span> desired | <span class="hljs-number">1</span> updated | <span class="hljs-number">1</span> total | <span class="hljs-number">0</span> available | <span class="hljs-number">1</span> unavailable
<span class="hljs-attr">StrategyType</span>:           RollingUpdate
<span class="hljs-attr">MinReadySeconds</span>:        <span class="hljs-number">0</span>
<span class="hljs-attr">RollingUpdateStrategy</span>:  <span class="hljs-number">25</span>% max unavailable, <span class="hljs-number">25</span>% max surge
Pod Template:
  Labels:  app=mynginx
  <span class="hljs-attr">Containers</span>:
   nginx:
    Image:        nginx:<span class="hljs-number">1.14</span>

student@minikube:~$ kubectl get all --selector app=mynginx
NAME                           READY   STATUS    RESTARTS   AGE
pod/mynginx<span class="hljs-number">-6</span>b9d85f696-w4wpt   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">64</span>s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mynginx   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-number">64</span>s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/mynginx<span class="hljs-number">-6</span>b9d85f696   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">1</span>       <span class="hljs-number">64</span>s
</code></pre><p>Notice the <code>Image</code> field in the output of the <code>kubectl describe</code> command, the default <code>StrategyType</code>, as well as how the middle part of the Pod name matches the suffix of the <code>ReplicaSet</code> name: <code>pod/mynginx-6b9d85f696-w4wpt</code> =&gt; <code>replicaset.apps/mynginx-6b9d85f696</code>. We can conclude that this Pod belongs to that ReplicaSet.</p>
<p>Now, update the image version to <code>1.17</code>.
The <code>kubectl set</code> command only accepts a limited number of arguments:</p>
<pre><code>student@minikube:~$ kubectl set 
env             image           resources       selector        serviceaccount  subject 

student@minikube:~$ kubectl set image deploy mynginx nginx=nginx:<span class="hljs-number">1.17</span>
deployment.apps/mynginx image updated

student@minikube:~$ kubectl get all --selector app=mynginx
NAME                           READY   STATUS              RESTARTS   AGE
pod/mynginx<span class="hljs-number">-6</span>b9d85f696-w4wpt   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running             <span class="hljs-number">0</span>          <span class="hljs-number">7</span>m4s
pod/mynginx<span class="hljs-number">-6</span>d9cd8f877-g4dkv   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">8</span>s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mynginx   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-number">7</span>m4s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/mynginx<span class="hljs-number">-6</span>b9d85f696   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">1</span>       <span class="hljs-number">7</span>m4s
replicaset.apps/mynginx<span class="hljs-number">-6</span>d9cd8f877   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">0</span>       <span class="hljs-number">9</span>s
</code></pre><p>We see that our old ReplicaSet and Pod are still there and our application is still available, while a new Pod with the new Nginx image is being created.
Once the new Pod is running, the old Pod is deleted, but the old (empty) ReplicaSet remains:</p>
<pre><code>student@minikube:~$ kubectl get all --selector app=mynginx
NAME                           READY   STATUS    RESTARTS   AGE
pod/mynginx<span class="hljs-number">-6</span>d9cd8f877-g4dkv   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">2</span>m4s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mynginx   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-number">9</span>m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/mynginx<span class="hljs-number">-6</span>b9d85f696   <span class="hljs-number">0</span>         <span class="hljs-number">0</span>         <span class="hljs-number">0</span>       <span class="hljs-number">9</span>m
replicaset.apps/mynginx<span class="hljs-number">-6</span>d9cd8f877   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">1</span>       <span class="hljs-number">2</span>m5s
</code></pre><p>The rolling update is complete, and the old ReplicaSet is kept around in case we need to roll back (covered later in this article).</p>
<h3 id="heading-labels-selectors-and-annotations">Labels, Selectors, and Annotations</h3>
<p><a target="_blank" href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/">Labels</a> are key/value pairs defined on resources like Pods, Deployments, and Services. They are either set automatically or manually by an administrator. Label keys must be unique within a single object, but different objects can share the same key/value pairs. This allows us to group objects, map a specific structure onto them, and query only the objects carrying a specific label.  </p>
<p>If we look back at our previous deployment, we can see that each object in the deployment has the <code>app=mynginx</code> label set:</p>
<pre><code>student@minikube:~$ kubectl describe pod mynginx<span class="hljs-number">-6</span>d9cd8f877-g4dkv | grep Labels:
Labels:       app=mynginx

student@minikube:~$ kubectl describe rs mynginx | grep Labels:
Labels:         app=mynginx

student@minikube:~$ kubectl describe deploy mynginx | grep Labels:
Labels:                 app=mynginx
</code></pre><p>So using a <em>label selector</em>, we can target the related objects of a specific application.<br />e.g.:</p>
<pre><code>student@minikube:~$ kubectl get all --selector app=mynginx
NAME                           READY   STATUS    RESTARTS   AGE
pod/mynginx<span class="hljs-number">-6</span>d9cd8f877-g4dkv   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">29</span>m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mynginx   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-number">36</span>m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/mynginx<span class="hljs-number">-6</span>b9d85f696   <span class="hljs-number">0</span>         <span class="hljs-number">0</span>         <span class="hljs-number">0</span>       <span class="hljs-number">36</span>m
replicaset.apps/mynginx<span class="hljs-number">-6</span>d9cd8f877   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">1</span>       <span class="hljs-number">29</span>m
</code></pre><p>Our <code>kubectl create deployment</code> command automatically set the <code>app=appname</code> label, where <code>appname</code> is the name of the deployment.</p>
<p>Example:</p>
<pre><code>student@minikube:~$ kubectl create deploy mylabel --image=nginx
deployment.apps/mylabel created

student@minikube:~$ kubectl label deploy mylabel state=demo
deployment.apps/mylabel labeled

student@minikube:~$ kubectl get deploy --show-labels
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
mylabel   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-number">45</span>s   app=mylabel,state=demo

student@minikube:~$ kubectl get deploy --selector state=demo
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
mylabel   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-number">70</span>s
</code></pre><p>Notice that while we've given the deployment <code>mylabel</code> a new label, this new label is <strong>not</strong> inherited by the resources or objects created by the deployment:</p>
<pre><code>student@minikube:~$ kubectl get all --show-labels
NAME                           READY   STATUS    RESTARTS   AGE     LABELS
pod/mylabel<span class="hljs-number">-566</span>dc5f574-ctkqg   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">7</span>m28s   app=mylabel,pod-template-hash=<span class="hljs-number">566</span>dc5f574

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE    LABELS
service/kubernetes   ClusterIP   <span class="hljs-number">10.96</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>    &lt;none&gt;        <span class="hljs-number">443</span>/TCP   <span class="hljs-number">168</span>m   component=apiserver,provider=kubernetes

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE     LABELS
deployment.apps/mylabel   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-number">7</span>m28s   app=mylabel,state=demo

NAME                                 DESIRED   CURRENT   READY   AGE     LABELS
replicaset.apps/mylabel<span class="hljs-number">-566</span>dc5f574   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">1</span>       <span class="hljs-number">7</span>m28s   app=mylabel,pod-template-hash=<span class="hljs-number">566</span>dc5f574

student@minikube:~$ kubectl get all --selector state=demo
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mylabel   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-number">8</span>m58s
</code></pre><p>We can also remove a label. Let's remove the label with the key <code>app</code> from the Pod <code>mylabel-566dc5f574-ctkqg</code>:</p>
<pre><code>student@minikube:~$ kubectl label pod mylabel<span class="hljs-number">-566</span>dc5f574-ctkqg app-
pod/mylabel<span class="hljs-number">-566</span>dc5f574-ctkqg unlabeled

student@minikube:~$ kubectl get all
NAME                           READY   STATUS              RESTARTS   AGE
pod/mylabel<span class="hljs-number">-566</span>dc5f574-ctkqg   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running             <span class="hljs-number">0</span>          <span class="hljs-number">12</span>m
pod/mylabel<span class="hljs-number">-566</span>dc5f574-pxkdz   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">5</span>s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   <span class="hljs-number">10.96</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>    &lt;none&gt;        <span class="hljs-number">443</span>/TCP   <span class="hljs-number">174</span>m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mylabel   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">0</span>           <span class="hljs-number">12</span>m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/mylabel<span class="hljs-number">-566</span>dc5f574   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">0</span>       <span class="hljs-number">12</span>m

student@minikube:~$ kubectl get all --selector app=mylabel
NAME                           READY   STATUS    RESTARTS   AGE
pod/mylabel<span class="hljs-number">-566</span>dc5f574-pxkdz   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">3</span>m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mylabel   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-number">15</span>m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/mylabel<span class="hljs-number">-566</span>dc5f574   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">1</span>       <span class="hljs-number">15</span>m
</code></pre><p>Our deployment could no longer find the Pod which is supposed to have the <code>app=mylabel</code> label, so it created a new Pod: <code>mylabel-566dc5f574-pxkdz</code>.<br />Since the Pod with the removed label is no longer managed by our deployment, we can delete it without our deployment (or rather ReplicaSet) recreating it.</p>
<p><a target="_blank" href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/">Annotations</a> can't be used in selector queries, but are useful for attaching detailed, non-identifying metadata to an object: maintainer, author, license, ...</p>
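<p>As a minimal sketch, annotations can be declared in a manifest's <code>metadata</code> section (the keys and values below are just illustrative):</p>
<pre><code>metadata:
  name: mylabel
  annotations:
    maintainer: "Joeri"
    license: "MIT"
</code></pre>
<p>Imperatively, <code>kubectl annotate deploy mylabel maintainer="Joeri"</code> sets the same annotation, and <code>kubectl annotate deploy mylabel maintainer-</code> removes it again (the trailing <code>-</code> works just like it does for labels).</p>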
<h3 id="heading-update-strategy">Update Strategy</h3>
<p>When a Deployment changes, the Pods are immediately updated according to the <a target="_blank" href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy">Update Strategy</a>:</p>
<ul>
<li><code>RollingUpdate</code>: Gradually replaces old Pods with new ones to guarantee availability of the application. This is the default. </li>
<li><code>Recreate</code>: All Pods are killed and new Pods are created. This leads to temporary unavailability of the application, which can be useful when different versions of an application cannot run simultaneously (e.g. a database). </li>
</ul>
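<p>The strategy is set in the Deployment spec. A minimal sketch of a Deployment using the <code>Recreate</code> strategy (the <code>mydb</code> name and mysql image are just an example):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  strategy:
    type: Recreate   # kill all old Pods before creating new ones
  template:
    metadata:
      labels:
        app: mydb
    spec:
      containers:
      - name: mysql
        image: mysql
</code></pre>
<p>Note that when <code>type: Recreate</code> is used, the <code>rollingUpdate</code> options below don't apply.</p>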
<p>The task of the Deployment is to ensure that enough Pods are running at all times. When a change is made, the changed version is deployed in a new ReplicaSet, and the old ReplicaSet is scaled to 0 (deactivated) once the update is confirmed successful. We can use <code>kubectl rollout history</code> to get details about recent rollouts, and <code>kubectl rollout undo</code> to revert a previous change.</p>
<p>The <code>RollingUpdate</code> options control how many Pods are replaced at a time, guaranteeing that a minimum number of Pods is always available:</p>
<ul>
<li><code>maxUnavailable</code>: The maximum number (or percentage) of Pods that may be unavailable during the update.</li>
<li><code>maxSurge</code>: The maximum number (or percentage) of Pods that can run beyond the desired number of Pods specified in the ReplicaSet.</li>
</ul>
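<p>To make the numbers concrete: assume a Deployment with <code>replicas: 4</code> and the default 25% for both options. During an update, at most 5 Pods may exist at the same time (surge values are rounded up) and at least 3 Pods must remain available (unavailable values are rounded down):</p>
<pre><code>spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # ceil(4 * 0.25) = 1 extra Pod allowed, so max 5 Pods
      maxUnavailable: 25%  # floor(4 * 0.25) = 1 Pod may be down, so min 3 Pods
</code></pre>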
<pre><code>student@minikube:~$ kubectl get deploy mylabel -o yaml
...
spec:
  progressDeadlineSeconds: <span class="hljs-number">600</span>
  <span class="hljs-attr">replicas</span>: <span class="hljs-number">1</span>
  <span class="hljs-attr">revisionHistoryLimit</span>: <span class="hljs-number">10</span>
  <span class="hljs-attr">selector</span>:
    matchLabels:
      app: mylabel
  <span class="hljs-attr">strategy</span>:
    rollingUpdate:
      maxSurge: <span class="hljs-number">25</span>%
      maxUnavailable: <span class="hljs-number">25</span>%
    type: RollingUpdate
...
</code></pre><h3 id="heading-deployment-history">Deployment History</h3>
<p>At this point we know that a Deployment update creates a new ReplicaSet with the new properties; the old ReplicaSet is kept but scaled down to 0 Pods. Since the old ReplicaSet is kept around, we can easily undo a change: <code>kubectl rollout history</code> shows details about recent rollouts, and <code>kubectl rollout undo</code> reverts a previous change.</p>
<p>Let's start by updating our <code>mylabel</code> deployment. We'll give all the Pods a new environment variable: <code>foo=bar</code>:</p>
<pre><code>student@minikube:~$ kubectl set env deploy mylabel foo=bar
deployment.apps/mylabel env updated

student@minikube:~$ kubectl rollout history deploy mylabel
deployment.apps/mylabel 
REVISION  CHANGE-CAUSE
<span class="hljs-number">1</span>         &lt;none&gt;
<span class="hljs-number">2</span>         &lt;none&gt;

student@minikube:~$ kubectl rollout history deploy mylabel --revision=<span class="hljs-number">1</span>
deployment.apps/mylabel <span class="hljs-keyword">with</span> revision #<span class="hljs-number">1</span>
Pod Template:
  Labels:    app=mylabel
    pod-template-hash=<span class="hljs-number">566</span>dc5f574
  <span class="hljs-attr">Containers</span>:
   nginx:
    Image:    nginx
    <span class="hljs-attr">Port</span>:    &lt;none&gt;
    Host Port:    &lt;none&gt;
    Environment:    &lt;none&gt;
    Mounts:    &lt;none&gt;
  Volumes:    &lt;none&gt;


student@minikube:~$ kubectl rollout history deploy mylabel --revision=2
deployment.apps/mylabel with revision #2
Pod Template:
  Labels:    app=mylabel
    pod-template-hash=57f55bcb47
  Containers:
   nginx:
    Image:    nginx
    Port:    &lt;none&gt;
    Host Port:    &lt;none&gt;
    Environment:
      foo:    bar
    Mounts:    &lt;none&gt;
  Volumes:    &lt;none&gt;
</code></pre><p>We can see that we added the environment variable in revision 2.
So let's roll back to revision 1:</p>
<pre><code>student@minikube:~$ kubectl rollout undo deploy mylabel --to-revision=<span class="hljs-number">1</span>
deployment.apps/mylabel rolled back
</code></pre><h3 id="heading-deployment-alternatives">Deployment Alternatives</h3>
<p>There are two alternatives to Deployments:</p>
<ul>
<li><code>StatefulSets</code>: the workload API object used to manage stateful applications. We'll cover these once we know more about Networking and Storage.</li>
<li><code>DaemonSet</code>: ensures that all (or some) Nodes run a copy of a Pod (1 Pod, no replicas). As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.</li>
</ul>
<p>A simple use case for a <a target="_blank" href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/">DaemonSet</a> is the need to run some sort of agent on every worker node.</p>
<p>The YAML for a DaemonSet needs to be written from scratch; you can't use <code>kubectl create</code> to generate it :(
Example YAML code:</p>
<pre><code>apiVersion: apps/v1
<span class="hljs-attr">kind</span>: DaemonSet
<span class="hljs-attr">metadata</span>:
  name: nginxdaemon
  <span class="hljs-attr">namespace</span>: <span class="hljs-keyword">default</span>
  <span class="hljs-attr">labels</span>:
    k8s-app: nginxdaemon
<span class="hljs-attr">spec</span>:
  selector:
    matchLabels:
      name: nginxdaemon
  <span class="hljs-attr">template</span>:
    metadata:
      labels:
        name: nginxdaemon
    <span class="hljs-attr">spec</span>:
      containers:
      - name: nginx
        <span class="hljs-attr">image</span>: nginx
</code></pre><pre><code>student@minikube:~$ kubectl create -f daemon.yaml 
daemonset.apps/nginxdaemon created

student@minikube:~$ kubectl get ds,pods
NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/nginxdaemon   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">1</span>       <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           &lt;none&gt;          <span class="hljs-number">13</span>s

NAME                    READY   STATUS    RESTARTS   AGE
pod/nginxdaemon<span class="hljs-number">-5n</span>n27   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">13</span>s
</code></pre>]]></content:encoded></item><item><title><![CDATA[Kubernetes 101: Kubernetes Essentials]]></title><description><![CDATA[{{}}
Managing Basic Pod Features

Deployments are the standard for running applications in Kubernetes. For the sake of getting familiar with Kubernetes and understanding the essentials, we'll be creating and running native Pods.

Understanding Pods
A...]]></description><link>https://blog.joerismissaert.dev/kubernetes-101-kubernetes-essentials</link><guid isPermaLink="true">https://blog.joerismissaert.dev/kubernetes-101-kubernetes-essentials</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Sat, 02 Oct 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{</p>}}<p></p>
<h1 id="heading-managing-basic-pod-features">Managing Basic Pod Features</h1>
<blockquote>
<p>Deployments are the standard for running applications in Kubernetes. For the sake of getting familiar with Kubernetes and understanding the essentials, we'll be creating and running native Pods.</p>
</blockquote>
<h2 id="heading-understanding-pods">Understanding Pods</h2>
<p>A Pod is an abstraction of a server which can run multiple containers within a single namespace, exposed by a single IP address.
The Pod is the smallest entity that can be created and managed by Kubernetes: Kubernetes does not manage containers, it manages Pods.</p>
<h3 id="heading-managing-pods-with-kubectl">Managing Pods with kubectl</h3>
<p>Typically, Pods are started through a Deployment resource.<br />Naked Pods are started using the <code>kubectl run</code> command: <code>kubectl run mynginx --image=nginx</code><br />Naked Pods cannot be scaled, are not rescheduled in case of failure, cannot be replaced automatically, and can't have rolling updates.</p>
<ul>
<li><code>kubectl run -h</code>: Show all options for creating a Pod.</li>
<li><code>kubectl run mynginx --image=nginx</code>: Start a Pod with the name mynginx from the nginx Dockerhub image.</li>
<li><code>kubectl get pods</code>: Show the parameters of all Pods.</li>
<li><code>kubectl get pods mynginx</code>: Show the parameters of a specific Pod.</li>
<li><code>kubectl get pods mynginx -o yaml</code>: Show the output in YAML format.</li>
<li><code>kubectl describe pods</code>: Show all details about all Pods.</li>
<li><code>kubectl describe pods mynginx</code>: Show all details about a specific Pod.</li>
</ul>
<h2 id="heading-yaml">YAML</h2>
<p><a target="_blank" href="https://yaml.org/">YAML</a> is a human-readable data-serialization language which uses indentation to identify relations.</p>
<h3 id="heading-basic-yaml-manifest-ingredients">Basic YAML Manifest Ingredients</h3>
<p>All of the YAML manifest ingredients are defined in the API. You can use <code>kubectl explain</code> to get more information about the YAML fields or properties:</p>
<pre><code>student@minikube:~$ kubectl explain pods
<span class="hljs-attr">KIND</span>:     Pod
<span class="hljs-attr">VERSION</span>:  v1

<span class="hljs-attr">DESCRIPTION</span>:
     Pod is a collection <span class="hljs-keyword">of</span> containers that can run on a host. This resource is
     created by clients and scheduled onto hosts.

FIELDS:
   apiVersion    &lt;string&gt;
     APIVersion defines the versioned schema <span class="hljs-keyword">of</span> <span class="hljs-built_in">this</span> representation <span class="hljs-keyword">of</span> an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https:<span class="hljs-comment">//git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources</span>

   kind    &lt;string&gt;
     Kind is a string value representing the REST resource <span class="hljs-built_in">this</span> object
     represents. Servers may infer <span class="hljs-built_in">this</span> <span class="hljs-keyword">from</span> the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https:<span class="hljs-comment">//git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds</span>

   metadata    &lt;<span class="hljs-built_in">Object</span>&gt;
     Standard object<span class="hljs-string">'s metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec    &lt;Object&gt;
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status    &lt;Object&gt;
     Most recently observed status of the pod. This data may not be up to date.
     Populated by the system. Read-only. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

student@minikube:~$ kubectl explain pods.spec
...
student@minikube:~$ kubectl explain pods.spec.containers
...</span>
</code></pre><p>The <code>kubectl explain pods.spec.containers</code> command shows us that the container spec has multiple fields of which the below are the most important ones:</p>
<pre><code>FIELDS:

  name &lt;string&gt; -required-
    Name <span class="hljs-keyword">of</span> the container specified <span class="hljs-keyword">as</span> a DNS_LABEL.

  image &lt;string&gt;
    Docker image name.  

  command &lt;[]string&gt;
    Entrypoint array. Not executed within a shell. The docker image<span class="hljs-string">'s
    ENTRYPOINT is used if this is not provided.

  args &lt;[]string&gt;
    Arguments to the entrypoint. The docker image'</span>s CMD is used <span class="hljs-keyword">if</span> <span class="hljs-built_in">this</span> is not
    provided

  env  &lt;[]<span class="hljs-built_in">Object</span>&gt;
    List <span class="hljs-keyword">of</span> environment variables to set <span class="hljs-keyword">in</span> the container. Cannot be updated.
</code></pre><p>If you have a YAML file with a Pod spec you can create a Pod from it:</p>
<pre><code>student@minikube:~$ cat busybox.yaml 
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: busybox2
  <span class="hljs-attr">namespace</span>: <span class="hljs-keyword">default</span>
<span class="hljs-attr">spec</span>:
  containers:
  - name: busy
    <span class="hljs-attr">image</span>: busybox
    <span class="hljs-attr">command</span>:
      - sleep
      - <span class="hljs-string">"3600"</span> 
student@minikube:~$ kubectl create -f busybox.yaml 
pod/busybox2 created

student@minikube:~$ kubectl get pods
NAME       READY   STATUS              RESTARTS   AGE
busybox2   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">7</span>s
</code></pre><p>Similarly you can delete or update (apply Spec changes) the Pod using the same YAML file:</p>
<pre><code>student@minikube:~$ kubectl <span class="hljs-keyword">delete</span> -f busybox.yaml 
pod <span class="hljs-string">"busybox2"</span> deleted

student@minikube:~$ kubectl apply -f busybox.yaml 
pod/busybox2 created

student@minikube:~$ kubectl apply -f busybox.yaml 
pod/busybox2 unchanged
</code></pre><h2 id="heading-generating-yaml-files">Generating YAML files</h2>
<p>By using YAML files, we work with Kubernetes in a declarative way: the files are typically stored in a Git repository, which fits well into a DevOps strategy. The imperative way of working with Kubernetes is to create everything from the command line. </p>
<p>We can write YAML files from scratch, but it's better to <em>generate</em> them and modify the result to suit our specific needs:<br /><code>kubectl run mynginx --image=nginx --dry-run=client -o yaml &gt; mynginx.yaml</code><br />The <code>--dry-run=client</code> option prevents Kubernetes from actually running the Pod. </p>
<h2 id="heading-understanding-and-configuring-multi-container-pods">Understanding and Configuring Multi-Container Pods</h2>
<p>The one-container Pod is the standard; it is easier to build and maintain. To create applications that consist of multiple containers, microservices should typically be used: different, independently managed Pods connected by resources provided by Kubernetes.</p>
<p>There are some use cases where you might want to run multiple containers in a single Pod:</p>
<ul>
<li>Sidecar container: A container that enhances the primary application, for example logging.</li>
<li>Ambassador container: A container that represents the primary container to the outside world, for example a proxy.</li>
<li>Adapter container: Adapts traffic or data patterns to match the traffic or data patterns of other applications in the cluster.</li>
</ul>
<p>These containers are not defined by specific Pod properties; you won't find information on their specs in <code>kubectl explain pod.spec</code>.</p>
<h3 id="heading-sidecar-containers">Sidecar Containers</h3>
<p>A sidecar container provides additional functionality to the main container where it makes no sense to run this functionality in a separate Pod. The essence is that the main container and the sidecar container have access to shared resources in order to exchange information.<br />e.g. the <a target="_blank" href="https://istio.io/latest/about/service-mesh/">Istio service mesh</a> injects sidecar containers into Pods to enable traffic management.</p>
<p>Here's a basic example of a multi-container Pod:</p>
<pre><code>student@minikube:~$ cat sidecar.yaml 
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">metadata</span>:
  name: sidecar-pod
<span class="hljs-attr">spec</span>:
  volumes:
  - name: logs
    <span class="hljs-attr">emptyDir</span>: {}

  <span class="hljs-attr">containers</span>:
  - name: main
    <span class="hljs-attr">image</span>: busybox
    <span class="hljs-attr">command</span>: [<span class="hljs-string">"/bin/sh"</span>]
    <span class="hljs-attr">args</span>: [<span class="hljs-string">"-c"</span>, <span class="hljs-string">"while true; do date &gt;&gt; /var/log/date.txt; sleep
10;done"</span>]
    <span class="hljs-attr">volumeMounts</span>:
    - name: logs
      <span class="hljs-attr">mountPath</span>: <span class="hljs-regexp">/var/</span>log

  - name: sidecar
    <span class="hljs-attr">image</span>: centos/httpd
    <span class="hljs-attr">ports</span>:
    - containerPort: <span class="hljs-number">80</span>
    <span class="hljs-attr">volumeMounts</span>:
    - name: logs
      <span class="hljs-attr">mountPath</span>: <span class="hljs-regexp">/var/</span>www/html
</code></pre><p>The shared resource in the above example is the volume with the name <code>logs</code> and the <code>emptyDir: {}</code> property.
An <a target="_blank" href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir">emptyDir</a> volume is initially empty and can be mounted at different paths in different containers as we can see in the above YAML by looking at the container <code>volumeMounts</code>.</p>
<p>The <code>main</code> container writes the current date and time to <code>/var/log/date.txt</code> every 10 seconds, while the <code>sidecar</code> container will be able to read and present the file to a user since it has the same volume mounted albeit on a different path from the container perspective.</p>
<p>Let's create the Pod, open a shell session in the <code>sidecar</code> container and run <code>cURL</code> to check the output created by the <code>main</code> container:</p>
<pre><code>student@minikube:~$ kubectl create -f sidecar.yaml 
pod/sidecar-pod created

student@minikube:~$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
sidecar-pod   <span class="hljs-number">2</span>/<span class="hljs-number">2</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">10</span>s

student@minikube:~$ kubectl exec -it sidecar-pod -c sidecar -- <span class="hljs-regexp">/bin/</span>bash
[root@sidecar-pod /]# yum install curl -y
....
[root@sidecar-pod /]# curl http:<span class="hljs-comment">//localhost/date.txt</span>
....
</code></pre><h2 id="heading-managing-init-containers">Managing Init Containers</h2>
<p>An <a target="_blank" href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/">init container</a> is an additional container in a Pod that needs to complete a task before the "regular" container is started. As long as the init container hasn't completed its job, the regular container is not started.</p>
<p>Have a look at this official <a target="_blank" href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use">example YAML file</a> for init containers.
We'll work with a simpler version here:</p>
<pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: init-demo
<span class="hljs-attr">spec</span>:
  containers:
  - name: nginx
    <span class="hljs-attr">image</span>: nginx
  <span class="hljs-attr">initContainers</span>:
  - name: init-box
    <span class="hljs-attr">image</span>: busybox
    <span class="hljs-attr">command</span>:
    - sleep
    - <span class="hljs-string">"3600"</span>
</code></pre><p>In the above example, our <code>init-box</code> container will sleep for 1 hour, and only once the sleep command finishes will our <code>nginx</code> container spin up.
We can see our Pod is in the Init status:</p>
<pre><code>student@minikube:~$ kubectl create -f init-demo.yaml 
pod/init-demo created

student@minikube:~$ kubectl get pods
NAME        READY   STATUS     RESTARTS   AGE
init-demo   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Init:<span class="hljs-number">0</span>/<span class="hljs-number">1</span>   <span class="hljs-number">0</span>          <span class="hljs-number">4</span>s
</code></pre><p>We can use the describe command to get more information about the Pod:</p>
<pre><code>student@minikube:~$ kubectl describe pod init-demo
...
Init Containers:
  init-box:
    Container ID:  docker:<span class="hljs-comment">//c23ec32c3ba19d43417c730117b6319b0c57d6c8938c76ae641b1afad0e08c11</span>
    Image:         busybox
    Image ID:      docker-pullable:<span class="hljs-comment">//busybox@sha256:caa382c432891547782ce7140fb3b7304613d3b0438834dce1cad68896ab110a</span>
    Port:          &lt;none&gt;
    Host Port:     &lt;none&gt;
    Command:
      sleep
      3600
    State:          Running
...
Containers:
  nginx:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           &lt;none&gt;
    Host Port:      &lt;none&gt;
    State:          Waiting
      Reason:       PodInitializing
...
</code></pre><p>The Events section in the output of the <code>describe</code> command shows us what containers have been started:</p>
<pre><code>Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  <span class="hljs-number">78</span>s   <span class="hljs-keyword">default</span>-scheduler  Successfully assigned <span class="hljs-keyword">default</span>/init-demo to minikube
  Normal  Pulling    <span class="hljs-number">77</span>s   kubelet            Pulling image <span class="hljs-string">"busybox"</span>
  Normal  Pulled     <span class="hljs-number">65</span>s   kubelet            Successfully pulled image <span class="hljs-string">"busybox"</span> <span class="hljs-keyword">in</span> <span class="hljs-number">11.886228471</span>s
  Normal  Created    <span class="hljs-number">65</span>s   kubelet            Created container init-box
  Normal  Started    <span class="hljs-number">64</span>s   kubelet            Started container init-box
</code></pre><h2 id="heading-using-namespaces">Using NameSpaces</h2>
<p>Kubernetes leverages Linux kernel-level resource isolation: <a target="_blank" href="https://en.wikipedia.org/wiki/Linux_namespaces">NameSpaces</a>. Different NameSpaces can be used to strictly separate customer resources and to apply different security-related settings such as Role-Based Access Control and Quotas.</p>
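<p>As a convenience, you can change the namespace <code>kubectl</code> uses by default instead of passing <code>-n</code> on every command. A minimal sketch (the <code>secret</code> namespace is just an example):</p>
<pre><code># Make subsequent kubectl commands default to the "secret" namespace
kubectl config set-context --current --namespace=secret

# Verify the active namespace of the current context
kubectl config view --minify | grep namespace
</code></pre>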
<p>Let's demonstrate this the <strong><em>imperative</em></strong> way:</p>
<pre><code># Show all available namespaces
student@minikube:~$ kubectl get ns
NAME              STATUS   AGE
<span class="hljs-keyword">default</span>           Active   <span class="hljs-number">14</span>d
kube-node-lease   Active   <span class="hljs-number">14</span>d
kube-public       Active   <span class="hljs-number">14</span>d
kube-system       Active   <span class="hljs-number">14</span>d

# Show all resources per namespace
student@minikube:~$ kubectl get all -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS       AGE
kube-system   pod/coredns<span class="hljs-number">-64897985</span>d-sj5lw            <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">3</span> (<span class="hljs-number">108</span>s ago)   <span class="hljs-number">14</span>d
...

#  Create a <span class="hljs-keyword">new</span> namespace
student@minikube:~$ kubectl create ns secret
namespace/secret created

# Start a <span class="hljs-keyword">new</span> Pod <span class="hljs-keyword">in</span> the <span class="hljs-keyword">new</span> namespace
student@minikube:~$ kubectl run secretnginx --image=nginx -n secret
pod/secretnginx created

student@minikube:~$ kubectl get pods
No resources found <span class="hljs-keyword">in</span> <span class="hljs-keyword">default</span> namespace.

# List all Pods <span class="hljs-keyword">in</span> the secret namespace
student@minikube:~$ kubectl get pods -n secret
NAME          READY   STATUS              RESTARTS   AGE
secretnginx   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">10</span>s
</code></pre><p>We can do the same thing the <strong><em>declarative</em></strong> way by defining <code>namespace</code> under the Pod <code>metadata</code>:</p>
<pre><code>apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: busybox-pod
  <span class="hljs-attr">namespace</span>: secret
</code></pre><p>Check properties of the namespace using the <code>describe</code> command:</p>
<pre><code>student@minikube:~$ kubectl describe ns secret
<span class="hljs-attr">Name</span>:         secret
<span class="hljs-attr">Labels</span>:       kubernetes.io/metadata.name=secret
<span class="hljs-attr">Annotations</span>:  <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">none</span>&gt;</span>
Status:       Active

No resource quota.

No LimitRange resource.</span>
</code></pre><p>Lastly, let's <strong><em>declaratively</em></strong> combine the creation of a namespace and a pod inside the same namespace:</p>
<pre><code>student@minikube:~$ kubectl create ns production --dry-run=client -o yaml &gt; nginx_prod.yml
student@minikube:~$ cat nginx_prod.yml 
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">kind</span>: Namespace
<span class="hljs-attr">metadata</span>:
  creationTimestamp: <span class="hljs-literal">null</span>
  <span class="hljs-attr">name</span>: production
<span class="hljs-attr">spec</span>: {}
<span class="hljs-attr">status</span>: {}
</code></pre><p>Notice that <code>kind</code> is <code>Namespace</code>.<br />Now, we add the Pod to the same namespace inside the same Yaml file:</p>
<pre><code>student@minikube:~$ kubectl run nginx-prod -n production --image=nginx --dry-run=client -o yaml &gt;&gt; nginx_prod.yml 
student@minikube:~$ cat nginx_prod.yml 
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">kind</span>: Namespace
<span class="hljs-attr">metadata</span>:
  creationTimestamp: <span class="hljs-literal">null</span>
  <span class="hljs-attr">name</span>: production
<span class="hljs-attr">spec</span>: {}
<span class="hljs-attr">status</span>: {}
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  creationTimestamp: <span class="hljs-literal">null</span>
  <span class="hljs-attr">labels</span>:
    run: nginx-prod
  <span class="hljs-attr">name</span>: nginx-prod
  <span class="hljs-attr">namespace</span>: production
<span class="hljs-attr">spec</span>:
  containers:
  - image: nginx
    <span class="hljs-attr">name</span>: nginx-prod
    <span class="hljs-attr">resources</span>: {}
  <span class="hljs-attr">dnsPolicy</span>: ClusterFirst
  <span class="hljs-attr">restartPolicy</span>: Always
<span class="hljs-attr">status</span>: {}
</code></pre><p>We should modify the Yaml file in such a way that it's clear we're dealing with 2 separate Yaml documents in a single file. We'll add <code>---</code> lines to indicate the start of each document:</p>
<pre><code>student@minikube:~$ cat nginx_prod.yml 
---
apiVersion: v1
<span class="hljs-attr">kind</span>: Namespace
<span class="hljs-attr">metadata</span>:
  creationTimestamp: <span class="hljs-literal">null</span>
  <span class="hljs-attr">name</span>: production
<span class="hljs-attr">spec</span>: {}
<span class="hljs-attr">status</span>: {}
---
apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  creationTimestamp: <span class="hljs-literal">null</span>
  <span class="hljs-attr">labels</span>:
    run: nginx-prod
  <span class="hljs-attr">name</span>: nginx-prod
  <span class="hljs-attr">namespace</span>: production
<span class="hljs-attr">spec</span>:
  containers:
  - image: nginx
    <span class="hljs-attr">name</span>: nginx-prod
    <span class="hljs-attr">resources</span>: {}
  <span class="hljs-attr">dnsPolicy</span>: ClusterFirst
  <span class="hljs-attr">restartPolicy</span>: Always
<span class="hljs-attr">status</span>: {}
</code></pre><p>... and we can now create the actual resources from the Yaml file:</p>
<pre><code>student@minikube:~$ kubectl create -f nginx_prod.yml 
namespace/production created
pod/nginx-prod created

student@minikube:~$ kubectl get all -n production
NAME             READY   STATUS              RESTARTS   AGE
pod/nginx-prod   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">15</span>s
student@minikube:~$
</code></pre><h1 id="heading-managing-advanced-pod-features">Managing Advanced Pod Features</h1>
<h2 id="heading-exploring-pod-state">Exploring Pod State</h2>
<p><code>kubectl describe pod podname</code> is a human-readable way to see all Pod parameters and settings as currently stored in the etcd database. You can use the <a target="_blank" href="https://kubernetes.io/docs">official documentation</a> for more information about these settings and parameters.</p>
<p>While we can <code>describe</code> the Pod externally, we can also connect to the Pod and run commands on the primary container in the Pod:  </p>
<ul>
<li>Connect using <code>kubectl exec -it podname -- sh</code></li>
<li>Disconnect by executing the <code>exit</code> command (or CTRL+P CTRL+Q if the shell is running as process ID 1).</li>
</ul>
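<p>Instead of opening an interactive shell, <code>kubectl exec</code> can also run a single command and return its output directly; a small sketch (the Pod and container names are assumptions):</p>
<pre><code># Run one command in the Pod's primary container without opening a shell
kubectl exec mynginx -- nginx -v

# Target a specific container in a multi-container Pod with -c
kubectl exec mynginx -c nginx -- cat /etc/nginx/nginx.conf
</code></pre>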
<pre><code>student@minikube:~$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mynginx   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">44</span>s

student@minikube:~$ kubectl get pods mynginx -o json | less
...
student@minikube:~$ kubectl get pods mynginx -o yaml | less
...
student@minikube:~$ kubectl describe pods mynginx
...
student@minikube:~$ kubectl exec -it mynginx -- sh
# pwd
/
# ps aux
<span class="hljs-attr">sh</span>: <span class="hljs-number">1</span>: ps: not found
# cd /proc
# ls
<span class="hljs-number">1</span>   acpi       cmdline     diskstats    filesystems  irq          kmsg       locks    mounts      sched_debug  softirqs       sysvipc       version
<span class="hljs-number">34</span>  asound     consoles  dma          fs       kallsyms   kpagecgroup  mdstat   mtrr      schedstat    stat          thread-self  version_signature
<span class="hljs-number">35</span>  buddyinfo  cpuinfo     driver       interrupts   kcore      kpagecount   meminfo  net          scsi           swaps          timer_list   vmallocinfo
<span class="hljs-number">53</span>  bus        crypto     execdomains  iomem       key-users  kpageflags   misc     pagetypeinfo  self           sys          tty       vmstat
<span class="hljs-number">60</span>  cgroups    devices     fb          ioports       keys       loadavg       modules  partitions      slabinfo     sysrq-trigger  uptime       zoneinfo
# cat <span class="hljs-number">1</span>/cmdline
<span class="hljs-attr">nginx</span>: master process nginx -g daemon off;
# cat <span class="hljs-number">53</span>/cmdline
sh
# cat <span class="hljs-number">35</span>/cmdline
<span class="hljs-attr">nginx</span>: worker process
# exit
student@minikube:~$
</code></pre><p>Most containers run minimal images where not all commands may be available; in the above example the <code>ps</code> command is missing. In such cases we can take advantage of the <code>/proc</code> pseudo-filesystem.</p>
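<p>On recent Kubernetes versions, if the image ships no shell at all, <code>kubectl debug</code> can attach an ephemeral troubleshooting container to a running Pod; a hedged sketch (the Pod and target container names are assumptions):</p>
<pre><code># Attach a busybox ephemeral container to the mynginx Pod;
# --target shares the process namespace so we can inspect nginx processes
kubectl debug -it mynginx --image=busybox --target=mynginx
</code></pre>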
<h2 id="heading-using-pod-logs">Using Pod Logs</h2>
<p>The Pod's entrypoint application is not attached to a terminal; its STDOUT and STDERR output is captured by the container runtime and made available through the cluster. We can use <code>kubectl logs</code> to view this output, which helps with troubleshooting:</p>
<pre><code>student@minikube:~$ kubectl run mydb --image=mariadb
pod/mydb created

student@minikube:~$ kubectl get pods
NAME      READY   STATUS              RESTARTS   AGE
mydb      <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">5</span>s
...
student@minikube:~$ kubectl get pods
NAME      READY   STATUS             RESTARTS      AGE
mydb      <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     CrashLoopBackOff   <span class="hljs-number">1</span> (<span class="hljs-number">15</span>s ago)   <span class="hljs-number">76</span>s

student@minikube:~$ kubectl describe pod mydb
...
    State:          Waiting
      <span class="hljs-attr">Reason</span>:       CrashLoopBackOff
    Last State:     Terminated
      <span class="hljs-attr">Reason</span>:       <span class="hljs-built_in">Error</span>
      Exit Code:    <span class="hljs-number">1</span>
...

student@minikube:~$ kubectl logs mydb
[ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
    You need to specify one <span class="hljs-keyword">of</span> MARIADB_ROOT_PASSWORD, MARIADB_ALLOW_EMPTY_ROOT_PASSWORD and MARIADB_RANDOM_ROOT_PASSWORD
</code></pre><p>Looking at the log output, we need to specify one of the listed environment variables. Let's fix this; since we can't update a running Pod (only Deployments, which we'll see later), we need to delete our Pod first:</p>
<pre><code>student@minikube:~$ kubectl <span class="hljs-keyword">delete</span> pod mydb
pod <span class="hljs-string">"mydb"</span> deleted

student@minikube:~$ kubectl run mydb --image=mariadb --env=<span class="hljs-string">"MARIADB_ROOT_PASSWORD=myrootpassword"</span>
pod/mydb created

student@minikube:~$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mydb      <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">40</span>s

student@minikube:~$ kubectl logs mydb
[Note] mariadbd: ready <span class="hljs-keyword">for</span> connections.
Version: <span class="hljs-string">'10.7.3-MariaDB-1:10.7.3+maria~focal'</span>  socket: <span class="hljs-string">'/run/mysqld/mysqld.sock'</span>  port: <span class="hljs-number">3306</span>  mariadb.org binary distribution
</code></pre><h2 id="heading-port-forwarding">Port Forwarding</h2>
<p>A simple way of accessing a Pod is Port Forwarding: expose a port on the machine where you run <code>kubectl</code> that forwards to a port on the Pod. This is useful for testing Pod accessibility but isn't used to expose the Pod to external users. Regular user access to applications in the Pod is provided via Services and Ingress.</p>
<p>When you run <code>kubectl get pods -o wide</code> or <code>kubectl describe pod podname</code> you'll see the Pod has an IP address. This IP address is accessible only from within the cluster; you cannot use it to address the Pod from outside the cluster.</p>
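<p>If you only need the Pod IP, a <code>jsonpath</code> output expression avoids parsing the wide table; a small sketch (Pod name assumed):</p>
<pre><code># Print just the Pod's cluster-internal IP address
kubectl get pod mynginx -o jsonpath='{.status.podIP}'
</code></pre>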
<pre><code>student@minikube:~$ kubectl get pods mynginx -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
mynginx   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">63</span>m   <span class="hljs-number">172.17</span><span class="hljs-number">.0</span><span class="hljs-number">.3</span>   minikube   &lt;none&gt;           <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">none</span>&gt;</span>

student@minikube:~$ ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
^C
--- 172.17.0.3 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2053ms

student@minikube:~$ curl 172.17.0.3
curl: (7) Failed to connect to 172.17.0.3 port 80: No route to host</span>
</code></pre><p>So if we need to test network accessibility to our Pod, we use Port Forwarding:</p>
<pre><code>student@minikube:~$ kubectl port-forward mynginx <span class="hljs-number">8080</span>:<span class="hljs-number">80</span> &amp;
[<span class="hljs-number">1</span>] <span class="hljs-number">19855</span>
student@minikube:~$ Forwarding <span class="hljs-keyword">from</span> <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>:<span class="hljs-number">8080</span> -&gt; <span class="hljs-number">80</span>
Forwarding <span class="hljs-keyword">from</span> [::<span class="hljs-number">1</span>]:<span class="hljs-number">8080</span> -&gt; <span class="hljs-number">80</span>
student@minikube:~$
</code></pre><p>This command starts a port forwarding process in the foreground, so we add the <code>&amp;</code> at the end of the command to start it in the background.</p>
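<p>By default <code>kubectl port-forward</code> binds to localhost only. If you need to reach the forwarded port from another machine, the <code>--address</code> flag can bind it to other interfaces; a sketch:</p>
<pre><code># Listen on all interfaces instead of 127.0.0.1 (use with care)
kubectl port-forward --address 0.0.0.0 mynginx 8080:80
</code></pre>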
<pre><code>student@minikube:~$ curl localhost:<span class="hljs-number">8080</span>
Handling connection <span class="hljs-keyword">for</span> <span class="hljs-number">8080</span>
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
&lt;title&gt;Welcome to nginx!&lt;/title&gt;
</code></pre><p>To stop port forwarding, we bring the process back to the foreground and stop it using CTRL+C:</p>
<pre><code>student@minikube:~$ fg
minikube kubectl -- port-forward mynginx <span class="hljs-number">8080</span>:<span class="hljs-number">80</span>
^C
student@minikube:~$
</code></pre><h2 id="heading-configuring-securitycontext">Configuring securityContext</h2>
<p>A securityContext defines privileges and access control settings <strong><em>for a Pod and/or container</em></strong>, and includes:</p>
<ul>
<li>Discretionary Access Control</li>
<li>SELinux or AppArmor</li>
<li>Running as privileged or unprivileged user</li>
<li>AllowPrivilegeEscalation to control if a process can gain more privileges than its parent process</li>
</ul>
<p><code>kubectl explain</code> can give you a complete overview.</p>
<p>Let's work with examples:</p>
<pre><code>student@minikube:~$ kubectl explain pod.spec.securityContext
...
student@minikube:~$ kubectl explain pod.spec.containers.securityContext
...
student@minikube:~/ckad$ cat securitycontextdemo2.yaml 
<span class="hljs-attr">apiVersion</span>: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: security-context-demo
<span class="hljs-attr">spec</span>:
  securityContext:
    runAsUser: <span class="hljs-number">1000</span>
    <span class="hljs-attr">runAsGroup</span>: <span class="hljs-number">3000</span>
    <span class="hljs-attr">fsGroup</span>: <span class="hljs-number">2000</span>
  <span class="hljs-attr">volumes</span>:
  - name: sec-ctx-vol
    <span class="hljs-attr">emptyDir</span>: {}
  <span class="hljs-attr">containers</span>:
  - name: sec-ctx-demo
    <span class="hljs-attr">image</span>: busybox
    <span class="hljs-attr">command</span>: [ <span class="hljs-string">"sh"</span>, <span class="hljs-string">"-c"</span>, <span class="hljs-string">"sleep 1h"</span> ]
    <span class="hljs-attr">volumeMounts</span>:
    - name: sec-ctx-vol
      <span class="hljs-attr">mountPath</span>: <span class="hljs-regexp">/data/</span>demo
    <span class="hljs-attr">securityContext</span>:
      allowPrivilegeEscalation: <span class="hljs-literal">false</span>

student@minikube:~/ckad$ kubectl create -f securitycontextdemo2.yaml 
pod/security-context-demo created

student@minikube:~/ckad$ kubectl get pods security-context-demo -o yaml
...
spec:
  containers:
  - command:
    - sh
    - -c
    - sleep <span class="hljs-number">1</span>h
    <span class="hljs-attr">image</span>: busybox
    <span class="hljs-attr">imagePullPolicy</span>: Always
    <span class="hljs-attr">name</span>: sec-ctx-demo
    <span class="hljs-attr">resources</span>: {}
    <span class="hljs-attr">securityContext</span>:
      allowPrivilegeEscalation: <span class="hljs-literal">false</span>
...

student@minikube:~/ckad$ kubectl exec -it security-context-demo -- sh
/ $ cd data/demo
/data/demo $ echo <span class="hljs-string">"Hello"</span> &gt; test
/data/demo $ ls -l
total <span class="hljs-number">4</span>
-rw-r--r--    <span class="hljs-number">1</span> <span class="hljs-number">1000</span>     <span class="hljs-number">2000</span>             <span class="hljs-number">6</span> Mar <span class="hljs-number">24</span> <span class="hljs-number">17</span>:<span class="hljs-number">05</span> test
/data/demo $ id
uid=<span class="hljs-number">1000</span> gid=<span class="hljs-number">3000</span> groups=<span class="hljs-number">2000</span>
</code></pre><p>When we create a new file in the Pod's primary container, we see that the owner of the file is id <code>1000</code> (<code>runAsUser</code>) and the group owner is <code>2000</code> (<code>fsGroup</code>), as specified in the Yaml securityContext. The <code>id</code> command reveals our <code>runAsUser</code> ID, our primary group id <code>3000</code> and our secondary group id <code>2000</code>.</p>
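<p>The container-level <code>securityContext</code> supports more restrictions than shown above. A hedged sketch of a more locked-down container (the field values are illustrative, not prescriptive):</p>
<pre><code>    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true           # refuse to start if the image would run as root
      readOnlyRootFilesystem: true # container may only write to mounted volumes
      capabilities:
        drop: ["ALL"]              # drop all Linux capabilities
</code></pre>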
<h2 id="heading-managing-jobs">Managing Jobs</h2>
<p>Pods are the essence of Kubernetes: when a Pod goes down, Kubernetes starts a new one, so Pods are normally created to run forever. There are cases, however, where you want a Pod to execute a one-shot task, like a backup job, a calculation or batch processing. This is where you can use Jobs: the Pod will run until it finishes its task and then stops.</p>
<p>We can set <code>ttlSecondsAfterFinished</code> to clean up completed Jobs automatically so that we don't keep both the Job and the Pod (created by the Job) around forever.</p>
<p>There are 3 different Job types specified by the <code>completions</code> and <code>parallelism</code> parameters:</p>
<ul>
<li>Non-parallel Jobs: 1 Job - 1 Pod<ul>
<li><code>completions=1</code></li>
<li><code>parallelism=1</code>  </li>
</ul>
</li>
<li>Parallel Jobs with a fixed completion count: the Job is completed after successfully running as many times as specified by <code>jobs.spec.completions</code>. The number of parallel or concurrent Pods started by the Job is specified by <code>jobs.spec.parallelism</code>.<ul>
<li><code>completions=X</code> </li>
<li><code>parallelism=Y</code>  </li>
</ul>
</li>
<li>Parallel Jobs with a work queue: multiple Pods are started; once one completes successfully, the Job is done.<ul>
<li><code>completions=1</code></li>
<li><code>parallelism=X</code></li>
</ul>
</li>
</ul>
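<p>For completeness, a hedged sketch of the third type, a work-queue Job: <code>parallelism</code> is set while <code>completions</code> is left unset, so the Job completes once a Pod succeeds and all Pods have terminated (the worker name and command are assumptions):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: queue-job
spec:
  parallelism: 3   # run 3 worker Pods concurrently
  # completions omitted: the Job is done once a Pod succeeds
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "sleep 5"]
      restartPolicy: Never
</code></pre>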
<p>Here's an example:</p>
<pre><code>student@minikube:~$ kubectl create job onejob --image=busybox --dry-run=client -o yaml -- date &gt; onejob.yml
student@minikube:~$ cat onejob.yml 
<span class="hljs-attr">apiVersion</span>: batch/v1
<span class="hljs-attr">kind</span>: Job
<span class="hljs-attr">metadata</span>:
  creationTimestamp: <span class="hljs-literal">null</span>
  <span class="hljs-attr">name</span>: onejob
<span class="hljs-attr">spec</span>:
  template:
    metadata:
      creationTimestamp: <span class="hljs-literal">null</span>
    <span class="hljs-attr">spec</span>:
      containers:
      - command:
        - date
        <span class="hljs-attr">image</span>: busybox
        <span class="hljs-attr">name</span>: onejob
        <span class="hljs-attr">resources</span>: {}
      <span class="hljs-attr">restartPolicy</span>: Never
<span class="hljs-attr">status</span>: {}
</code></pre><p>Notice that <code>kind</code> is <code>Job</code> and that <code>restartPolicy</code> is set to <code>Never</code>. In this example the container just executes the <code>date</code> command and then is done.</p>
<pre><code>student@minikube:~$ kubectl create -f onejob.yml 
job.batch/onejob created

student@minikube:~$ kubectl get jobs
NAME     COMPLETIONS   DURATION   AGE
onejob   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>           <span class="hljs-number">4</span>s         <span class="hljs-number">4</span>s

student@minikube:~$ kubectl get jobs,pods
NAME               COMPLETIONS   DURATION   AGE
job.batch/onejob   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>           <span class="hljs-number">7</span>s         <span class="hljs-number">7</span>s

NAME               READY   STATUS              RESTARTS   AGE
pod/onejob-zjgd9   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">7</span>s
</code></pre><p>Once the Job is done, the Job <code>COMPLETIONS</code> and Pod <code>STATUS</code> is updated:</p>
<pre><code>student@minikube:~$ kubectl get jobs,pods
NAME               COMPLETIONS   DURATION   AGE
job.batch/onejob   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>           <span class="hljs-number">10</span>s        <span class="hljs-number">41</span>s

NAME               READY   STATUS      RESTARTS   AGE
pod/onejob-zjgd9   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Completed   <span class="hljs-number">0</span>          <span class="hljs-number">41</span>s

student@minikube:~$ kubectl <span class="hljs-keyword">delete</span> -f onejob.yml 
job.batch <span class="hljs-string">"onejob"</span> deleted
</code></pre><p>Now let's create a parallel Job:</p>
<pre><code>student@minikube:~$ kubectl create job paralleljob --image=busybox --dry-run=client -o yaml -- sleep <span class="hljs-number">5</span> &gt; paralleljob.yml

student@minikube:~$ cat paralleljob.yml 
<span class="hljs-attr">apiVersion</span>: batch/v1
<span class="hljs-attr">kind</span>: Job
<span class="hljs-attr">metadata</span>:
  creationTimestamp: <span class="hljs-literal">null</span>
  <span class="hljs-attr">name</span>: paralleljob
<span class="hljs-attr">spec</span>:
  completions: <span class="hljs-number">6</span>
  <span class="hljs-attr">parallelism</span>: <span class="hljs-number">3</span>
  <span class="hljs-attr">ttlSecondsAfterFinished</span>: <span class="hljs-number">60</span>
  <span class="hljs-attr">template</span>:
    metadata:
      creationTimestamp: <span class="hljs-literal">null</span>
    <span class="hljs-attr">spec</span>:
      containers:
      - command:
        - sleep
        - <span class="hljs-string">"5"</span>
        <span class="hljs-attr">image</span>: busybox
        <span class="hljs-attr">name</span>: paralleljob
        <span class="hljs-attr">resources</span>: {}
      <span class="hljs-attr">restartPolicy</span>: Never
<span class="hljs-attr">status</span>: {}
</code></pre><p>After generating the YAML file we've added the <code>completions</code>, <code>parallelism</code> and <code>ttlSecondsAfterFinished</code> values.  </p>
<p>Until the Job has completed 6 times, the Job will make sure that 3 Pods are running at all times; when one Pod finishes, a new Pod is started. At completion of the Job, 6 Pods will have been created by the Job.
The Job and its Pods are automatically deleted 60 seconds after completion.</p>
<pre><code>student@minikube:~$ kubectl create -f paralleljob.yml 
job.batch/paralleljob created

student@minikube:~$ kubectl get jobs,pods
NAME                    COMPLETIONS   DURATION   AGE
job.batch/paralleljob   <span class="hljs-number">6</span>/<span class="hljs-number">6</span>           <span class="hljs-number">29</span>s        <span class="hljs-number">29</span>s

NAME                    READY   STATUS      RESTARTS   AGE
pod/paralleljob<span class="hljs-number">-6</span>s9l4   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Completed   <span class="hljs-number">0</span>          <span class="hljs-number">11</span>s
pod/paralleljob<span class="hljs-number">-7</span>swk6   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Completed   <span class="hljs-number">0</span>          <span class="hljs-number">19</span>s
pod/paralleljob<span class="hljs-number">-8</span>pzgp   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Completed   <span class="hljs-number">0</span>          <span class="hljs-number">14</span>s
pod/paralleljob-ldmtf   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Completed   <span class="hljs-number">0</span>          <span class="hljs-number">29</span>s
pod/paralleljob-tsk4q   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Completed   <span class="hljs-number">0</span>          <span class="hljs-number">29</span>s
pod/paralleljob-x6p8k   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Completed   <span class="hljs-number">0</span>          <span class="hljs-number">29</span>s

student@minikube:~$ kubectl get jobs,pods
No resources found <span class="hljs-keyword">in</span> <span class="hljs-keyword">default</span> namespace.
</code></pre><h2 id="heading-managing-cronjobs">Managing Cronjobs</h2>
<p>While Jobs are used to run a task a specific number of times, CronJobs are used for tasks that are recurrent or that need to run on a regular basis. In that sense they are very similar to Linux cronjobs.<br />When running a CronJob, a Job is scheduled and in turn the Job will start a Pod.</p>
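<p>The <code>--schedule</code> option uses the standard cron format of five space-separated fields; a quick reference (the example schedules are illustrative):</p>
<pre><code># ┌ minute (0-59) ┌ hour (0-23) ┌ day of month ┌ month ┌ day of week
#   *              *             *              *       *
"*/1 * * * *"   # every minute
"0 2 * * *"     # every day at 02:00
"30 9 * * 1"    # every Monday at 09:30
</code></pre>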
<p>Let's go over this in detail:</p>
<pre><code>student@minikube:~$ kubectl create cronjob -h | less
  # Create a cron job <span class="hljs-keyword">with</span> a command
  kubectl create cronjob my-job --image=busybox --schedule=<span class="hljs-string">"*/1 * * * *"</span> -- date
...
student@minikube:~$ kubectl create cronjob runme --image=busybox --schedule=<span class="hljs-string">"*/1 * * * *"</span> -- echo Hello there!
cronjob.batch/runme created

student@minikube:~$ kubectl get cronjobs,jobs,pods
NAME                  SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/runme   */<span class="hljs-number">1</span> * * * *   False     <span class="hljs-number">0</span>        &lt;none&gt;          <span class="hljs-number">15</span>s

student@minikube:~$ kubectl get cronjobs,jobs,pods
NAME                  SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/runme   */<span class="hljs-number">1</span> * * * *   False     <span class="hljs-number">1</span>        <span class="hljs-number">8</span>s              <span class="hljs-number">28</span>s

NAME                       COMPLETIONS   DURATION   AGE
job.batch/runme<span class="hljs-number">-27480120</span>   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>           <span class="hljs-number">8</span>s         <span class="hljs-number">8</span>s

NAME                       READY   STATUS              RESTARTS   AGE
pod/runme<span class="hljs-number">-27480120</span>-xv5l4   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">8</span>s

student@minikube:~$ kubectl get cronjobs,jobs,pods
NAME                  SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/runme   */<span class="hljs-number">1</span> * * * *   False     <span class="hljs-number">1</span>        <span class="hljs-number">2</span>s              <span class="hljs-number">82</span>s

NAME                       COMPLETIONS   DURATION   AGE
job.batch/runme<span class="hljs-number">-27480120</span>   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>           <span class="hljs-number">13</span>s        <span class="hljs-number">62</span>s
job.batch/runme<span class="hljs-number">-27480121</span>   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>           <span class="hljs-number">2</span>s         <span class="hljs-number">2</span>s

NAME                       READY   STATUS              RESTARTS   AGE
pod/runme<span class="hljs-number">-27480120</span>-xv5l4   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     Completed           <span class="hljs-number">0</span>          <span class="hljs-number">62</span>s
pod/runme<span class="hljs-number">-27480121</span>-nljcn   <span class="hljs-number">0</span>/<span class="hljs-number">1</span>     ContainerCreating   <span class="hljs-number">0</span>          <span class="hljs-number">2</span>s
</code></pre><p>As you can see above, the <code>cronjob.batch/runme</code> cronjob will create a new Job at the top of each minute and each Job will create a new Pod which will run until completion of the task.</p>
<pre><code>student@minikube:~$ kubectl <span class="hljs-keyword">delete</span> cronjob runme
cronjob.batch <span class="hljs-string">"runme"</span> deleted
student@minikube:~$ kubectl get cronjobs,jobs,pods
No resources found <span class="hljs-keyword">in</span> <span class="hljs-keyword">default</span> namespace.
</code></pre><h2 id="heading-resource-requests-and-limits">Resource Requests and Limits</h2>
<p>By default, a Pod will consume as much CPU and memory as necessary.<br />We can however use <code>pod.spec.containers.resources</code> to limit usage of those on a <em>per container</em> basis.
CPU and memory limitations are the most common, but there are others.<br />Each container can have its CPU and memory usage restricted by:</p>
<ul>
<li>Request: <code>kube-scheduler</code> will look for a worker-node that has this amount of resources available and schedule the Pod to run there. It's allowed for a container to use more resources than defined here. If no suitable worker-node is found, the Pod status remains in <code>Pending</code>.</li>
<li>Limit: This is a hard limit. If configured, the container runtime prevents the container from using more than the configured resource limit. For the memory resource type, this could result in an out of memory error if the container attempts to consume more memory than allowed.</li>
</ul>
<blockquote>
<p>A Pod's resource request/limit is the sum of the resource requests/limits of each resource type for each container in the Pod.</p>
</blockquote>
<p>CPU limits are expressed in millicore or millicpu: 1/1000 of a CPU core.<br />So <code>500m</code> is 0.5 CPU and <code>2000m</code> is 2 CPU.</p>
<p>Example:</p>
<pre><code>---
apiVersion: v1
<span class="hljs-attr">kind</span>: Pod
<span class="hljs-attr">metadata</span>:
  name: frontend
<span class="hljs-attr">spec</span>:
  containers:
  - name: db
    <span class="hljs-attr">image</span>: mariadb
    <span class="hljs-attr">env</span>:
    - name: MYSQL_ROOT_PASSWORD
      <span class="hljs-attr">value</span>: <span class="hljs-string">"password"</span>
    <span class="hljs-attr">resources</span>:
      requests:
        memory: <span class="hljs-string">"64Mi"</span>
        <span class="hljs-attr">cpu</span>: <span class="hljs-string">"250m"</span>
      <span class="hljs-attr">limits</span>:
        memory: <span class="hljs-string">"128Mi"</span>
        <span class="hljs-attr">cpu</span>: <span class="hljs-string">"500m"</span>
  - name: wordpress
    <span class="hljs-attr">image</span>: wordpress
    <span class="hljs-attr">resources</span>:
      requests:
        memory: <span class="hljs-string">"64Mi"</span>
        <span class="hljs-attr">cpu</span>: <span class="hljs-string">"250m"</span>
      <span class="hljs-attr">limits</span>:
        memory: <span class="hljs-string">"128Mi"</span>
        <span class="hljs-attr">cpu</span>: <span class="hljs-string">"500m"</span>
</code></pre>]]></content:encoded></item><item><title><![CDATA[Kubernetes 101: Understanding Kubernetes]]></title><description><![CDATA[{{}}
What is Kubernetes?
https://kubernetes.io/
Kubernetes is an open-source ecosystem for automating deployment, scaling and managing of containerized applications. It provides a core solution with many third-party add-ons focusing on different area...]]></description><link>https://blog.joerismissaert.dev/kubernetes-101-understanding-kubernetes</link><guid isPermaLink="true">https://blog.joerismissaert.dev/kubernetes-101-understanding-kubernetes</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Wed, 08 Sep 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[
<h1 id="heading-what-is-kubernetes">What is Kubernetes?</h1>
<p><a target="_blank" href="https://kubernetes.io/">https://kubernetes.io/</a></p>
<p>Kubernetes is an open-source ecosystem for automating deployment, scaling and managing of containerized applications. It provides a core solution with many third-party add-ons focusing on different areas:</p>
<ul>
<li>Networking</li>
<li>Ingress</li>
<li>Monitoring</li>
<li>Packaging</li>
<li>...</li>
</ul>
<p>Kubernetes has its origins at Google where it was known as <a target="_blank" href="https://kubernetes.io/blog/2015/04/borg-predecessor-to-kubernetes/">Borg</a>. It's currently owned by the <a target="_blank" href="https://www.cncf.io/">Cloud Native Computing Foundation</a>, an open-source foundation within the <a target="_blank" href="https://linuxfoundation.org/">Linux Foundation</a>. </p>
<p>Vanilla Kubernetes is Kubernetes directly created from the source code hosted by the CNCF. Different Kubernetes distributions exist that add specific functionality and a selection of solutions from the ecosystem:</p>
<ul>
<li>Google Anthos</li>
<li>Red Hat OpenShift</li>
<li>Suse Rancher</li>
<li>Canonical Kubernetes</li>
<li>...</li>
</ul>
<p>A new release of Kubernetes is published every 3 months. When a new release is published, new versions of the API (more on that later) may become available and old features may get deprecated. If a feature is deprecated, it's important to adopt the new method: because of the 3-month <a target="_blank" href="https://kubernetes.io/releases/release/#the-release-cycle">release cycle</a>, the feature will go away within the next 2 releases.</p>
<h1 id="heading-kubernetes-architecture">Kubernetes Architecture</h1>
<p>Kubernetes has the following main components:</p>
<ul>
<li>Control Plane and worker nodes</li>
<li>Operators (aka "control loop", "watch-loops" or "controller")</li>
<li>Services</li>
<li>Pods of containers</li>
<li>Namespaces and quotas</li>
<li>Network and policies</li>
<li>Storage.</li>
</ul>
<p>A Kubernetes cluster is made of a Control Plane node and a set of worker nodes. The cluster is driven via API calls to operators.</p>
<h2 id="heading-the-control-plane-node">The Control Plane Node</h2>
<p>The various components responsible for ensuring that the current state of the cluster matches the desired state are called the Control Plane.</p>
<h3 id="heading-kube-apiserver">kube-apiserver</h3>
<p>The kube-apiserver is central to the operation of the Kubernetes cluster and exposes the Kubernetes API. You can communicate with the API using a local client called kubectl, or you can write your own client and use curl commands. All actions are accepted and validated by this component, and it is the only component that connects to the etcd database.</p>
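<p>For example, once <code>kubectl proxy</code> is running it handles authentication locally, and the API can be queried with plain curl. This is a sketch; the port and resource path below assume a default cluster:</p>
<pre><code># Expose the API server on localhost (runs in the foreground; &amp; backgrounds it)
kubectl proxy --port=8001 &amp;

# List the Pods in the default namespace through the REST API
curl http://localhost:8001/api/v1/namespaces/default/pods
</code></pre>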
<h3 id="heading-kube-scheduler">kube-scheduler</h3>
<p>The kube-scheduler determines which node will host a Pod of containers. It evaluates the resources available on each node and schedules the Pod onto a node that can satisfy the Pod's requirements.</p>
<h3 id="heading-etcd-database">etcd database</h3>
<p>The state of the cluster, networking, and other persistent information is kept in an etcd database. <a target="_blank" href="https://etcd.io/">etcd</a> is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. This database is only accessible by kube-apiserver.</p>
<h3 id="heading-kube-controller-manager">kube-controller-manager</h3>
<p>Orchestration is managed through a series of watch-loops or control loops, also called controllers or operators. A control loop is a non-terminating loop that regulates the state of a system. Each controller interrogates the kube-apiserver for a particular object state, then modifies the object until the declared state matches the current state. These controllers are compiled into the kube-controller-manager, but others can be added using custom resource definitions.  </p>
<p>The kube-controller-manager is a core control loop daemon which interacts with the kube-apiserver to determine the state of the cluster. If the state does not match, the manager will contact the necessary controller to match the desired state.  </p>
<h2 id="heading-worker-nodes">Worker Nodes</h2>
<p>A Worker Node consists of components that maintain running pods.</p>
<h3 id="heading-kubelet">kubelet</h3>
<p>The kubelet systemd process interacts with the underlying container engine. It accepts the API calls for Pod specifications and it will configure the local node until the specification has been met by passing requests to the local container engine.</p>
<h3 id="heading-kube-proxy">kube-proxy</h3>
<p>The kube-proxy creates and manages networking rules to expose the container on the network to other containers or the outside world.</p>
<h3 id="heading-container-runtime">Container runtime</h3>
<p>The container runtime or container engine is responsible for running containers.<br />Each Worker Node could run a different engine if needed: <a target="_blank" href="https://www.docker.com/">Docker</a>, <a target="_blank" href="https://containerd.io/">containerd</a>, <a target="_blank" href="https://cri-o.io/">CRI-O</a>, <a target="_blank" href="https://podman.io/">podman</a>, ...</p>
<h2 id="heading-the-most-essential-api-resources">The Most Essential API Resources</h2>
<h3 id="heading-deployment">Deployment</h3>
<p>The default operator for containers is a Deployment. A Deployment does not directly work with pods; instead it manages ReplicaSets. The ReplicaSet is an operator which will create or terminate pods according to a podSpec. The podSpec is sent to the kubelet, which then interacts with the container engine to download and make the required resources available, then spawn or terminate containers until the status matches the spec.</p>
<h3 id="heading-pod">Pod</h3>
<p>Containers are not managed individually; instead, they are part of a larger object called a Pod. A Pod consists of one or more containers which share an IP address, access to storage, and namespaces. Typically, one container in a Pod runs an application, while other containers support the primary application.</p>
<h3 id="heading-service">Service</h3>
<p>The service operator requests existing IP addresses and information from the endpoint operator, and will manage the network connectivity based on labels. A service is used to communicate between pods, namespaces, and outside the cluster. </p>
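<p>As a minimal sketch of how these three resources fit together (the names and label values below are illustrative), a Deployment manages the Pods and a Service exposes them:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
</code></pre><p>Applying this manifest with <code>kubectl apply -f</code> creates the Deployment (and, through it, a ReplicaSet and two Pods) plus the Service in one go.</p>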
<h1 id="heading-creating-a-lab-environment">Creating a Lab Environment</h1>
<p>The Kubernetes 101 series of articles that I will be publishing over time are meant to provide a basic introduction to Kubernetes. As such, we'll not be using a full-blown Kubernetes cluster but we'll be relying on <a target="_blank" href="https://minikube.sigs.k8s.io/docs/">Minikube</a> instead.</p>
<p>With Minikube we can quickly and easily set up a local Kubernetes cluster and focus on learning the basics. In a later series, we'll deep-dive into a full-blown Kubernetes cluster with multiple worker nodes.</p>
<p>We will be installing Minikube in an Ubuntu virtual machine with 4 GiB of RAM and 2 vCPUs, and we'll be using Docker as the container engine, so make sure you <a target="_blank" href="https://docs.docker.com/engine/install/ubuntu/">install Docker</a> as well. 
Once your virtual machine is ready, head over to the <a target="_blank" href="https://minikube.sigs.k8s.io/docs/start/">Minikube installation instructions</a>. 
Make sure you start a cluster, install kubectl and create an alias for it to make life easier.</p>
<h2 id="heading-verifying-minikube-is-working">Verifying Minikube is working</h2>
<p>The minikube command has many subcommands; here's an overview of the commonly used ones:</p>
<ul>
<li><code>minikube status</code>: Gets the status of a local Kubernetes cluster.</li>
<li><code>minikube start</code>: Starts a local Kubernetes cluster.</li>
<li><code>minikube stop</code>: Stops a running local Kubernetes cluster.</li>
<li><code>minikube ssh</code>: Log into the minikube environment (for debugging)</li>
<li><code>minikube dashboard</code>: Opens the Kubernetes dashboard in the local browser.</li>
<li><code>minikube delete</code>: Deletes a local Kubernetes cluster.</li>
<li><code>minikube ip</code>: Retrieves the IP address of the specified node.</li>
<li><code>minikube version</code>: Print the version of minikube.</li>
</ul>
<p>You can see all available options by using the <code>minikube --help</code> command.</p>
<p>These will come in handy as well:</p>
<ul>
<li><code>kubectl get all</code>: Display all resources.</li>
<li><code>docker ps</code>: List containers.</li>
</ul>
<h2 id="heading-bash-completion">Bash Completion</h2>
<p>Bash completion for <code>kubectl</code> will come in handy.
The <code>kubectl completion -h</code> command has instructions for different shells like <code>zsh</code> and <code>fish</code>.
Below are the instructions for <code>bash</code>:</p>
<pre><code>apt install bash-completion -y
echo <span class="hljs-string">"source &lt;(kubectl completion bash)"</span> &gt;&gt; ~/.bashrc
source ~/.bashrc
</code></pre><h2 id="heading-running-an-application">Running an application</h2>
<p>Let's go over the steps of starting our cluster and launching a simple Nginx Pod:</p>
<pre><code># We start our Minikube cluster
student@minikube:~$ minikube start
...

# Use minikube's bundled kubectl (downloaded on first use)
student@minikube:~$ minikube kubectl -- get pods -A
...

# Verify the status
student@minikube:~$ minikube status
minikube
<span class="hljs-attr">type</span>: Control Plane
<span class="hljs-attr">host</span>: Running
<span class="hljs-attr">kubelet</span>: Running
<span class="hljs-attr">apiserver</span>: Running
<span class="hljs-attr">kubeconfig</span>: Configured

# List all Docker containers - See how Minikube is running a Kubernetes cluster inside a single Docker container
student@minikube:~$ docker ps
...

# Have a look at the different Kubernetes components which are running <span class="hljs-keyword">in</span> Pods inside the kube-system namespace.
student@minikube:~$ kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS      AGE
coredns<span class="hljs-number">-64897985</span>d-sj5lw            <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>             <span class="hljs-number">11</span>m
etcd-minikube                      <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>             <span class="hljs-number">11</span>m
kube-apiserver-minikube            <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>             <span class="hljs-number">11</span>m
kube-controller-manager-minikube   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>             <span class="hljs-number">11</span>m
kube-proxy-mgcrk                   <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>             <span class="hljs-number">11</span>m
kube-scheduler-minikube            <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>             <span class="hljs-number">11</span>m
storage-provisioner                <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">1</span> (<span class="hljs-number">10</span>m ago)   <span class="hljs-number">11</span>m

# Let's run an Nginx Pod
student@minikube:~$ kubectl run nginx --image=nginx
pod/nginx created

student@minikube:~$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          24s

student@minikube:~$ kubectl get all
NAME        READY   STATUS    RESTARTS   AGE
pod/nginx   1/1     Running   0          37s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    &lt;none&gt;        443/TCP   13m
</code></pre><p>Play around with the different minikube commands and, once done, head over to the next article.</p>
]]></content:encoded></item><item><title><![CDATA[Podman 102: Building a WordPress multi-service container with Nginx, PHP-FPM and MariaDB]]></title><description><![CDATA[{{< figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo">}}
{{< figure class="center" src="/img/podman.png" alt="Podman logo">}}
Typically an application container runs a single service, but instead of breaking apart existing multi-s...]]></description><link>https://blog.joerismissaert.dev/podman-102-building-a-wordpress-multi-service-container-with-nginx-php-fpm-and-mariadb</link><guid isPermaLink="true">https://blog.joerismissaert.dev/podman-102-building-a-wordpress-multi-service-container-with-nginx-php-fpm-and-mariadb</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Tue, 13 Apr 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo"&gt;}}
{{&lt; figure class="center" src="/img/podman.png" alt="Podman logo"&gt;}}</p>
<p>Typically an application container runs a single service, but instead of breaking apart existing multi-service applications into microservices (and connecting them with e.g. Kubernetes or OpenShift), we can use Podman (in contrast to Docker) to run multi-service containers using Systemd. Basically we would achieve something similar to LXD system containers, but with Podman.</p>
<p>Podman understands what Systemd needs to run in a container. When Podman starts a container that runs init or systemd as its initial command, it automatically sets up the tmpfs and cgroups so that Systemd can start successfully.</p>
<blockquote>
<p>Systemd attempts to write to the cgroup file system. By default, containers cannot write to the cgroup file system when SELinux is enabled. The <code>container_manage_cgroup</code> boolean must be enabled for this to be allowed on a SELinux enforced system: <code>setsebool -P container_manage_cgroup true</code></p>
</blockquote>
<p>In this post I'll create a rather basic multi-service container based on the Fedora container image which will be running Nginx, MariaDB and PHP-FPM to serve up a WordPress site with persistent storage both for the document root and the database.</p>
<p>I've pushed the final version of the image I've built below to <a target="_blank" href="https://quay.io/repository/smissaertj/fedora_wordpress?tab=info">my Quay.io repository</a>.</p>
<h2 id="heading-step-1-test-nginx">Step 1 - Test Nginx</h2>
<pre><code>[student@server1 ~]$ cat Dockerfile
FROM fedora
MAINTAINER Joeri Smissaert

RUN dnf -y upgrade; dnf -y install nginx; dnf clean all; systemctl enable nginx
RUN mkdir -p /<span class="hljs-keyword">var</span>/www/wordpress.server1.local/public
RUN mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup
ADD https:<span class="hljs-comment">//gist.githubusercontent.com/smissaertj/9d02fd974b64fd1a30fd905bc730a098/raw/dee50eb0bea7b93acb6ad0ddb6894cefb74c9d45/nginx.conf /etc/nginx/nginx.conf</span>

EXPOSE <span class="hljs-number">80</span>

CMD [<span class="hljs-string">"/sbin/init"</span>]
</code></pre><p>Let's build the image:</p>
<pre><code>podman build -t fedora_wordpress .
</code></pre><p>...start the container:</p>
<pre><code>[student@server1 ~]$ podman run -d --name test -p <span class="hljs-number">8080</span>:<span class="hljs-number">80</span> -v /home/student/html:<span class="hljs-regexp">/var/</span>www/wordpress.server1.local/public:Z fedora_wordpress
...
</code></pre><p>...create a test file in the bind mounted document root and test using cURL:</p>
<pre><code>[student@server1 ~]$ mkdir html
[student@server1 ~]$ echo <span class="hljs-string">"JOERI"</span> &gt; html/index.html
[student@server1 ~]$ curl localhost:<span class="hljs-number">8080</span>
JOERI
[student@server1 ~]$ echo <span class="hljs-string">"TEST 123"</span> &gt; html/index.html
[student@server1 ~]$ curl localhost:<span class="hljs-number">8080</span>
TEST <span class="hljs-number">123</span>
</code></pre><p>So far so good :)</p>
<h2 id="heading-step-2-test-php-fpm">Step 2 - Test PHP-FPM</h2>
<p>In this step we only install and enable PHP-FPM. If the test fails, then I need to revise my Nginx and/or PHP-FPM pool configuration.
My Nginx configuration file is custom, while I left the default PHP-FPM configuration file in place.</p>
<pre><code>[student@server1 ~]$ cat Dockerfile
FROM fedora
MAINTAINER Joeri Smissaert

RUN dnf -y upgrade; dnf -y install nginx php-fpm php-mysqlnd php-pdo php-json; dnf clean all; systemctl enable nginx; systemctl enable php-fpm
RUN mkdir -p /<span class="hljs-keyword">var</span>/www/wordpress.server1.local/public
RUN mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup
ADD https:<span class="hljs-comment">//gist.githubusercontent.com/smissaertj/9d02fd974b64fd1a30fd905bc730a098/raw/dee50eb0bea7b93acb6ad0ddb6894cefb74c9d45/nginx.conf /etc/nginx/nginx.conf</span>

EXPOSE <span class="hljs-number">80</span>

CMD [<span class="hljs-string">"/sbin/init"</span>]
</code></pre><p>Adjust the original Dockerfile with the modifications above, then rebuild the image and run the container:</p>
<pre><code>[student@server1 ~]$ podman build -t fedora_wordpress .
...
[student@server1 ~]$ podman run -d --name test -p <span class="hljs-number">8080</span>:<span class="hljs-number">80</span> -v /home/student/html:<span class="hljs-regexp">/var/</span>www/wordpress.server1.local/public:Z fedora_wordpress
...
</code></pre><p>Remove the <code>html/index.html</code> file and create an <code>html/index.php</code> file with the following content:</p>
<pre><code>[student@server1 ~]$ cat html/index.php
&lt;html&gt;
 <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">head</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">title</span>&gt;</span>PHP Test<span class="hljs-tag">&lt;/<span class="hljs-name">title</span>&gt;</span>
 <span class="hljs-tag">&lt;/<span class="hljs-name">head</span>&gt;</span></span>
 <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">body</span>&gt;</span>
 <span class="hljs-tag">&lt;<span class="hljs-name">?php</span> <span class="hljs-attr">echo</span> '&lt;<span class="hljs-attr">p</span>&gt;</span>Hello World<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>'; ?&gt;
 <span class="hljs-tag">&lt;/<span class="hljs-name">body</span>&gt;</span></span>
&lt;/html&gt;
</code></pre><p>When we run a cURL test, we should <em>not</em> see the <code>&lt;?php</code> and <code>?&gt;</code> tags, indicating that our PHP code was successfully parsed by PHP-FPM:</p>
<pre><code>[student@server1 ~]$ curl localhost:<span class="hljs-number">8080</span>
&lt;html&gt;
 <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">head</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">title</span>&gt;</span>PHP Test<span class="hljs-tag">&lt;/<span class="hljs-name">title</span>&gt;</span>
 <span class="hljs-tag">&lt;/<span class="hljs-name">head</span>&gt;</span></span>
 <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">body</span>&gt;</span>
 <span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>Hello World<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
 <span class="hljs-tag">&lt;/<span class="hljs-name">body</span>&gt;</span></span>
&lt;/html&gt;
</code></pre><p>Yaay! :D</p>
<h2 id="heading-step-3-test-mariadb">Step 3 - Test MariaDB</h2>
<p>I'll create persistent storage for the database by means of a podman volume:</p>
<pre><code>[student@server1 ~]$ podman volume create wordpress_db
wordpress_db

[student@server1 ~]$ podman volume ls
DRIVER      VOLUME NAME
local       wordpress_db
</code></pre><p>Again, we adjust our Dockerfile and rebuild our custom image:</p>
<pre><code>FROM fedora
MAINTAINER Joeri Smissaert

RUN dnf -y upgrade; dnf -y install nginx php-fpm php-mysqlnd php-pdo php-json mariadb-server; dnf clean all; systemctl enable nginx; systemctl enable php-fpm; systemctl enable mariadb
RUN mkdir -p /<span class="hljs-keyword">var</span>/www/wordpress.server1.local/public
RUN mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup
ADD https:<span class="hljs-comment">//gist.githubusercontent.com/smissaertj/9d02fd974b64fd1a30fd905bc730a098/raw/dee50eb0bea7b93acb6ad0ddb6894cefb74c9d45/nginx.conf /etc/nginx/nginx.conf</span>

EXPOSE <span class="hljs-number">80</span>

CMD [<span class="hljs-string">"/sbin/init"</span>]
</code></pre><p>We run the container:</p>
<pre><code>[student@server1 ~]$ podman run -d --name test -v wordpress_db:<span class="hljs-regexp">/var/</span>lib/mysql:Z fedora_wordpress
...
</code></pre><p>Next, we create the database and configure the database user and password:</p>
<pre><code>[student@server1 ~]$ podman exec test mysql -e <span class="hljs-string">"create database wordpressdb;"</span>
[student@server1 ~]$ podman exec test mysql -e <span class="hljs-string">"grant all privileges on wordpressdb.* to 'wordpress'@'localhost' identified by 'password';"</span>
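
# (Optional check, not in the original post) Confirm the database was created:
[student@server1 ~]$ podman exec test mysql -e <span class="hljs-string">"show databases;"</span>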
</code></pre><p>We can now move on to the next step and install WordPress.</p>
<h2 id="heading-step-4-install-wordpress">Step 4 - Install WordPress</h2>
<p>In this step I'll move away from a bind mounted directory (used during Step 1 and Step 2) to a podman volume to persistently store the WordPress files.</p>
<pre><code>[student@server1 ~]$ podman volume create wordpress_files
wordpress_files
[student@server1 ~]$ podman volume ls
DRIVER      VOLUME NAME
local       wordpress_files
local       wordpress_db
[student@server1 ~]$ podman volume inspect wordpress_files
[
     {
          <span class="hljs-string">"Name"</span>: <span class="hljs-string">"wordpress_files"</span>,
          <span class="hljs-string">"Driver"</span>: <span class="hljs-string">"local"</span>,
          <span class="hljs-string">"Mountpoint"</span>: <span class="hljs-string">"/home/student/.local/share/containers/storage/volumes/wordpress_files/_data"</span>,
          <span class="hljs-string">"CreatedAt"</span>: <span class="hljs-string">"2021-04-14T00:25:06.906021271+04:00"</span>,
          <span class="hljs-string">"Labels"</span>: {

          },
          <span class="hljs-string">"Scope"</span>: <span class="hljs-string">"local"</span>,
          <span class="hljs-string">"Options"</span>: {

          },
          <span class="hljs-string">"UID"</span>: <span class="hljs-number">0</span>,
          <span class="hljs-string">"GID"</span>: <span class="hljs-number">0</span>,
          <span class="hljs-string">"Anonymous"</span>: <span class="hljs-literal">false</span>
     }
]
</code></pre><p>From the last command we can see where exactly the data will be stored:</p>
<pre><code>Mountpoint<span class="hljs-string">": "</span>/home/student/.local/share/containers/storage/volumes/wordpress_files/_data<span class="hljs-string">"</span>
</code></pre><p>I'll go ahead and extract WordPress inside that directory:</p>
<pre><code>[student@server1 ~]$ cd ~<span class="hljs-regexp">/.local/</span>share/containers/storage/volumes/wordpress_files/_data/
[student@server1 _data]$ wget https:<span class="hljs-comment">//wordpress.org/latest.tar.gz</span>
...
[student@server1 _data]$ tar xf latest.tar.gz --strip-components <span class="hljs-number">1</span>
[student@server1 _data]$ rm -rf latest.tar.gz
</code></pre><p>Let's start the container and test our installation:</p>
<pre><code>[student@server1 ~]$ podman run -d --name wordpress_test_container -p <span class="hljs-number">8080</span>:<span class="hljs-number">80</span> -v wordpress_db:<span class="hljs-regexp">/var/</span>lib/mysql:Z -v wordpress_files:<span class="hljs-regexp">/var/</span>www/wordpress.server1.local/public:Z fedora_wordpress
bc006160ad6b74b81fa3fc353bc0cbb1cec3b394365dc98259984a86c971cd9f
[student@server1 ~]$
</code></pre><p>Testing with cURL seems to go fine:</p>
<pre><code>[student@server1 ~]$ curl -I localhost:<span class="hljs-number">8080</span>
HTTP/<span class="hljs-number">1.1</span> <span class="hljs-number">302</span> Found
<span class="hljs-attr">Server</span>: nginx/<span class="hljs-number">1.18</span><span class="hljs-number">.0</span>
<span class="hljs-attr">Date</span>: Tue, <span class="hljs-number">13</span> Apr <span class="hljs-number">2021</span> <span class="hljs-number">20</span>:<span class="hljs-number">34</span>:<span class="hljs-number">46</span> GMT
Content-Type: text/html; charset=UTF<span class="hljs-number">-8</span>
<span class="hljs-attr">Connection</span>: keep-alive
X-Powered-By: PHP/<span class="hljs-number">7.4</span><span class="hljs-number">.16</span>
<span class="hljs-attr">Location</span>: http:<span class="hljs-comment">//localhost:8080/wp-admin/setup-config.php</span>
</code></pre><p>So at this point we have a working WordPress multi-service container :D</p>
<h3 id="heading-below-is-the-final-version-of-our-dockerfile">Below is the final version of our Dockerfile:</h3>
<pre><code>FROM fedora
MAINTAINER Joeri Smissaert

RUN dnf -y upgrade; dnf -y install nginx php-fpm php-mysqlnd php-pdo php-json mariadb-server; dnf clean all; systemctl enable nginx; systemctl enable php-fpm; systemctl enable mariadb
RUN mkdir -p /<span class="hljs-keyword">var</span>/www/wordpress.server1.local/public
RUN mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup
ADD https:<span class="hljs-comment">//gist.githubusercontent.com/smissaertj/9d02fd974b64fd1a30fd905bc730a098/raw/dee50eb0bea7b93acb6ad0ddb6894cefb74c9d45/nginx.conf /etc/nginx/nginx.conf</span>

EXPOSE <span class="hljs-number">80</span>

CMD [<span class="hljs-string">"/sbin/init"</span>]
</code></pre><p>I've pushed the final version of the image I've built to <a target="_blank" href="https://quay.io/repository/smissaertj/fedora_wordpress?tab=info">my Quay.io repository</a>.</p>
<p>As long as we keep the <code>wordpress_files</code> and <code>wordpress_db</code> volumes, I can destroy the running container and recreate it without any effect on the data:</p>
<pre><code>podman run -d --name container_name -p <span class="hljs-number">8080</span>:<span class="hljs-number">80</span> -v wordpress_files:<span class="hljs-regexp">/var/</span>www/wordpress.server1.local/public:Z -v wordpress_db:<span class="hljs-regexp">/var/</span>lib/mysql:Z quay.io/smissaertj/fedora_wordpress
</code></pre><p>Finally, I want my WordPress site to start at boot, even when I'm not logged in to my machine as the user who created the container:</p>
<pre><code>[student@server1 ~]$ podman ps
CONTAINER ID  IMAGE                              COMMAND     CREATED        STATUS            PORTS                 NAMES
d4fb97ba659d  localhost/fedora_wordpress:latest  /sbin/init  <span class="hljs-number">6</span> minutes ago  Up <span class="hljs-number">6</span> minutes ago  <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8080</span>-&gt;<span class="hljs-number">80</span>/tcp  wordpress_test

[student@server1 ~]$ mkdir -p ~<span class="hljs-regexp">/.config/</span>systemd/user
[student@server1 ~]$ cd .config/systemd/user/

[student@server1 user]$ podman generate systemd --name wordpress_test --files --<span class="hljs-keyword">new</span>
/home/student/.config/systemd/user/container-wordpress_test.service

[student@server1 user]$ su - root
<span class="hljs-attr">Password</span>:
[root@server1 ~]# loginctl enable-linger student
[root@server1 ~]# exit
logout

[student@server1 user]$ systemctl --user daemon-reload
[student@server1 user]$ systemctl --user enable container-wordpress_test.service
Created symlink /home/student/.config/systemd/user/multi-user.target.wants/container-wordpress_test.service → /home/student/.config/systemd/user/container-wordpress_test.service.
Created symlink /home/student/.config/systemd/user/<span class="hljs-keyword">default</span>.target.wants/container-wordpress_test.service → /home/student/.config/systemd/user/container-wordpress_test.service.

[student@server1 user]$ reboot
</code></pre><p>The <code>--new</code> option passed to the <code>podman generate systemd</code> command will make sure that the container is destroyed when the service stops and recreated when the service starts.</p>
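<p>After the reboot, we can verify that the user service came up and recreated the container. A quick sketch (the service name comes from the unit file generated above):</p>
<pre><code>[student@server1 ~]$ systemctl --user status container-wordpress_test.service
[student@server1 ~]$ podman ps
</code></pre>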
]]></content:encoded></item><item><title><![CDATA[Podman 101: Managing and Running Containers]]></title><description><![CDATA[{{< figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo">}}
{{< figure class="center" src="/img/podman.png" alt="Podman logo">}}
Understanding Containers
For a data center to operate efficiently, its machines and running components o...]]></description><link>https://blog.joerismissaert.dev/podman-101-managing-and-running-containers</link><guid isPermaLink="true">https://blog.joerismissaert.dev/podman-101-managing-and-running-containers</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Sun, 14 Mar 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo"&gt;}}
{{&lt; figure class="center" src="/img/podman.png" alt="Podman logo"&gt;}}</p>
<h1 id="heading-understanding-containers">Understanding Containers</h1>
<p>For a data center to operate efficiently, its machines and the components running on them must become as generic and automated as possible. We can partly achieve this by separating the applications from the operating system. This means not just packaging applications into things we install (like RPM or Deb packages), but also putting together sets of software into packages that can themselves run in ways that keep them independent and separate from the operating system. Virtual machines and containers are two ways of packaging sets of software and their dependencies so that they are separated from the host operating system they run on.</p>
<p>A virtual machine is a complete operating system that runs on top of another operating system; you can have many virtual machines on one physical computer. Everything an application or service needs to run can be stored inside that virtual machine or in attached storage. A virtual machine has its own kernel, file system, process table, network interfaces and other operating system features separate from the host, while sharing CPU and RAM with the host system. A VM sees an emulation of the computer hardware rather than the host hardware directly, hence the term <em>virtual</em> machine.</p>
<p>A container is similar to a virtual machine, except that it doesn't have its own kernel. It remains separate from the host system by using its own set of <em>namespaces</em>. Just like a VM, you can move it from one host to another and run it wherever it is convenient. Typically you build your own container images by taking a secure base image and adding your own layers of software on top of it to create a new image. To share your images, you <em>push</em> them to shared container registries from where others are allowed to <em>pull</em> them. </p>
<p>Containers run on top of a container engine, like Docker, CRI-O (which is the default on RHEL 8), Moby or rkt, and typically a container runs a single application or service (which can be connected in microservices using OpenShift or Kubernetes for example), although there are systemd images from which you can build multiservice containers. </p>
<p>Podman is a daemonless container engine that is compatible with Docker, for developing, managing, and running Open Container Initiative (OCI) containers and container images on Linux.</p>
<h2 id="heading-namespaces">Namespaces</h2>
<p>Linux support for namespaces is what allows containers to be contained. With namespaces, the Linux kernel can associate one or more processes with a set of resources. Normal processes, not run in a container, use the same host namespaces. By default, processes in a container can only see the container's namespaces and not those of the host. </p>
<ul>
<li><p><strong>Process table</strong> - A container has its own set of process IDs and, by default, can only see processes running inside the container. While <code>PID 1</code> on the host is the <code>init</code> (systemd) process, in a container <code>PID 1</code> is the first process run inside the container.   </p>
</li>
<li><p><strong>Network interfaces</strong> - By default, a container has a single network interface and is assigned an IP address when the container runs. A service run inside a container is not exposed outside of the host system, by default. You can have hundreds of webservers running on the same host without conflict, but you need to manage how those ports are exposed outside of the host.  </p>
</li>
<li><p><strong>Mount table</strong> - By default, a container can't see the host's root file system or any other mounted file system listed in the host's mount table. Files or directories needed from the host can be selectively <em>bind-mounted</em> inside the container.  </p>
</li>
<li><p><strong>User IDs</strong> - Containerized processes run as some UID within the host's namespace, with another set of UIDs nested within the container. This can, for example, let a process run as root within the container without having any special privileges on the host system.  </p>
</li>
<li><p><strong>UTS</strong> - The UNIX Time Sharing namespace allows a containerized process to have a different host and domain name from the host. </p>
</li>
<li><p><strong>Control Group</strong> - A containerized process runs within a selected <code>cgroup</code> and cannot see the other cgroups available on the host system. Similarly, it cannot see the identity of its own cgroup. Control groups are used for resource management.</p>
</li>
<li><p><strong>Interprocess Communications</strong> - A containerized process cannot see the IPC namespace of the host.</p>
</li>
</ul>
<blockquote>
<p>Although <strong>access to any host namespace is restricted by default</strong>, privileges to host namespaces can be opened selectively. In that way, you can do things like mount configuration files or data inside the container and map container ports to host ports to expose services outside of the host.</p>
</blockquote>
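<p>As a quick host-side illustration (this sketch only inspects the host; run the same commands inside a container and the link targets will differ for every namespace the container has unshared), each process's namespace memberships are exposed as symlinks under <code>/proc/PID/ns</code>:</p>

```shell
# Each namespace a process belongs to appears as a symlink under /proc/PID/ns.
# Two processes share a namespace exactly when the link targets (inode IDs) match.
readlink /proc/self/ns/pid
readlink /proc/self/ns/net
readlink /proc/self/ns/mnt
```

<p>Comparing these values for a host process and a containerized process is a simple way to verify which namespaces the container actually has to itself.</p>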
<h2 id="heading-container-registries">Container Registries</h2>
<p>Permanent storage for containers is done in what is referred to as a <em>container registry</em>. When you create a container image that you want to share, you can <em>push</em> that image to a public or private (which you maintain yourself) container registry. Someone who wants to use your container image will then <em>pull</em> it from the registry. </p>
<p>Large public container image registries are, for example, <a target="_blank" href="https://hub.docker.com/">Docker hub</a> and <a target="_blank" href="https://quay.io">Quay Registry</a>. </p>
<h2 id="heading-base-images-and-layers">Base Images and Layers</h2>
<p>Although you can create containers from scratch, most often a container is built by starting with a well-known base image and adding software to it. Linux distributions offer base images in different forms, like standard and minimal versions. But there are also base images you can build on that offer runtimes for PHP, Java and other development environments.</p>
<p>Red Hat offers freely available Universal Base Images (UBIs) for standard, minimal and a variety of runtime containers. You can find those by searching the <a target="_blank" href="https://catalog.redhat.com/software/containers/explore">Red Hat Container Catalog</a>.</p>
<p>You can add software to a base image by defining the build using <code>yum</code> commands to install software from software repositories into the new container. When you add software to an image, it creates a new layer that becomes part of the new image. You can reuse the same base image for all containers you build; only one copy of the base image is needed on the host. If you're running 10 different containers based on the same base image, you only need to pull and store the base image once. For each new image you build, you only add the data that differs from the base image.</p>
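<p>As a minimal sketch (a hypothetical build file, built with <code>podman build</code>), a build definition that layers software on top of the UBI base image could look like this, where each instruction after <code>FROM</code> adds a new layer:</p>

```dockerfile
# Containerfile - start from the UBI base image and add one layer of software
FROM registry.access.redhat.com/ubi8/ubi
RUN yum install -y procps && yum clean all
CMD ["bash"]
```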
<h1 id="heading-running-and-managing-containers-with-podman">Running and Managing Containers with Podman</h1>
<h2 id="heading-pulling-and-running-containers">Pulling and Running Containers</h2>
<p>In order to start using containers with podman, we need to install the <code>container-tools</code> module:</p>
<pre><code>[root@server1 student]# yum <span class="hljs-built_in">module</span> install container-tools
...
</code></pre><p>Let's choose a reliable image to try out, one that comes from an official project, is up to date and has been scanned for vulnerabilities:</p>
<pre><code>[student@server1 ~]$ podman pull registry.access.redhat.com/ubi8/ubi
Trying to pull registry.access.redhat.com/ubi8/ubi...
Getting image source signatures
Copying blob <span class="hljs-number">64607</span>cc74f9c done  
Copying blob <span class="hljs-number">13897</span>c84ca57 done  
Copying config <span class="hljs-number">9992</span>f11c61 done  
Writing manifest to image destination
Storing signatures
<span class="hljs-number">9992</span>f11c61c5fa38a691f80c7e13b75960b536aade4cce8543433b24623bce68
[student@server1 ~]$
</code></pre><p>We can verify that the image is on our system using the <strong>podman images</strong> command:</p>
<pre><code>[student@server1 ~]$ podman images
REPOSITORY                               TAG     IMAGE ID      CREATED       SIZE
registry.access.redhat.com/ubi8/ubi      latest  <span class="hljs-number">9992</span>f11c61c5  <span class="hljs-number">11</span> days ago   <span class="hljs-number">213</span> MB
</code></pre><p>Next, let's start an interactive shell from this base image. We use the <strong>podman run</strong> command, specify the <code>-i</code> (interactive) and <code>-t</code> (pseudo-terminal) options, followed by the name of the image (ubi) and the command we wish to start once the container is up and running (bash):</p>
<pre><code>[student@server1 ~]$ podman run -it ubi bash
[root@<span class="hljs-number">888</span>b3cbea5cc /]#
</code></pre><p>We now have an interactive bash session inside the container. Notice that the container uses the host's kernel:</p>
<pre><code>[root@<span class="hljs-number">888</span>b3cbea5cc /]# ls
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  <span class="hljs-keyword">var</span>

[root@<span class="hljs-number">888</span>b3cbea5cc /]# cat /etc/os-release  | grep -i ^NAME
NAME=<span class="hljs-string">"Red Hat Enterprise Linux"</span>

[root@<span class="hljs-number">888</span>b3cbea5cc /]# uname -r
<span class="hljs-number">4.18</span><span class="hljs-number">.0</span><span class="hljs-number">-240.</span>el8.x86_64
</code></pre><p>We can add software to the container:</p>
<pre><code>[root@<span class="hljs-number">888</span>b3cbea5cc /]# yum install procps -y
...

[root@<span class="hljs-number">888</span>b3cbea5cc /]# ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           <span class="hljs-number">1</span>       <span class="hljs-number">0</span>  <span class="hljs-number">0</span> <span class="hljs-number">13</span>:<span class="hljs-number">13</span> pts/<span class="hljs-number">0</span>    <span class="hljs-number">00</span>:<span class="hljs-number">00</span>:<span class="hljs-number">00</span> bash
root          <span class="hljs-number">39</span>       <span class="hljs-number">1</span>  <span class="hljs-number">0</span> <span class="hljs-number">13</span>:<span class="hljs-number">20</span> pts/<span class="hljs-number">0</span>    <span class="hljs-number">00</span>:<span class="hljs-number">00</span>:<span class="hljs-number">00</span> ps -ef
</code></pre><p>Notice that from within the container, we only see two running processes: the shell and the <code>ps</code> command. PID 1 is the bash shell.</p>
<p>We can exit the container by using the <strong>exit</strong> command. The container is now no longer running, but it's still available on the host in a stopped state. The <strong>podman ps --all</strong> command shows all available containers:</p>
<pre><code>[student@server1 ~]$ podman ps -a
CONTAINER ID  IMAGE                                       COMMAND  CREATED        STATUS                    PORTS   NAMES
888b3cbea5cc  registry.access.redhat.com/ubi8/ubi:latest  bash     9 minutes ago  Exited (0) 3 seconds ago          musing_almeida
</code></pre><h2 id="heading-managing-container-state">Managing Container State</h2>
<p>Unless you specifically set a container to be removed when it's stopped (<code>--rm</code> option), paused or fails, the container is still on your system. You can see the status of all containers on the system, running or stopped, using the <code>podman ps</code> command:</p>
<pre><code>[student@server1 ~]$ podman run -d nginx
e968c7e569cbe60d909b2108ba5a2067bb3e771327f4729b85566280efe944a6

[student@server1 ~]$ podman ps
CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS            PORTS   NAMES
e968c7e569cb  docker.io/library/nginx:latest  nginx -g daemon o...  <span class="hljs-number">4</span> seconds ago  Up <span class="hljs-number">3</span> seconds ago          loving_swartz

[student@server1 ~]$ podman stop e968
e968c7e569cbe60d909b2108ba5a2067bb3e771327f4729b85566280efe944a6

[student@server1 ~]$ podman ps
CONTAINER ID  IMAGE   COMMAND  CREATED  STATUS  PORTS   NAMES

[student@server1 ~]$ podman ps -a
CONTAINER ID  IMAGE                                       COMMAND               CREATED         STATUS                    PORTS   NAMES
e968c7e569cb  docker.io/library/nginx:latest              nginx -g daemon o...  <span class="hljs-number">27</span> seconds ago  Exited (<span class="hljs-number">0</span>) <span class="hljs-number">5</span> seconds ago          loving_swartz
</code></pre><p>The <code>podman stop</code> command sends a SIGTERM signal, and if the container hasn't stopped after 10 seconds, it sends a SIGKILL signal. 
You can also send the SIGKILL signal immediately using the <code>podman kill</code> command.
Just as <code>podman stop</code> stops a container, you can start a container using <code>podman start</code> or restart one using <code>podman restart</code>.</p>
<p>Lastly, we can delete the container permanently by using the <code>podman rm</code> command:</p>
<pre><code>[student@server1 ~]$ podman rm e968
e968c7e569cbe60d909b2108ba5a2067bb3e771327f4729b85566280efe944a6

[student@server1 ~]$ podman ps -a
CONTAINER ID  IMAGE                                       COMMAND  CREATED        STATUS                    PORTS   NAMES
[student@server1 ~]$
</code></pre><p>Note that the <code>podman rm</code> command only deletes the container and not the image.</p>
<h2 id="heading-running-commands-in-a-container">Running commands in a container</h2>
<p>When we are detached from a container we can still execute commands inside the container using <code>podman exec</code>:</p>
<pre><code>[student@server1 ~]$ podman exec cd87 cat /etc/os-release | grep ^NAME
NAME=<span class="hljs-string">"Debian GNU/Linux"</span>
</code></pre><p>Or, we can attach to the container:</p>
<pre><code>[student@server1 ~]$ podman exec -it cd87 /bin/bash
root@cd87164b978f:/#
</code></pre><p>...and detach using the <code>Ctrl-p Ctrl-q</code> key sequence.</p>
<h2 id="heading-managing-container-ports">Managing Container Ports</h2>
<p>We can map a host port to the container application port to make the application in the container reachable from the host machine:</p>
<pre><code>[student@server1 ~]$ podman run -d -p <span class="hljs-number">8000</span>:<span class="hljs-number">80</span> nginx
<span class="hljs-number">965</span>fe32d0b4b96d469ddb5638edaa5ac18fe41fc083082844bc8ddae0f6a9a33

[student@server1 ~]$ podman ps
CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS            PORTS                 NAMES
<span class="hljs-number">965</span>fe32d0b4b  docker.io/library/nginx:latest  nginx -g daemon o...  <span class="hljs-number">3</span> seconds ago  Up <span class="hljs-number">2</span> seconds ago  <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8000</span>-&gt;<span class="hljs-number">80</span>/tcp  musing_mclaren

[student@server1 ~]$ podman port -a
<span class="hljs-number">965</span>fe32d0b4b  <span class="hljs-number">80</span>/tcp -&gt; <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8000</span>

[student@server1 ~]$ podman port <span class="hljs-number">965</span>
<span class="hljs-number">80</span>/tcp -&gt; <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8000</span>
</code></pre><p>In the example above, we mapped host port 8000 to port 80 of the container.
Note that rootless containers can only map container ports to unprivileged ports (1024 and above) on the host.</p>
<p>With the above done, we can <code>curl</code> the host port and see Nginx serving its default content:</p>
<pre><code>[student@server1 ~]$ curl localhost:<span class="hljs-number">8000</span>
&lt;!DOCTYPE html&gt;
<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">html</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">head</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">title</span>&gt;</span>Welcome to nginx!<span class="hljs-tag">&lt;/<span class="hljs-name">title</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">style</span>&gt;</span><span class="css">
    <span class="hljs-selector-tag">body</span> {
        <span class="hljs-attribute">width</span>: <span class="hljs-number">35em</span>;
        <span class="hljs-attribute">margin</span>: <span class="hljs-number">0</span> auto;
        <span class="hljs-attribute">font-family</span>: Tahoma, Verdana, Arial, sans-serif;
    }
</span><span class="hljs-tag">&lt;/<span class="hljs-name">style</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">head</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">body</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>Welcome to nginx!<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>

<span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>For online documentation and support please refer to
<span class="hljs-tag">&lt;<span class="hljs-name">a</span> <span class="hljs-attr">href</span>=<span class="hljs-string">"http://nginx.org/"</span>&gt;</span>nginx.org<span class="hljs-tag">&lt;/<span class="hljs-name">a</span>&gt;</span>.<span class="hljs-tag">&lt;<span class="hljs-name">br</span>/&gt;</span>
Commercial support is available at
<span class="hljs-tag">&lt;<span class="hljs-name">a</span> <span class="hljs-attr">href</span>=<span class="hljs-string">"http://nginx.com/"</span>&gt;</span>nginx.com<span class="hljs-tag">&lt;/<span class="hljs-name">a</span>&gt;</span>.<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>

<span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span><span class="hljs-tag">&lt;<span class="hljs-name">em</span>&gt;</span>Thank you for using nginx.<span class="hljs-tag">&lt;/<span class="hljs-name">em</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">body</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">html</span>&gt;</span></span>
</code></pre><p>Now, if we want the application to be reachable from outside the host machine, we must not forget to configure the host machine's firewall:</p>
<pre><code>[student@server1 ~]$ su - root
<span class="hljs-attr">Password</span>: 
[root@server1 ~]# firewall-cmd --add-port=<span class="hljs-number">8000</span>/tcp --permanent &amp;&amp; firewall-cmd --reload
success
success
[root@server1 ~]# exit
logout
[student@server1 ~]$
</code></pre><blockquote>
<p>By default podman runs rootless containers.
Rootless containers cannot bind to a privileged port and do NOT have an IP address; you need port forwarding instead. If you need a container with an IP address, you need a root container: <code>sudo podman run -d nginx</code></p>
</blockquote>
<h2 id="heading-attaching-storage-to-containers">Attaching Storage to Containers</h2>
<p>Storage in containers is ephemeral: modifications are written to the container's writable layer and only stay around for the lifetime of the container.
For persistent storage, we use bind mounts to connect a directory inside the container to a directory on the host machine.</p>
<p>We start by preparing the host machine: creating directories, setting basic permissions and changing the SELinux file context type to <code>container_file_t</code>. 
SELinux is very important when using root containers, as without it, a root container would have access to the entire host file system.</p>
<p>I'll run through an example where we set the document root of the <code>nginx</code> image to the <code>/home/student/html</code> directory on the host machine. Inside that directory we'll create a basic html file that the nginx container is going to serve.</p>
<h3 id="heading-preparing-host-storage">Preparing Host Storage</h3>
<pre><code>[root@server1 student]# pwd
/home/student

[student@server1 ~]$ ls -l
total <span class="hljs-number">0</span>
drwxrwxr-x. <span class="hljs-number">2</span> student student <span class="hljs-number">6</span> Apr <span class="hljs-number">12</span> <span class="hljs-number">21</span>:<span class="hljs-number">29</span> html

[root@server1 student]# semanage fcontext -a -t container_file_t <span class="hljs-string">"/home/student/html(/.*)?"</span>
[root@server1 student]# restorecon -Rv /home/student/html
Relabeled /home/student/html <span class="hljs-keyword">from</span> unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:container_file_t:s0
</code></pre><h3 id="heading-mounting-storage-inside-the-container">Mounting Storage Inside the Container.</h3>
<p>At this point we can delete the container from the previous example, start a new container and bind mount the host directory <code>/home/student/html</code> to the default document root of Nginx in the container: <code>/usr/share/nginx/html</code></p>
<p>If the container user owns the host directory, the <code>:Z</code> (SELinux) option can be used:
<code>podman run -d --name web1 -p 8000:80 -v /home/student/html:/usr/share/nginx/html:Z nginx</code></p>
<ul>
<li><code>-d</code> we run the container in detached mode.</li>
<li><code>--name</code> we set a name for our new container.</li>
<li><code>-p</code> we map the host port to the container port.</li>
<li><code>-v</code> we bind a host directory to a directory inside the container.</li>
<li><code>nginx</code> the name of the image we use to start our container from. </li>
</ul>
<pre><code>[student@server1 ~]$ podman run -d --name web1 -p <span class="hljs-number">8000</span>:<span class="hljs-number">80</span> -v /home/student/html:<span class="hljs-regexp">/usr/</span>share/nginx/html:Z nginx
<span class="hljs-number">1988217288</span>c55050a2820881ccf75e4436097d8128f9d2dec8a08af6674c6f88

[student@server1 ~]$ podman ps
CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS            PORTS                 NAMES
<span class="hljs-number">1988217288</span>c5  docker.io/library/nginx:latest  nginx -g daemon o...  <span class="hljs-number">4</span> seconds ago  Up <span class="hljs-number">4</span> seconds ago  <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8000</span>-&gt;<span class="hljs-number">80</span>/tcp  web1

[student@server1 ~]$ curl localhost:<span class="hljs-number">8000</span>
&lt;html&gt;
<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">head</span>&gt;</span><span class="hljs-tag">&lt;<span class="hljs-name">title</span>&gt;</span>403 Forbidden<span class="hljs-tag">&lt;/<span class="hljs-name">title</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">head</span>&gt;</span></span>
<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">body</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">center</span>&gt;</span><span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>403 Forbidden<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">center</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">hr</span>&gt;</span><span class="hljs-tag">&lt;<span class="hljs-name">center</span>&gt;</span>nginx/1.19.9<span class="hljs-tag">&lt;/<span class="hljs-name">center</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">body</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">html</span>&gt;</span></span>
</code></pre><p>After starting the container, you'll see that the <code>curl</code> test now returns a <code>403 Forbidden</code> status.
This is because the Nginx document root is bound to an empty directory on our host machine. Let's create an html file for Nginx to serve:</p>
<pre><code>[student@server1 ~]$ echo <span class="hljs-string">"&lt;h1&gt;TEST NGINX&lt;/h1&gt;"</span> &gt; html/index.html
[student@server1 ~]$ curl localhost:<span class="hljs-number">8000</span>
&lt;h1&gt;TEST NGINX&lt;/h1&gt;
[student@server1 ~]$
</code></pre><p>At this point we can manage the content that Nginx is serving directly from the host machine.</p>
<h3 id="heading-environment-variables">Environment Variables</h3>
<p>Podman allows us to set arbitrary environment variables that will become available to processes running in the container:  </p>
<pre><code>podman run -d --name mydb -e MYSQL_ROOT_PASSWORD=password -e MYSQL_USER=student -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=studentdb -p <span class="hljs-number">3306</span>:<span class="hljs-number">3306</span> mariadb
</code></pre><p>Using the <code>-e</code> option, in the above example, we set the MySQL root password, user, password and database name. If we don't specify a value for a variable, then podman will look for the value in the host environment and only set it if that variable has a value.</p>
<p>Similarly, instead of passing the environment variables one by one, we can define them in a file and then pass the filename to podman using the <code>--env-file</code> option:
<code>podman run -d --name mydb --env-file=variables.txt -p 9999:3306 mariadb</code></p>
<pre><code>[student@server1 ~]$ cat variables.txt 
MYSQL_ROOT_PASSWORD=password
MYSQL_USER=student
MYSQL_PASSWORD=password
MYSQL_DATABASE=studentdb
</code></pre><p>We can now connect from the host machine to the MariaDB instance in the container:</p>
<pre><code>[student@server1 ~]$ podman run -d --name mydb --env-file=variables.txt -p <span class="hljs-number">3306</span>:<span class="hljs-number">3306</span> mariadb
bd08dcbd3eef3907423ee2e55164e1e222a511f58a96d2c4e474f4ea8d56235b

[student@server1 ~]$ mysql -u student -h <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span> -p
Enter password: 

Welcome to the MySQL monitor.  Commands end <span class="hljs-keyword">with</span> ; or \g.
Your MySQL connection id is <span class="hljs-number">3</span>
Server version: <span class="hljs-number">5.5</span><span class="hljs-number">.5</span><span class="hljs-number">-10.5</span><span class="hljs-number">.9</span>-MariaDB<span class="hljs-number">-1</span>:<span class="hljs-number">10.5</span><span class="hljs-number">.9</span>+maria~focal mariadb.org binary distribution

Copyright (c) <span class="hljs-number">2000</span>, <span class="hljs-number">2020</span>, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark <span class="hljs-keyword">of</span> Oracle Corporation and/or its
affiliates. Other names may be trademarks <span class="hljs-keyword">of</span> their respective
owners.

Type <span class="hljs-string">'help;'</span> or <span class="hljs-string">'\h'</span> <span class="hljs-keyword">for</span> help. Type <span class="hljs-string">'\c'</span> to clear the current input statement.

mysql&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| studentdb          |
+--------------------+
<span class="hljs-number">2</span> rows <span class="hljs-keyword">in</span> set (<span class="hljs-number">0.00</span> sec)
</code></pre><blockquote>
<p>Some containers <strong>require</strong> environment variables to run. If a container fails because of this requirement, use <code>podman logs container_name</code> to see the application log.
Alternatively, use <code>podman inspect image_name | grep -i usage</code>.</p>
</blockquote>
<h2 id="heading-managing-containers-as-services">Managing Containers as Services</h2>
<p>Now that we have a running container, we can have it start automatically in a stand-alone situation: the container starts even though the user who runs it is not logged in.
For this we create systemd <em>user</em> unit files (for rootless containers) and manage them with <strong>systemctl</strong>.</p>
<p>Systemd user services start when a user session is opened and stop when the user session is closed.
We need to use the <code>loginctl enable-linger</code> command to start systemd user services at boot without requiring the user to login:</p>
<pre><code>[root@server1 ~]# loginctl enable-linger student
[root@server1 ~]# loginctl show-user student | grep -i ^linger
Linger=yes
[root@server1 ~]#
</code></pre><p>Next, we use <code>podman generate systemd</code> to generate a systemd user unit file. This creates the file in the working directory.
We need to create the <code>~/.config/systemd/user</code> directory (for a root container this would be <code>/etc/systemd/system</code>) and move the user unit file into this directory.</p>
<pre><code>[student@server1 ~]$ mkdir -p ~<span class="hljs-regexp">/.config/</span>systemd/user
[student@server1 ~]$ podman generate systemd --name mydb --files
/home/student/container-mydb.service

[student@server1 ~]$ mv container-mydb.service ~<span class="hljs-regexp">/.config/</span>systemd/user/
[student@server1 ~]$ systemctl --user daemon-reload 
[student@server1 ~]$ systemctl --user enable container-mydb.service 
Created symlink /home/student/.config/systemd/user/multi-user.target.wants/container-mydb.service → /home/student/.config/systemd/user/container-mydb.service.
Created symlink /home/student/.config/systemd/user/<span class="hljs-keyword">default</span>.target.wants/container-mydb.service → /home/student/.config/systemd/user/container-mydb.service.
[student@server1 ~]$
</code></pre><p>When we reboot our host machine, the <code>mydb</code> container will automatically start even though the <code>student</code> user is not logged in.</p>
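<p>The generated unit file looks roughly like this (an abridged sketch; the exact directives vary with the podman version and the options used):</p>

```ini
# ~/.config/systemd/user/container-mydb.service (sketch)
[Unit]
Description=Podman container-mydb.service
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start mydb
ExecStop=/usr/bin/podman stop -t 10 mydb
Type=forking

[Install]
WantedBy=default.target
```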
<p>To have systemd create the container when the service starts, and delete the container when the service stops, add the <code>--new</code> option. Keep in mind you'll lose all changes if you didn't configure persistent storage for the container:  </p>
<pre><code>[student@server1 ~]$ podman generate systemd --name mydb --files --<span class="hljs-keyword">new</span>
</code></pre><h2 id="heading-working-with-images">Working with Images</h2>
<p>An image is a read-only but runnable instance of a container that can be used to build new images.
They are obtained from registries which are configured in <code>/etc/containers/registries.conf</code>:</p>
<pre><code>[student@server1 ~]$ grep -ia1 ^registries /etc/containers/registries.conf 
[registries.search]
registries = [<span class="hljs-string">'registry.access.redhat.com'</span>, <span class="hljs-string">'registry.redhat.io'</span>, <span class="hljs-string">'docker.io'</span>]

--
[registries.insecure]
registries = []

--
[registries.block]
registries = []
</code></pre><p>Under the <code>[registries.search]</code> key we find an array of registries that are searched for an image in the order they appear. 
For example, if you run <code>podman pull nginx</code>, podman looks for the <code>nginx</code> image on <code>registry.access.redhat.com</code>, <code>registry.redhat.io</code> and <code>docker.io</code>, in that order, until it finds the image.</p>
<p>Registries that do not use TLS, or that use self-signed certificates, need to be listed under <code>[registries.insecure]</code>.</p>
<p>You can block specific registries under <code>[registries.block]</code>, or, if you specify a wildcard (<code>"*"</code>) then all registries are blocked except those that were specified under <code>[registries.search]</code>.</p>
<p>You can also verify which registries are in use by issuing the <code>podman info</code> command.</p>
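<p>The search order can also be extracted with standard text tools. The sketch below works on a throwaway copy of the file (so no root access is needed); the heredoc merely mirrors the default contents shown above:</p>

```shell
# Work on a local copy standing in for /etc/containers/registries.conf.
cat > /tmp/registries.conf <<'EOF'
[registries.search]
registries = ['registry.access.redhat.com', 'registry.redhat.io', 'docker.io']

[registries.insecure]
registries = []
EOF

# Print the search array, the same way grep -A1 reads the real file.
grep -A1 '^\[registries.search\]' /tmp/registries.conf | grep '^registries'
```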
<h3 id="heading-searching-for-images">Searching for images</h3>
<p>We use the <code>podman search</code> command to search for images, either across all configured registries or on a specific registry. The results can also be filtered with various options. 
A few examples:</p>
<pre><code>[student@server1 ~]$ podman search docker.io/nginx --limit <span class="hljs-number">1</span>
INDEX       NAME                      DESCRIPTION                STARS   OFFICIAL   AUTOMATED
docker.io   docker.io/library/nginx   Official build <span class="hljs-keyword">of</span> Nginx.   <span class="hljs-number">14707</span>   [OK] 

[student@server1 ~]$ podman search registry.redhat.io/nginx --limit <span class="hljs-number">1</span>
INDEX       NAME                                 DESCRIPTION                                       STARS   OFFICIAL   AUTOMATED
redhat.io   registry.redhat.io/rhel8/nginx<span class="hljs-number">-116</span>   Platform <span class="hljs-keyword">for</span> running nginx <span class="hljs-number">1.16</span> or building ...   <span class="hljs-number">0</span> 

[student@server1 ~]$ podman search docker.io/mariadb --filter is-official=<span class="hljs-literal">true</span>
INDEX       NAME                        DESCRIPTION                                       STARS   OFFICIAL   AUTOMATED
docker.io   docker.io/library/mariadb   MariaDB Server is a high performing open sou...   <span class="hljs-number">4043</span>    [OK]
</code></pre><h3 id="heading-inspecting-images">Inspecting Images</h3>
<p>Now that we have an idea of what nginx images are available to us, we can inspect them remotely (without pulling them) using <code>skopeo</code>:</p>
<pre><code>[student@server1 ~]$ skopeo inspect docker:<span class="hljs-comment">//docker.io/nginx</span>
{
    <span class="hljs-string">"Name"</span>: <span class="hljs-string">"docker.io/library/nginx"</span>,
    <span class="hljs-string">"Digest"</span>: <span class="hljs-string">"sha256:6b5f5eec0ac03442f3b186d552ce895dce2a54be6cb834358040404a242fd476"</span>,
    <span class="hljs-string">"RepoTags"</span>: [
        <span class="hljs-string">"1-alpine-perl"</span>,
        <span class="hljs-string">"1-alpine"</span>,
...
</code></pre><p>Note that the <code>skopeo inspect</code> command always takes the <code>docker://</code> prefix regardless of what registry the image you're inspecting is located on:</p>
<pre><code>[student@server1 ~]$ skopeo inspect docker:<span class="hljs-comment">//registry.redhat.io/rhel8/mariadb-103</span>
{
    <span class="hljs-string">"Name"</span>: <span class="hljs-string">"registry.redhat.io/rhel8/mariadb-103"</span>,
    <span class="hljs-string">"Digest"</span>: <span class="hljs-string">"sha256:c6f117263e36880af79bba1de2018462126d226439d28d074f30bcfaf57dabe1"</span>,
    <span class="hljs-string">"RepoTags"</span>: [
        <span class="hljs-string">"1-116"</span>,
        <span class="hljs-string">"1-116-source"</span>,
...
</code></pre><p>If we have a <em>local</em> image we wish to inspect, we can use <code>podman inspect</code> instead:</p>
<pre><code>[student@server1 ~]$ podman images
REPOSITORY                           TAG     IMAGE ID      CREATED      SIZE
docker.io/library/nginx              latest  <span class="hljs-number">519e12</span>e2a84a  <span class="hljs-number">3</span> days ago   <span class="hljs-number">137</span> MB
docker.io/library/mariadb            latest  e76a4b2ed1b4  <span class="hljs-number">10</span> days ago  <span class="hljs-number">407</span> MB
registry.access.redhat.com/ubi8/ubi  latest  <span class="hljs-number">9992</span>f11c61c5  <span class="hljs-number">13</span> days ago  <span class="hljs-number">213</span> MB
[student@server1 ~]$ podman inspect registry.access.redhat.com/ubi8/ubi
[
    {
        <span class="hljs-string">"Id"</span>: <span class="hljs-string">"9992f11c61c5fa38a691f80c7e13b75960b536aade4cce8543433b24623bce68"</span>,
        <span class="hljs-string">"Digest"</span>: <span class="hljs-string">"sha256:17ff29c0747eade777e8b9868f97ba37e6b8b43f5ed2dbf504ff9277e1c1d1ca"</span>,
        <span class="hljs-string">"RepoTags"</span>: [
            <span class="hljs-string">"registry.access.redhat.com/ubi8/ubi:latest"</span>
...
</code></pre><h3 id="heading-removing-images">Removing Images</h3>
<p>When new images become available, the old version of the image is kept on your system.
We can remove images using the <code>podman rmi</code> command:</p>
<pre><code>[student@server1 ~]$ podman images
REPOSITORY                           TAG     IMAGE ID      CREATED      SIZE
docker.io/library/nginx              latest  <span class="hljs-number">519e12</span>e2a84a  <span class="hljs-number">3</span> days ago   <span class="hljs-number">137</span> MB
docker.io/library/mariadb            latest  e76a4b2ed1b4  <span class="hljs-number">10</span> days ago  <span class="hljs-number">407</span> MB
registry.access.redhat.com/ubi8/ubi  latest  <span class="hljs-number">9992</span>f11c61c5  <span class="hljs-number">13</span> days ago  <span class="hljs-number">213</span> MB

[student@server1 ~]$ podman rmi ubi
<span class="hljs-attr">Untagged</span>: registry.access.redhat.com/ubi8/ubi:latest
<span class="hljs-attr">Deleted</span>: <span class="hljs-number">9992</span>f11c61c5fa38a691f80c7e13b75960b536aade4cce8543433b24623bce68

[student@server1 ~]$ podman images
REPOSITORY                 TAG     IMAGE ID      CREATED      SIZE
docker.io/library/nginx    latest  <span class="hljs-number">519e12</span>e2a84a  <span class="hljs-number">3</span> days ago   <span class="hljs-number">137</span> MB
docker.io/library/mariadb  latest  e76a4b2ed1b4  <span class="hljs-number">10</span> days ago  <span class="hljs-number">407</span> MB
</code></pre><h3 id="heading-creating-images-from-a-dockerfile">Creating Images from a Dockerfile</h3>
<p>We can use <code>podman</code> and <code>buildah</code> to create new images from a Dockerfile. The resulting images are OCI compliant, so they run on any OCI-compatible container engine (such as Docker and CRI-O).</p>
<p>In the example below we prepare a Dockerfile that installs the Apache web server onto a Fedora image, then use <code>podman build</code> to create a new image from it.</p>
<pre><code>[student@server1 ~]$ cat Dockerfile 
# Base the image on Fedora
FROM fedora:latest
MAINTAINER Joeri Smissaert

# Update packages and install the Apache web server
RUN dnf -y update; dnf -y clean all
RUN dnf -y install httpd

# Expose the <span class="hljs-keyword">default</span> port <span class="hljs-number">80</span>
EXPOSE <span class="hljs-number">80</span>

# Run Apache in the foreground
CMD [<span class="hljs-string">"/usr/sbin/httpd"</span>,<span class="hljs-string">"-DFOREGROUND"</span>]

[student@server1 ~]$ podman build -t fedora-apache .
...

[student@server1 ~]$ podman images
REPOSITORY                           TAG     IMAGE ID      CREATED         SIZE
localhost/fedora-apache              latest  cb083eb46577  <span class="hljs-number">15</span> minutes ago  <span class="hljs-number">483</span> MB

[student@server1 ~]$ podman run -d --name myweb1 -p <span class="hljs-number">8080</span>:<span class="hljs-number">80</span> fedora-apache
<span class="hljs-number">2</span>f8f1ef6c484f2825f7a11f30c8601799b0736145917f6428b395b4c599cbd6e

[student@server1 ~]$ podman ps
CONTAINER ID  IMAGE                             COMMAND               CREATED        STATUS            PORTS                   NAMES
<span class="hljs-number">2</span>f8f1ef6c484  localhost/fedora-apache:latest    /usr/sbin/httpd -...  <span class="hljs-number">3</span> seconds ago  Up <span class="hljs-number">2</span> seconds ago  <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8080</span>-&gt;<span class="hljs-number">80</span>/tcp    myweb1
</code></pre><h3 id="heading-tagging-and-pushing-an-image-to-a-registry">Tagging and Pushing an Image to a Registry</h3>
<p>In this example, I'll tag and push the <code>fedora-apache</code> image to <a target="_blank" href="https://quay.io">Quay.io</a>.</p>
<pre><code>[student@server1 ~]$ podman login quay.io
<span class="hljs-attr">Username</span>: ********
Password: 
Login Succeeded!

[student@server1 ~]$ podman tag fedora-apache quay.io/smissaertj/fedora-apache:v1<span class="hljs-number">.0</span>
[student@server1 ~]$ podman push quay.io/smissaertj/fedora-apache:v1<span class="hljs-number">.0</span>
Getting image source signatures
Copying blob <span class="hljs-number">7</span>ddfcddbaf0e done  
Copying blob dcbc36c2ed7d done  
Copying blob <span class="hljs-number">6</span>d668c00f3f1 done  
Copying config cb083eb465 done  
Writing manifest to image destination
Copying config cb083eb465 [--------------------------------------] <span class="hljs-number">0.0</span>b / <span class="hljs-number">1.9</span>KiB
Writing manifest to image destination
Storing signatures
[student@server1 ~]$
</code></pre><p>You can find the image here:
https://quay.io/smissaertj/fedora-apache</p>
]]></content:encoded></item><item><title><![CDATA[Configuring and Managing Time Services]]></title><description><![CDATA[{{< figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo">}}
Understanding Local Time
When a Linux machine boots, the hardware clock, also referred to as the real-time clock, is read. This clock resides in the computer hardware, it's ...]]></description><link>https://blog.joerismissaert.dev/configuring-and-managing-time-services</link><guid isPermaLink="true">https://blog.joerismissaert.dev/configuring-and-managing-time-services</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Wed, 10 Feb 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo"&gt;}}</p>
<h1 id="heading-understanding-local-time">Understanding Local Time</h1>
<p>When a Linux machine boots, the hardware clock, also referred to as the real-time clock, is read. This clock resides in an integrated circuit on the system board and is independent of the current state of the operating system; it keeps running while the computer is shut down, as long as the system board battery or power supply feeds it. The hardware clock value is known as hardware time, and the system gets its initial time setting from it. The hardware clock is usually set to Coordinated Universal Time (UTC).</p>
<p>System time is maintained by the operating system and is independent of the hardware clock. When the system time is changed, the new value is not automatically synchronized back to the hardware clock.  </p>
<p>System time is kept in UTC; applications running on the operating system convert it into local time. Local time is the actual time in the current time zone, with daylight saving time (DST) taken into account so that the system always shows an accurate time.</p>
<p>{{</p><table>}}
Concept | Explanation
-------|------
Hardware clock | The clock that resides on the main board of a computer system.
Real-time clock | Same as hardware clock.
System time | The time that is maintained by the operating system.
Software clock | Similar to system time.
UTC | Coordinated Universal Time, a worldwide standard time.
Daylight saving time | Calculation that is made to change time automatically when DST changes occur.
Local time | The time that corresponds to the time in the current time zone.
{{</table>}}<p></p>
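<p>The distinction between UTC and local time is easy to see directly with GNU <code>date</code>; a quick illustration:</p>

```shell
date -u        # system time rendered as UTC
date           # the same instant converted to local time
date -u +%Z    # prints the zone name: UTC
```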
<h1 id="heading-using-network-time-protocol">Using Network Time Protocol</h1>
<p>Since the hardware clock is typically part of the computer's motherboard, it can be unreliable. It's a good idea to use time from a more reliable source. Generally speaking, two solutions are available.  </p>
<p>One option is to buy a more reliable hardware clock. Using an external hardware clock is a common solution in datacenter environments to guarantee reliable time is maintained even if external networks for time synchronization are temporarily not available. An example would be a very accurate <a target="_blank" href="https://www.wired.co.uk/article/google-gps-powered-database">atomic clock</a>. </p>
<p>A more common solution is to configure your machine to use Network Time Protocol (NTP), a method of maintaining system time provided through NTP servers on the Internet. To determine which Internet NTP server should be used, the concept of <em>stratum</em> is introduced. Stratum defines the reliability of an NTP time source, and the lower the stratum value, the more reliable it is. Typically, Internet time servers are using stratum 1 or 2. When you configure a local time server, you can use a higher stratum value. As a consequence, machines configured to use the local time server will only ever use it if Internet time servers (with a lower stratum) are not available. </p>
<p>Setting up a machine to use NTP on RHEL 8 is easy if the server is already connected to the internet. In this case the <code>/etc/chrony.conf</code> file is prepopulated with a standard list of NTP servers. You would only need to turn on NTP using the <code>timedatectl set-ntp true</code> command (more on this later). </p>
<h1 id="heading-managing-time-on-red-hat-enterprise-linux">Managing Time on Red Hat Enterprise Linux</h1>
<p>On a Linux system, time is calculated as an offset from <em>epoch</em> time. <a target="_blank" href="https://en.wikipedia.org/wiki/Unix_time">Epoch time</a> is the number of seconds since January 1, 1970, in UTC. You can convert an epoch time stamp to a human-readable form using the <code>date --date</code> command, followed by the epoch string prefixed with an @:</p>
<pre><code>[student@server1 ~]$ date --date @<span class="hljs-number">1420987251</span>
Sun Jan <span class="hljs-number">11</span> <span class="hljs-number">06</span>:<span class="hljs-number">40</span>:<span class="hljs-number">51</span> PM +<span class="hljs-number">04</span> <span class="hljs-number">2015</span>
</code></pre><h2 id="heading-using-date">Using date</h2>
<p>The <code>date</code> command enables you to manage the system time. Or you can use it to show the current time in different formats:</p>
<ul>
<li><code>date</code> - Shows the current system time.</li>
<li><code>date +%d-%m-%y</code> - Shows the current system day, month and year.</li>
<li><code>date -s 16:03</code> - Sets the current system time to 3 minutes past 4 p.m.</li>
</ul>
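<p>These format options combine naturally with the epoch arithmetic shown earlier. A small round-trip sketch with GNU <code>date</code>:</p>

```shell
# Capture the current time as epoch seconds, render it, then parse it back.
now=$(date +%s)
human=$(date --date "@$now")
echo "$human"

# Parsing the human-readable form returns the original epoch value.
back=$(date --date "$human" +%s)
[ "$back" = "$now" ] && echo "round trip OK"
```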
<h2 id="heading-using-hwclock">Using hwclock</h2>
<p>The <code>date</code> command will not change the hardware time. To manage hardware time you can use the <code>hwclock</code> command, which has many options (See <code>hwclock --help</code>).
Some options of interest:</p>
<ul>
<li><code>hwclock --systohc</code> - Sync the current system time to the hardware clock.</li>
<li><code>hwclock --hctosys</code> - Sync the current hardware time to the system clock.</li>
</ul>
<h2 id="heading-using-timedatectl">Using timedatectl</h2>
<p>The <code>timedatectl</code> command shows detailed information about the current time and date. It also displays the time zone, in addition to information about the use of NTP network time and DST.</p>
<p>The <code>timedatectl</code> command works with the below subcommands to perform time operations:
{{</p><table>}}
Command | Explanation
-------|------
status | Shows the current time settings.
set-time TIME | Sets the current time.
set-timezone TIMEZONE | Sets the time zone.
list-timezones | Shows a list of all time zones.
set-local-rtc [0|1] | Controls whether the real-time clock (hardware clock) is in local time.
set-ntp [0|1] | Enables or disables NTP.
{{</table>}}<p></p>
<pre><code>[root@server1 ~]# timedatectl status
               Local time: Mon <span class="hljs-number">2021</span><span class="hljs-number">-03</span><span class="hljs-number">-15</span> <span class="hljs-number">21</span>:<span class="hljs-number">27</span>:<span class="hljs-number">17</span> +<span class="hljs-number">04</span>
           Universal time: Mon <span class="hljs-number">2021</span><span class="hljs-number">-03</span><span class="hljs-number">-15</span> <span class="hljs-number">17</span>:<span class="hljs-number">27</span>:<span class="hljs-number">17</span> UTC
                 RTC time: Mon <span class="hljs-number">2021</span><span class="hljs-number">-03</span><span class="hljs-number">-15</span> <span class="hljs-number">17</span>:<span class="hljs-number">27</span>:<span class="hljs-number">17</span>
                Time zone: Indian/Mauritius (+<span class="hljs-number">04</span>, +<span class="hljs-number">0400</span>)
System clock synchronized: yes
              NTP service: active
          RTC <span class="hljs-keyword">in</span> local TZ: no


[root@server1 ~]# timedatectl set-time <span class="hljs-number">22</span>:<span class="hljs-number">30</span>
[root@server1 ~]# timedatectl
               Local time: Mon <span class="hljs-number">2021</span><span class="hljs-number">-03</span><span class="hljs-number">-15</span> <span class="hljs-number">22</span>:<span class="hljs-number">30</span>:<span class="hljs-number">03</span> +<span class="hljs-number">04</span>
           Universal time: Mon <span class="hljs-number">2021</span><span class="hljs-number">-03</span><span class="hljs-number">-15</span> <span class="hljs-number">18</span>:<span class="hljs-number">30</span>:<span class="hljs-number">03</span> UTC
                 RTC time: Mon <span class="hljs-number">2021</span><span class="hljs-number">-03</span><span class="hljs-number">-15</span> <span class="hljs-number">18</span>:<span class="hljs-number">30</span>:<span class="hljs-number">03</span>
                Time zone: Indian/Mauritius (+<span class="hljs-number">04</span>, +<span class="hljs-number">0400</span>)
System clock synchronized: no
              NTP service: inactive
          RTC <span class="hljs-keyword">in</span> local TZ: no
</code></pre><p>After enabling NTP again, you will have to wait a few minutes for the time to synchronize again:</p>
<pre><code>[root@server1 ~]# timedatectl set-ntp <span class="hljs-number">1</span>
[root@server1 ~]# timedatectl
               Local time: Mon <span class="hljs-number">2021</span><span class="hljs-number">-03</span><span class="hljs-number">-15</span> <span class="hljs-number">21</span>:<span class="hljs-number">30</span>:<span class="hljs-number">19</span> +<span class="hljs-number">04</span>
               ....

[root@server1 ~]# timedatectl list-timezones | grep -i mauritius
Indian/Mauritius
[root@server1 ~]# timedatectl set-timezone Indian/Mauritius
[root@server1 ~]#
</code></pre><h2 id="heading-managing-time-zone-settings">Managing Time Zone Settings</h2>
<p>Between Linux servers, time is normally communicated in UTC. This allows servers located in different time zones to use the same time settings, making it easier to manage large organizations. To make it easier for end users, we should set the local time, and for this we would need to configure an appropriate time zone. </p>
<p>There are 3 approaches to setting the local time zone.</p>
<ul>
<li>Use <code>timedatectl set-timezone</code></li>
<li>Use the <code>tzselect</code> command to start a text-based interface.</li>
<li>Go to the <code>/usr/share/zoneinfo</code> directory, where you'll find subdirectories containing files for each time zone. To select a time zone, create a symbolic link named <code>/etc/localtime</code> to the relevant time zone file, e.g. <code>ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime</code></li>
</ul>
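<p>Before committing to a zone system-wide with any of these methods, you can preview a candidate zone for a single command via the <code>TZ</code> environment variable, which does not modify <code>/etc/localtime</code> (zone names match the files under <code>/usr/share/zoneinfo</code>):</p>

```shell
# One-off conversions for previewing a time zone.
TZ=America/Los_Angeles date
TZ=Indian/Mauritius date
TZ=UTC date +%Z     # prints: UTC
```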
<h2 id="heading-configuring-time-service-clients">Configuring Time Service Clients</h2>
<p>By default, the <strong>chrony</strong> service is configured to get the right time from the Internet. In a corporate environment it is not always desirable for clients to go out to the Internet, and instead time servers on the local network are configured.</p>
<p>In the below example we'll configure an NTP server on <code>server2</code> and we'll configure <code>server1</code> as the client.</p>
<p>On <code>server1</code> we comment out the predefined NTP server in <code>/etc/chrony.conf</code> and define the <code>server2</code> pool:</p>
<pre><code># Use public servers <span class="hljs-keyword">from</span> the pool.ntp.org project.
# Please consider joining the pool (http:<span class="hljs-comment">//www.pool.ntp.org/join.html).</span>
#pool <span class="hljs-number">2.</span>rhel.pool.ntp.org iburst
pool server2
</code></pre><p>On <code>server2</code> we edit <code>/etc/chrony.conf</code> to allow connections from a specific subnet, we set a stratum value, then configure the firewall and restart the <code>chronyd</code> service:</p>
<pre><code># Use public servers <span class="hljs-keyword">from</span> the pool.ntp.org project.
# Please consider joining the pool (http:<span class="hljs-comment">//www.pool.ntp.org/join.html).</span>
#pool <span class="hljs-number">2.</span>rhel.pool.ntp.org iburst

allow <span class="hljs-number">192.168</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>/<span class="hljs-number">16</span>
local stratum <span class="hljs-number">8</span>
</code></pre><pre><code>[root@server2 ~]# firewall-cmd --add-service=ntp --permanent
success
[root@server2 ~]# firewall-cmd --reload
success
[root@server2 ~]# systemctl restart chronyd
[root@server2 ~]#
</code></pre><p>Restart the <code>chronyd</code> service on <code>server1</code> and check if <code>server2</code> is used as a source:</p>
<pre><code>[root@server1 ~]# systemctl restart chronyd
[root@server1 ~]# chronyc sources
<span class="hljs-number">210</span> <span class="hljs-built_in">Number</span> <span class="hljs-keyword">of</span> sources = <span class="hljs-number">1</span>
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? server2                       <span class="hljs-number">8</span>   <span class="hljs-number">6</span>     <span class="hljs-number">1</span>     <span class="hljs-number">6</span>    +<span class="hljs-number">15</span>us[  +<span class="hljs-number">15</span>us] +/-   <span class="hljs-number">98</span>us
[root@server1 ~]#
</code></pre>]]></content:encoded></item><item><title><![CDATA[Configuring and Auto Mounting Remote File Systems Using fstab and automount: NFS & CIFS]]></title><description><![CDATA[{{< figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo">}}
Using NFS Services
The Network File System is a protocol that was developed for UNIX by Sun in the early 1980s. Its purpose is to make mounting of remote file systems in the...]]></description><link>https://blog.joerismissaert.dev/configuring-and-auto-mounting-remote-file-systems-using-fstab-and-automount-nfs-and-cifs</link><guid isPermaLink="true">https://blog.joerismissaert.dev/configuring-and-auto-mounting-remote-file-systems-using-fstab-and-automount-nfs-and-cifs</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Sun, 10 Jan 2021 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo"&gt;}}</p>
<h1 id="heading-using-nfs-services">Using NFS Services</h1>
<p>The Network File System is a protocol that was developed for UNIX by Sun in the early 1980s. Its purpose is to make it possible to mount remote file systems into the local file system hierarchy. It was often used together with Network Information Service (NIS), which provides network-based authentication: all machines connected to the NIS server used the same user accounts, and security was handled by the NIS server. By default, NFS security is limited to allowing and restricting specific hosts. </p>
<p>Without NIS, NFS seems to be an insecure solution: if user X has UID 1001 on server1 and user Y has UID 1001 on server2, then user X would have the same access to server2 resources as user Y. To prevent situations like this, NFS should be used together with a centralized authentication service such as the Lightweight Directory Access Protocol (LDAP) and Kerberos. That solution is not covered in this article. </p>
<p>On RHEL 8, NFSv4 is the default version of NFS, which you can override at mount time using the <code>nfsvers=</code> mount option. Typically, clients will automatically fall back to a previous version of NFS if required.</p>
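<p>Since the version can be pinned at mount time, the same <code>nfsvers=</code> option also works in <code>/etc/fstab</code> for persistent mounts. A hypothetical fragment (server, export, and mount point are placeholders, not taken from the examples below):</p>

```
# /etc/fstab — pin NFSv3 for one share; _netdev delays the mount
# until the network is up.
server2:/nfs_data   /mnt/nfs_data   nfs   nfsvers=3,_netdev   0 0
```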
<h2 id="heading-offering-an-nfs-share">Offering an NFS Share</h2>
<p>To setup an NFS share you would need to go through a few tasks:</p>
<ul>
<li><p>Create local directories which you want to share and copy some data into them:</p>
<pre><code>[root@server2 ~]# mkdir -p /nfs_data /nfs_users/user{<span class="hljs-number">1.</span><span class="hljs-number">.2</span>}
[root@server2 ~]# cp -r /etc/[a-c]* <span class="hljs-regexp">/nfs_data/</span>
[root@server2 ~]# cp -r /etc/[d-f]* <span class="hljs-regexp">/nfs_users/u</span>ser1/
[root@server2 ~]# cp -r /etc/[g-i]* <span class="hljs-regexp">/nfs_users/u</span>ser2/
</code></pre></li>
<li><p>Edit the <code>/etc/exports</code> file to define the NFS shares:</p>
<pre><code>[root@server2 ~]# cat /etc/<span class="hljs-built_in">exports</span>
/nfs_data    *(rw,no_root_squash)
/nfs_users        *(rw,no_root_squash)
</code></pre></li>
<li><p>Start and enable the NFS server:</p>
<pre><code>[root@server2 ~]# yum install nfs-utils
[root@server2 ~]# systemctl enable --now nfs-server
</code></pre></li>
<li><p>Configure the firewall to allow incoming NFS traffic</p>
</li>
</ul>
<pre><code>[root@server2 ~]# firewall-cmd --add-service=nfs --permanent
success
[root@server2 ~]# firewall-cmd --add-service=rpc-bind --permanent
success
[root@server2 ~]# firewall-cmd --add-service=mountd --permanent
success
[root@server2 ~]# firewall-cmd --reload
success
</code></pre><h2 id="heading-mounting-the-nfs-share">Mounting the NFS Share</h2>
<p>In order to mount an NFS share we need to know the name of the share. Typically this information is known by the administrator, but you have multiple options to discover what shares are available:</p>
<ul>
<li>If NFSv4 is used on the server, you can use a root mount. You mount the root directory of the NFS server and you'll see all shares you have access to under your local mount point. </li>
<li>Use the <code>showmount -e</code> command</li>
</ul>
<blockquote>
<p>The <code>showmount</code> command may have issues with NFSv4 servers that are behind a firewall. The command relies on the portmapper service, which uses random UDP ports, while the firewall's nfs service opens only port 2049, which doesn't allow portmapper traffic. In these cases you can use the root mount option to discover the shares.</p>
</blockquote>
<pre><code>[root@server1 ~]# showmount -e server2
Export list <span class="hljs-keyword">for</span> server2:
/nfs_data   *
/nfs_users *
</code></pre><pre><code>[root@server1 ~]# mount server2:<span class="hljs-regexp">/ /m</span>nt
[root@server1 ~]# ls /mnt/
nfs_data  nfs_users
</code></pre><h1 id="heading-using-cifs-services">Using CIFS Services</h1>
<p>Microsoft published the technical specifications of its Server Message Block (SMB) protocol. This protocol is the foundation of all shares that are created in a Windows environment.
Releasing these specifications led to the start of the Samba project, whose goal was to provide SMB services on top of other operating systems. Samba has developed into the standard for file sharing between different operating systems, and the protocol is now often referred to as the Common Internet File System (CIFS).</p>
<h2 id="heading-setting-up-a-samba-server">Setting Up a Samba Server</h2>
<p>Before jumping into configuring the samba server, let's clearly define our goals.
Server2, the samba server, should be sharing the following directories:</p>
<ul>
<li><code>/var/samba/public_share</code> - read-only access for guests, mounted on <code>/mnt/public_share</code></li>
<li><code>/var/samba/public_write_share</code> - read/write permissions for guests, mounted on <code>/mnt/public_write_share</code></li>
<li><code>/var/samba/students_share</code> - read permissions for guests, read/write permissions for users in the <code>students</code> group, mounted on <code>/mnt/students_share</code>.</li>
</ul>
<h3 id="heading-installing-and-configuring-samba">Installing and Configuring Samba</h3>
<p>Install the samba package and create the shared directories:</p>
<pre><code>[root@server2 ~]# yum install samba -y
...
[root@server2 ~]# mkdir -p /<span class="hljs-keyword">var</span>/samba/{public_share,public_write_share,students_share}
[root@server2 ~]# ls /<span class="hljs-keyword">var</span>/samba/
public_share  public_write_share  students_share
</code></pre><p>We enable the <code>smbd_anon_write</code> SELinux Boolean which allows anonymous users to modify public files labeled with the <code>public_content_rw_t</code> file context.
Next, we set the appropriate SELinux file contexts:</p>
<ul>
<li><code>public_content_t</code> - Allows Read Only access to public files.</li>
<li><code>public_content_rw_t</code> - Allows Read/Write access to public files.</li>
<li><code>samba_share_t</code> - As samba doesn't have default paths for shares, we make sure SELinux recognizes our share as a standard samba share.</li>
</ul>
<pre><code>[root@server2 samba]# pwd
/<span class="hljs-keyword">var</span>/samba

[root@server2 samba]# ls -lh
total <span class="hljs-number">0</span>
drwxr-xr-x. <span class="hljs-number">2</span> root root <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">13</span>:<span class="hljs-number">24</span> public_share
drwxr-xr-x. <span class="hljs-number">2</span> root root <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">13</span>:<span class="hljs-number">24</span> public_write_share
drwxr-xr-x. <span class="hljs-number">2</span> root root <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">13</span>:<span class="hljs-number">24</span> students_share

[root@server2 samba]# setsebool -P smbd_anon_write on
[root@server2 samba]# getsebool smbd_anon_write 
smbd_anon_write --&gt; on

[root@server2 samba]# semanage fcontext -a -t public_content_t <span class="hljs-string">"/var/samba/public_share(/.*)?"</span>
[root@server2 samba]# semanage fcontext -a -t public_content_rw_t <span class="hljs-string">"/var/samba/public_write_share(/.*)?"</span>
[root@server2 samba]# semanage fcontext -a -t samba_share_t <span class="hljs-string">"/var/samba/students_share(/.*)?"</span>
[root@server2 samba]# restorecon -Rv /<span class="hljs-keyword">var</span>/samba/
Relabeled /<span class="hljs-keyword">var</span>/samba/public_share <span class="hljs-keyword">from</span> unconfined_u:object_r:var_t:s0 to unconfined_u:object_r:public_content_t:s0
Relabeled /<span class="hljs-keyword">var</span>/samba/public_write_share <span class="hljs-keyword">from</span> unconfined_u:object_r:var_t:s0 to unconfined_u:object_r:public_content_rw_t:s0
Relabeled /<span class="hljs-keyword">var</span>/samba/students_share <span class="hljs-keyword">from</span> unconfined_u:object_r:var_t:s0 to unconfined_u:object_r:samba_share_t:s0

[root@server2 samba]# ls -lhZ
total <span class="hljs-number">0</span>
drwxr-xr-x. <span class="hljs-number">2</span> root root unconfined_u:object_r:public_content_t:s0    <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">13</span>:<span class="hljs-number">24</span> public_share
drwxr-xr-x. <span class="hljs-number">2</span> root root unconfined_u:object_r:public_content_rw_t:s0 <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">13</span>:<span class="hljs-number">24</span> public_write_share
drwxr-xr-x. <span class="hljs-number">2</span> root root unconfined_u:object_r:samba_share_t:s0       <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">13</span>:<span class="hljs-number">24</span> students_share
</code></pre><p>Create the <code>students</code> group and add the user <code>student</code> to the group.
Create the <code>smb_user</code> through which we'll be able to write to the <code>public_write_share</code> directory.
Add the Linux user <code>student</code> to samba and set a password. This credential will be used to authenticate and mount the <code>students_share</code> directory.
Set the Linux permissions on the shared directories:</p>
<pre><code>[root@server2 samba]# groupadd students
[root@server2 samba]# usermod -aG students student
[root@server2 samba]# id student
uid=<span class="hljs-number">1000</span>(student) gid=<span class="hljs-number">1000</span>(student) groups=<span class="hljs-number">1000</span>(student),<span class="hljs-number">1001</span>(students)

[root@server2 samba]# useradd smb_user --no-create-home --shell /sbin/nologin
[root@server2 samba]# 

[root@server2 samba]# smbpasswd -a student
New SMB password:
Retype <span class="hljs-keyword">new</span> SMB password:
Added user student.

[root@server2 samba]# chgrp smb_user public_write_share
[root@server2 samba]# chmod <span class="hljs-number">0770</span> public_write_share
[root@server2 samba]# chmod g+s public_write_share
[root@server2 samba]# 

[root@server2 samba]# chgrp students students_share
[root@server2 samba]# chmod <span class="hljs-number">0775</span> students_share
[root@server2 samba]# chmod g+s students_share
[root@server2 samba]# 

[root@server2 samba]# ls -lhZ
total <span class="hljs-number">0</span>
drwxr-xr-x. <span class="hljs-number">2</span> root root     unconfined_u:object_r:public_content_t:s0    <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">13</span>:<span class="hljs-number">24</span> public_share
drwxrws---. <span class="hljs-number">2</span> root smb_user unconfined_u:object_r:public_content_rw_t:s0 <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">13</span>:<span class="hljs-number">24</span> public_write_share
drwxrwsr-x. <span class="hljs-number">2</span> root students unconfined_u:object_r:samba_share_t:s0       <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">13</span>:<span class="hljs-number">24</span> students_share
</code></pre><p>Note that we don't change any permissions on <code>public_share</code>, since we only need read access.</p>
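<p>The <code>chmod g+s</code> steps above are worth a closer look: the setgid bit makes new files inherit the directory's group, which is what lets share members collaborate. The effect can be reproduced locally in a scratch directory (a minimal sketch, assuming GNU coreutils <code>stat</code>; no Samba needed):</p>

```shell
# Reproduce the "chmod g+s" step from the listing above in a scratch
# directory. The leading "2" in the octal mode is the setgid bit;
# in long listings it shows as the "s" in the group triplet.
demo=$(mktemp -d)
chmod 0775 "$demo"
chmod g+s "$demo"        # same operation as: chmod g+s students_share
stat -c '%a %A' "$demo"  # -> 2775 drwxrwsr-x
rm -rf "$demo"
```

Files created inside such a directory carry the directory's group instead of the creator's primary group, so every member of <code>students</code> can read each other's files.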
<p>Next, we configure the samba shares in <code>/etc/samba/smb.conf</code>:</p>
<pre><code>[root@server2 samba]# cd /etc/samba
[root@server2 samba]# mv smb.conf smb.conf.old
[root@server2 samba]# vim smb.conf
...

[root@server2 samba]# testparm
Load smb config files <span class="hljs-keyword">from</span> /etc/samba/smb.conf
Loaded services file OK.
Server role: ROLE_STANDALONE

Press enter to see a dump <span class="hljs-keyword">of</span> your service definitions

# Global parameters
[<span class="hljs-built_in">global</span>]
    security = USER
    workgroup = SAMBA
    idmap config * : backend = tdb


[public_read]
    comment = Public Read Only Share
    guest ok = Yes
    path = <span class="hljs-regexp">/var/</span>samba/public_share


[public_write]
    comment = Public Read/Write Share
    force user = smb_user
    guest ok = Yes
    path = <span class="hljs-regexp">/var/</span>samba/public_write_share
    read only = No
    write list = smb_user


[students]
    comment = Read/Write access <span class="hljs-keyword">for</span> the students group. Read access <span class="hljs-keyword">for</span> anyone <span class="hljs-keyword">else</span>.
    guest ok = Yes
    path = <span class="hljs-regexp">/var/</span>samba/students_share
    write list = +students
</code></pre><p>We need to allow samba traffic through our firewall:</p>
<pre><code>[root@server2 samba]# firewall-cmd --add-service=samba --permanent
success
[root@server2 samba]# firewall-cmd --reload
success
</code></pre><p>The final step before moving on to the client side would be to start and enable the samba service:</p>
<pre><code>[root@server2 samba]# systemctl enable --now smb
Created symlink /etc/systemd/system/multi-user.target.wants/smb.service → /usr/lib/systemd/system/smb.service.
</code></pre><h2 id="heading-discovering-cifs-shares">Discovering CIFS Shares</h2>
<p>On <code>server1</code>, where the shares are going to be mounted, you discover available shares using the <code>smbclient -L //hostname</code> command. 
Make sure you have the <code>cifs-utils</code> and <code>samba-client</code> packages installed:</p>
<pre><code>[root@server1 ~]# yum install -y cifs-utils samba-client
...
</code></pre><p>Let's discover the shares we created on <code>server2</code>. When you're prompted for a password, just hit Enter without providing a password.</p>
<pre><code>[root@server1 ~]# smbclient -L //server2
Enter SAMBA\root's password: 
Anonymous login successful

    Sharename       Type      Comment
    ---------       ----      -------
    public_read     Disk      Public Read Only Share
    public_write    Disk      Public Read/Write Share
    students        Disk      Read/Write access for the students group. Read access for anyone else.
    IPC$            IPC       IPC Service (Samba 4.12.3)
SMB1 disabled -- no workgroup available
[root@server1 ~]#
</code></pre><p>We're ready to move to the next step and mount our shares.</p>
<h2 id="heading-mounting-and-authenticating-to-samba-shares">Mounting and Authenticating to Samba Shares</h2>
<p>In the previous steps we created two guest shares and one share that needs authentication. 
We can mount these as follows:  </p>
<ul>
<li><code>mount -t cifs -o guest //server2/public_read /mnt/public_read_share</code>  </li>
<li><code>mount -t cifs -o guest //server2/public_write /mnt/public_write_share</code>  </li>
<li><code>mount -t cifs -o username=student,password=password //server2/students_share /mnt/students_share</code>  </li>
</ul>
<p>Before you do so, create the local mount points:</p>
<pre><code>[root@server1 ~]# mkdir /mnt/{public_read_share,public_write_share,students_share}
[root@server1 ~]# ls -l /mnt/
total <span class="hljs-number">0</span>
drwxr-xr-x. <span class="hljs-number">2</span> root root <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">14</span>:<span class="hljs-number">41</span> public_read_share
drwxr-xr-x. <span class="hljs-number">2</span> root root <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">14</span>:<span class="hljs-number">41</span> public_write_share
drwxr-xr-x. <span class="hljs-number">2</span> root root <span class="hljs-number">6</span> Mar <span class="hljs-number">11</span> <span class="hljs-number">14</span>:<span class="hljs-number">41</span> students_share

[root@server1 ~]# mount -t cifs -o guest <span class="hljs-comment">//server2/public_read /mnt/public_read_share</span>
[root@server1 ~]# mount -t cifs -o guest <span class="hljs-comment">//server2/public_write /mnt/public_write_share</span>
[root@server1 ~]# mount -t cifs -o username=student,password=password <span class="hljs-comment">//server2/students /mnt/students_share</span>
</code></pre><p>Next, test the read/write access to the shares. The outcome should be as expected.</p>
<blockquote>
<p>Note that we've mounted the share as <code>root</code>, which means the <code>/mnt/students_share</code> directory will only be writeable by the user <code>root</code>. In the next step we'll cover how to auto-mount the share at boot time.</p>
</blockquote>
<h2 id="heading-mounting-remote-file-systems-through-fstab">Mounting Remote File Systems Through fstab</h2>
<p>As we've seen in <a target="_blank" href="/managing-storage-creating-mounting-file-systems">an earlier post</a>, the <code>/etc/fstab</code> file can be used to mount file systems automatically at boot time. </p>
<h3 id="heading-mounting-nfs-shares-through-fstab">Mounting NFS Shares Through fstab</h3>
<p>Mounting NFS Shares through <code>/etc/fstab</code> is pretty straightforward. Add the following line to the <code>fstab</code> file:</p>
<pre><code>server2:/nfs_data    /nfs_data    nfs    sync    0 0
</code></pre><p>With the <code>sync</code> option we ensure that modified files are committed to the remote file system immediately instead of being placed in a write buffer.</p>
<h3 id="heading-mounting-samba-shares-through-fstab">Mounting Samba Shares Through fstab</h3>
<p>When mounting Samba file systems through <code>/etc/fstab</code>, you need to consider a specific challenge: The user credentials that are needed to issue the mount. 
These are typically specified as mount options using <code>username=</code> and <code>password=</code>, but it is not a good idea to put these in clear text in the <code>/etc/fstab</code> file.</p>
<p>We can work around this by creating a file in <code>root</code>'s home directory that contains these credentials, and referencing that file from <code>/etc/fstab</code>:</p>
<pre><code>[root@server1 ~]# pwd
/root
[root@server1 ~]# cat cifs.txt 
user=student
pass=password
[root@server1 ~]#
</code></pre><p>We set strict permissions on the file so only <code>root</code> can read it:</p>
<pre><code>[root@server1 ~]# chmod <span class="hljs-number">0600</span> cifs.txt
[root@server1 ~]#
</code></pre><p>Next, for the <code>//server2/students</code> share, we add the following line to <code>/etc/fstab</code>:</p>
<pre><code><span class="hljs-comment">//server2/students    /mnt/students_share    cifs    credentials=/root/cifs.txt,gid=students,file_mode=0664,dir_mode=0775 0 0</span>
</code></pre><p>Let's break down what the line does exactly:</p>
<ul>
<li><code>//server2/students</code> - The remote file system we're mounting.</li>
<li><code>/mnt/students_share</code> - The local mount point of the share.</li>
<li><code>cifs</code> - The remote file system type.</li>
<li><code>credentials=/root/cifs.txt</code> - Specifies the file that contains the credentials necessary to mount the remote file system.</li>
<li><code>gid=students</code> - We set group ownership on the files and directories to the group <code>students</code> .</li>
<li><code>file_mode=0664</code> - We set the necessary file permissions: read+write for Owner and Group, read for Others.</li>
<li><code>dir_mode=0775</code> -  We set the necessary directory permissions: read+write+execute for Owner and Group, read+execute for Others.</li>
<li><code>0 0</code> - We don't need backup support through the <code>dump</code> utility and we don't want <code>fsck</code> to check the disk integrity during boot.</li>
</ul>
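<p>The two mode options map to ordinary octal permissions, so you can preview what the mounted files and directories will look like with a quick local check (illustrative only, no CIFS involved; assumes GNU coreutils <code>stat</code>):</p>

```shell
# file_mode=0664 -> -rw-rw-r--  and  dir_mode=0775 -> drwxrwxr-x
f=$(mktemp)
chmod 0664 "$f"
stat -c '%A' "$f"   # -> -rw-rw-r--
d=$(mktemp -d)
chmod 0775 "$d"
stat -c '%A' "$d"   # -> drwxrwxr-x
rm -rf "$f" "$d"
```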
<p>Similarly to the above, the entry for the <code>//server2/public_write</code> share would look like this:</p>
<pre><code><span class="hljs-comment">//server2/public_write    /mnt/public_write_share    cifs    guest,file_mode=0666,dir_mode=0777    0 0</span>
</code></pre><p>We authenticate as the user <code>guest</code> against the remote file system and allow everyone read+write access to files and read+write+execute permissions to directories.</p>
<p>For the last share, <code>//server2/public_read</code>, we don't specify Linux permissions in the <code>/etc/fstab</code> file as this share has been set to read-only on the Samba server. </p>
<pre><code><span class="hljs-comment">//server2/public_read    /mnt/public_read_share    cifs    guest    0 0</span>
</code></pre><p>Here are all three <code>/etc/fstab</code> entries:</p>
<pre><code><span class="hljs-comment">//server2/public_read    /mnt/public_read_share    cifs    guest    0 0</span>
<span class="hljs-comment">//server2/public_write    /mnt/public_write_share    cifs    guest,file_mode=0666,dir_mode=0777    0 0</span>
<span class="hljs-comment">//server2/students    /mnt/students_share    cifs    credentials=/root/cifs.txt,gid=students,file_mode=0664,dir_mode=0775 0 0</span>
</code></pre><h2 id="heading-using-automount-to-mount-remote-file-systems">Using Automount to Mount Remote File Systems</h2>
<p>As an alternative to using <code>/etc/fstab</code> we can configure <code>automount</code> to mount the shares automatically. The difference is that mounts through <code>automount</code> are "on demand", which ensures that no file systems stay mounted when they're not needed. Mounts are triggered automatically when a user accesses the path, so no root permissions are required, contrary to mounts issued manually with the <code>mount</code> command.</p>
<p>You need to install the <code>autofs</code> package to use <code>automount</code>:</p>
<pre><code>[root@server1 ~]# yum install -y autofs
...
[root@server1 ~]# systemctl enable --now autofs
...
</code></pre><h3 id="heading-defining-mounts-in-automount">Defining Mounts in Automount</h3>
<p>Mounts in <code>automount</code> are defined through a two-step procedure:</p>
<ul>
<li>Edit the master configuration file in <code>/etc/auto.master</code> where you specify the local mount point and the secondary configuration file.</li>
<li>Edit the secondary configuration file where you specify the subdirectory that will be created in the mount point.</li>
</ul>
<p>For this exercise, we'll be using the <code>nfs_data</code> NFS share on <code>server2</code>:</p>
<pre><code>[root@server2 ~]# cat /etc/<span class="hljs-built_in">exports</span>
/nfs_users    *(rw,no_root_squash)
/nfs_data    *(rw,no_root_squash)
</code></pre><p>On <code>server1</code>, open the <code>/etc/auto.master</code> file and add the below line:  </p>
<pre><code>/nfs_data    /etc/auto.nfs_data
</code></pre><p>On <code>server1</code>, open the <code>/etc/auto.nfs_data</code> file and add the below line:  </p>
<pre><code>files -rw server2:/nfs_data
</code></pre><p>Restart the <code>autofs</code> service:</p>
<pre><code>[root@server1 /]# systemctl restart autofs
</code></pre><p>Go to the <code>/nfs_data</code> directory on <code>server1</code>, notice there is <strong>no</strong> <code>files</code> directory:</p>
<pre><code>[root@server1 nfs_data]# ls
[root@server1 nfs_data]#
</code></pre><p>Change directory to <code>/nfs_data/files</code>:</p>
<pre><code>[root@server1 nfs_data]# cd files
[root@server1 files]# ls
automount_test
</code></pre><p>The <code>/nfs_data</code> share on <code>server2</code> was auto mounted on <code>/nfs_data/files</code> on <code>server1</code>.</p>
<h3 id="heading-using-wildcards-in-automount">Using Wildcards in Automount</h3>
<p>In some cases we're better off using dynamic directory names, for example when mounting home directories. The home directory of a user would be automatically mounted when that user logs in.  </p>
<p>We'll be simulating this by using the <code>/nfs_users</code> NFS share on <code>server2</code>:</p>
<pre><code>[root@server2 ~]# cat /etc/<span class="hljs-built_in">exports</span>
/nfs_users    *(rw,no_root_squash)
/nfs_data    *(rw,no_root_squash)
</code></pre><p>First, unmount the <code>/nfs_users</code> mount point on <code>server1</code>, if you still have it mounted, and delete the directory:</p>
<pre><code>[root@server1 /]# umount /nfs_users 
[root@server1 /]# rm -rf nfs_users
</code></pre><p>Add the below line to the <code>/etc/auto.master</code> file on <code>server1</code>:</p>
<pre><code>/nfs_users      /etc/auto.nfs_users
</code></pre><p>Create the <code>/etc/auto.nfs_users</code> file and add the below:</p>
<pre><code>* -rw server2:/nfs_users/&amp;
</code></pre><p>Restart the <code>autofs</code> service:</p>
<pre><code>[root@server1 /]# systemctl restart autofs
</code></pre><p>Go to the <code>/nfs_users</code> directory and notice it's empty:</p>
<pre><code>[root@server1 /]# cd /nfs_users
[root@server1 nfs_users]# ls
[root@server1 nfs_users]#
</code></pre><p>Change directory to <code>/nfs_users/user1</code>:</p>
<pre><code>[root@server1 nfs_users]# cd user1
[root@server1 user1]# ls
user1_automount_test
[root@server1 user1]#
</code></pre><p>See how the other user folders are auto-mounted on demand:</p>
<pre><code>[root@server1 nfs_users]# ls
user1
[root@server1 nfs_users]# cd user2
[root@server1 user2]# ls
user2_automount_test
[root@server1 user2]# cd ..
[root@server1 nfs_users]# ls
user1  user2
</code></pre>]]></content:encoded></item><item><title><![CDATA[Managing a Firewall with Firewalld]]></title><description><![CDATA[{{< figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo">}}
{{< figure class="center" src="/img/firewalld.png" alt="Firewalld logo" width="200px">}}
Understanding Linux Firewalling
Firewalling is implemented in the Linux kernel by me...]]></description><link>https://blog.joerismissaert.dev/managing-a-firewall-with-firewalld</link><guid isPermaLink="true">https://blog.joerismissaert.dev/managing-a-firewall-with-firewalld</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Sun, 27 Dec 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo"&gt;}}
{{&lt; figure class="center" src="/img/firewalld.png" alt="Firewalld logo" width="200px"&gt;}}</p>
<h1 id="heading-understanding-linux-firewalling">Understanding Linux Firewalling</h1>
<p>Firewalling is implemented in the Linux kernel by means of the <a target="_blank" href="http://www.netfilter.org/">netfilter</a> subsystem to limit traffic coming in to a server or going out of the server. Netfilter allows kernel modules to inspect every incoming, outgoing, or forwarded packet and act upon it by either allowing it or blocking it. In essence, netfilter controls access to and from the network stack at the Linux kernel module level.</p>
<p>Iptables used to be the default solution for interacting with netfilter. It provides a sophisticated way of defining firewall rules, but it is also challenging to use due to its complicated syntax and rule ordering, which can become complex. The iptables service is no longer offered in RHEL 8; it has been replaced with <strong>nftables</strong>, a new solution with more advanced options.</p>
<h2 id="heading-firewalld">Firewalld</h2>
<p>Firewalld is a higher-level netfilter implementation that is more user-friendly compared to iptables or nftables. While administrators can manage the Firewalld rules, applications can also communicate with it using the DBus messaging system: rules can be added or removed without any direct action required from the system administrator. Applications can address the firewall from user space. </p>
<blockquote>
<p>Firewalld applies rules to incoming packets only by default, no filtering happens on outgoing packets.</p>
</blockquote>
<h3 id="heading-firewalld-zones">Firewalld Zones</h3>
<p>Firewalld makes management easier by working with <em>zones</em>. A zone is a collection of rules that are applied to incoming packets matching a specific source address or network interface. </p>
<p>The use of zones is important on servers that have multiple network interfaces: each interface can be assigned a different zone where different rules apply. On a machine with only one network interface you can work with a single zone, the <em>default</em> zone. </p>
<p>Every packet that comes into the system is analyzed for its source address; based on that address, Firewalld decides whether the packet belongs to a specific zone. If not, the zone of the incoming network interface is used. If no specific zone applies, the packet is handled by the rules in the <em>default</em> zone. </p>
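<p>That decision order (source match first, then the interface's zone, then the default zone) can be sketched as a toy shell function. This is purely illustrative and not how firewalld is actually implemented; the function name and arguments are made up for the example:</p>

```shell
# Toy model of firewalld's zone selection: the first zone that applies wins.
pick_zone() {
  src_zone=$1      # zone bound to the packet's source address, if any
  iface_zone=$2    # zone bound to the incoming interface, if any
  default_zone=$3
  if [ -n "$src_zone" ]; then
    echo "$src_zone"
  elif [ -n "$iface_zone" ]; then
    echo "$iface_zone"
  else
    echo "$default_zone"
  fi
}
pick_zone "dmz" "internal" "public"  # -> dmz
pick_zone ""    "internal" "public"  # -> internal
pick_zone ""    ""         "public"  # -> public
```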
<table>
<thead>
<tr><th>Zone Name</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>block</td><td>Incoming network connections are rejected with an "icmp-host-prohibited" message. Connections that were initiated on this system are allowed.</td></tr>
<tr><td>dmz</td><td>For use on computers in the demilitarized zone. Selected incoming connections are accepted, and limited access to the internal network is allowed.</td></tr>
<tr><td>drop</td><td>Any incoming packets are dropped and there is no reply.</td></tr>
<tr><td>external</td><td>For use on external networks with masquerading (Network Address Translation) enabled, used on routers. Selected incoming connections are accepted.</td></tr>
<tr><td>home</td><td>Most computers on the same network are trusted; only selected incoming connections are accepted.</td></tr>
<tr><td>internal</td><td>Most computers on the same network are trusted; only selected incoming connections are accepted.</td></tr>
<tr><td>public</td><td>Other computers on the same network are not trusted; limited connections are accepted. This is the <em>default</em> zone for all newly created network interfaces.</td></tr>
<tr><td>trusted</td><td>All network connections are accepted.</td></tr>
<tr><td>work</td><td>Most computers on the same network are trusted; only selected incoming connections are accepted.</td></tr>
</tbody>
</table>
<h3 id="heading-firewalld-services">Firewalld Services</h3>
<p>Services are the second key element when working with Firewalld.
A service in Firewalld is not the same as a service in systemd. A Firewalld service defines exactly what should be accepted as incoming traffic by the firewall; it includes the ports to be opened and supporting kernel modules that should be loaded. </p>
<p>Behind each service is an XML configuration file that explains which TCP or UDP ports are involved and, if required, what kernel modules must be loaded.  Default (RPM installed) XML files are stored in <code>/usr/lib/firewalld/services</code> while custom XML files can be added to the <code>/etc/firewalld/services</code> directory.</p>
<pre><code>[root@localhost ~]# firewall-cmd --get-services
RH-Satellite<span class="hljs-number">-6</span> amanda-client amanda-k5-client amqp amqps ...
...


[root@localhost ~]# cat /usr/lib/firewalld/services/ftp.xml 
&lt;?xml version=<span class="hljs-string">"1.0"</span> encoding=<span class="hljs-string">"utf-8"</span>?&gt;
<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">service</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">short</span>&gt;</span>FTP<span class="hljs-tag">&lt;/<span class="hljs-name">short</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">description</span>&gt;</span>FTP is a protocol used for remote file transfer. If you plan to make your FTP server publicly available, enable this option. You need the vsftpd package installed for this option to be useful.<span class="hljs-tag">&lt;/<span class="hljs-name">description</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">port</span> <span class="hljs-attr">protocol</span>=<span class="hljs-string">"tcp"</span> <span class="hljs-attr">port</span>=<span class="hljs-string">"21"</span>/&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">helper</span> <span class="hljs-attr">name</span>=<span class="hljs-string">"ftp"</span>/&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">service</span>&gt;</span></span>
</code></pre><h2 id="heading-working-with-firewalld">Working with Firewalld</h2>
<p>Firewalld provides a command-line interface tool that works with a runtime and a permanent (on-disk) configuration state: <strong>firewall-cmd</strong></p>
<p>Below is an example of how you can use the tool to retrieve current settings and make configuration changes. Always make sure to commit changes to disk using the <code>--permanent</code> flag so that your changes survive a reboot, then <code>--reload</code> to apply the changes to the runtime environment.</p>
<pre><code>[root@localhost ~]# firewall-cmd --get-<span class="hljs-keyword">default</span>-zone
public

[root@localhost ~]# firewall-cmd --get-zones
block dmz drop external home internal libvirt public trusted work

[root@localhost ~]# firewall-cmd --list-all --zone=public
public (active)
  <span class="hljs-attr">target</span>: <span class="hljs-keyword">default</span>
  icmp-block-inversion: no
  <span class="hljs-attr">interfaces</span>: enp1s0
  <span class="hljs-attr">sources</span>: 
  services: cockpit dhcpv6-client ftp http https ssh
  <span class="hljs-attr">ports</span>: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

[root@localhost ~]# firewall-cmd --get-services
RH-Satellite<span class="hljs-number">-6</span> amanda-client amanda-k5-client amqp amqps apcupsd audit bacula bacula-client bb bgp bitcoin bitcoin-rpc bitcoin-testnet
...

[root@localhost ~]# firewall-cmd --list-services
cockpit dhcpv6-client ftp http https ssh

[root@localhost ~]# firewall-cmd --add-service=vnc-server --permanent
success

[root@localhost ~]# firewall-cmd --list-services
cockpit dhcpv6-client ftp http https ssh

[root@localhost ~]# firewall-cmd --reload
success

[root@localhost ~]# firewall-cmd --list-services
cockpit dhcpv6-client ftp http https ssh vnc-server

[root@localhost ~]# firewall-cmd --add-port=<span class="hljs-number">2022</span>/tcp --permanent
success

[root@localhost ~]# firewall-cmd --reload
success

[root@localhost ~]# firewall-cmd --list-all
public (active)
  <span class="hljs-attr">target</span>: <span class="hljs-keyword">default</span>
  icmp-block-inversion: no
  <span class="hljs-attr">interfaces</span>: enp1s0
  <span class="hljs-attr">sources</span>: 
  services: cockpit dhcpv6-client ftp http https ssh vnc-server
  <span class="hljs-attr">ports</span>: <span class="hljs-number">2022</span>/tcp
  <span class="hljs-attr">protocols</span>: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
</code></pre><h4 id="heading-key-commands">Key Commands</h4>
<pre><code>firewall-cmd --list-all
firewall-cmd --list-all --zone=public

firewall-cmd --get-<span class="hljs-keyword">default</span>-zone
firewall-cmd --get-zones

firewall-cmd --get-services
firewall-cmd --list-services

firewall-cmd --add-service ftp
firewall-cmd --add-service ftp --permanent
firewall-cmd --reload

firewall-cmd --add-port=<span class="hljs-number">2022</span>/tcp --permanent
firewall-cmd --reload
</code></pre>]]></content:encoded></item><item><title><![CDATA[Enhancing Linux Security with SELinux]]></title><description><![CDATA[{{< figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo">}}
{{< figure class="center" src="/img/selinux.png" alt="SELinux logo" width="200px">}}
SELinux is a security enhancement module, deployed on top of Linux, which provides impro...]]></description><link>https://blog.joerismissaert.dev/enhancing-linux-security-with-selinux</link><guid isPermaLink="true">https://blog.joerismissaert.dev/enhancing-linux-security-with-selinux</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Wed, 16 Dec 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; figure class="center" src="/img/redhat-8-logo.png" alt="Red Hat logo"&gt;}}
{{&lt; figure class="center" src="/img/selinux.png" alt="SELinux logo" width="200px"&gt;}}</p>
<p>SELinux is a security enhancement module, deployed on top of Linux, which provides improved security via Role Based Access Controls (RBACs) on subjects and objects (processes and resources). Traditional Linux security used Discretionary Access Controls (DACs).</p>
<p>With DAC, a process can access any file, directory, device, or other resource that leaves itself open to access. With RBAC, a process only has access to resources that it is explicitly allowed to access, based on its assigned role. SELinux implements RBAC by assigning an SELinux policy to a process. That policy restricts access as follows:</p>
<ul>
<li>Only let the process access resources that carry the explicit labels</li>
<li>Make potentially insecure features, e.g. write access to a directory, available as Booleans, which can be turned on or off.</li>
</ul>
<p>SELinux is not a replacement for DAC, it's an additional security layer:</p>
<ul>
<li>DAC rules are still used when using SELinux;</li>
<li>DAC rules are checked first, if those allow access then SELinux policies are checked;</li>
<li>If DAC rules deny access then SELinux policies are not checked.</li>
</ul>
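<p>The check order above can be summarized in a toy sketch: SELinux is only consulted once DAC has already allowed the access. This is an illustrative shell function, not actual kernel logic; the function name is made up for the example:</p>

```shell
# Toy model of the access decision: DAC rules first, SELinux policy second.
check_access() {
  dac=$1; selinux=$2
  if [ "$dac" != "allow" ]; then
    echo "denied by DAC (SELinux never consulted)"
  elif [ "$selinux" != "allow" ]; then
    echo "denied by SELinux"
  else
    echo "allowed"
  fi
}
check_access deny  allow   # -> denied by DAC (SELinux never consulted)
check_access allow deny    # -> denied by SELinux
check_access allow allow   # -> allowed
```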
<p>In essence, SELinux severely limits what potentially malicious code may gain access to and generally limits activity on the Linux system.</p>
<h1 id="heading-understanding-how-selinux-works">Understanding How SELinux Works</h1>
<p>SELinux provides a combination of Role Based Access Control and either <em>Type Enforcement</em> (TE) or <em>Multi-Level Security</em> (MLS). In RBAC, access to an object is granted or denied based on the subject's assigned role in the organization. It's not based on usernames or process ID. In this post I will focus only on Type Enforcement, which is the default SELinux <em>targeted policy</em>.</p>
<h2 id="heading-type-enforcement">Type Enforcement</h2>
<p>Type Enforcement is necessary to implement the RBAC model, it secures a system through these methods:</p>
<ul>
<li>Labeling objects as certain security types;</li>
<li>Assigning subjects to particular domains and roles;</li>
<li>Providing rules to allow certain domains and roles to access certain object types.</li>
</ul>
<p>Let's look at an example.
The below <code>ls -l</code> command shows the DAC controls on the files. The output shows the file's owner, group and permissions:</p>
<pre><code>[student@localhost my_stuff]$ ls -l
total <span class="hljs-number">0</span>
-rw-rw-r--. <span class="hljs-number">1</span> student student <span class="hljs-number">0</span> Jan <span class="hljs-number">19</span> <span class="hljs-number">06</span>:<span class="hljs-number">25</span> test001
</code></pre><p>We can add the <code>-Z</code> option to display the SELinux RBAC controls too:</p>
<pre><code>[student@localhost my_stuff]$ ls -lZ
total <span class="hljs-number">0</span>
-rw-rw-r--. <span class="hljs-number">1</span> student student unconfined_u:object_r:user_home_t:s0 <span class="hljs-number">0</span> Jan <span class="hljs-number">19</span> <span class="hljs-number">06</span>:<span class="hljs-number">25</span> test001
</code></pre><p>The last example displays four items associated with the file that are specific to SELinux:</p>
<ul>
<li><strong>user</strong> <code>unconfined_u</code></li>
<li><strong>role</strong> <code>object_r</code></li>
<li><strong>type</strong> <code>user_home_t</code></li>
<li><strong>level</strong> <code>s0</code></li>
</ul>
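<p>A security context is simply four colon-separated fields, so it is easy to pull apart in a shell one-liner (using the example context from the listing above; note that MLS ranges can add further colon-separated category fields, so this sketch assumes the simple <code>s0</code> form):</p>

```shell
# Split a security context string into its user, role, type and level fields.
ctx="unconfined_u:object_r:user_home_t:s0"
echo "$ctx" | awk -F: '{print "user=" $1, "role=" $2, "type=" $3, "level=" $4}'
# -> user=unconfined_u role=object_r type=user_home_t level=s0
```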
<p>The above four RBAC items are used in the SELinux access control  to determine appropriate access levels. Together, these items are called the SELinux <em>security context</em> or sometimes the <em>security label</em>.</p>
<p>These security contexts are given to subjects (processes and users). Each security context has a specific name, which depends on what object or subject it has been assigned to: files have a file context, users have a user context, and processes have a process context, also referred to as a domain.</p>
<p>The rules allowing access are called allow rules or policy rules. A policy rule is the process SELinux follows to grant or deny access to a particular system security type. Thus, Type Enforcement ensures that only certain "types" of subjects can access certain "types" of objects.</p>
<h2 id="heading-implementing-selinux-security-models">Implementing SELinux Security Models</h2>
<p>SELinux implements the RBAC model through a combination of four primary SELinux pieces:</p>
<ul>
<li>Operational modes</li>
<li>Security contexts</li>
<li>Policy types</li>
<li>Policy rule packages</li>
</ul>
<p>We already touched on some of these design elements.</p>
<h3 id="heading-understanding-selinux-operational-modes">Understanding SELinux Operational Modes</h3>
<p>SELinux comes with three operational modes: <em>disabled, permissive</em> and <em>enforcing</em>.
Each of these modes offers different benefits for Linux system security.</p>
<h4 id="heading-using-disabled-mode">Using Disabled Mode</h4>
<p>In the <em>disabled</em> mode, SELinux is turned off. The default method of access control, Discretionary Access Control, is used instead.</p>
<h4 id="heading-using-permissive-mode">Using Permissive Mode</h4>
<p>In <em>permissive</em> mode, SELinux is turned on, but the security policy rules are not enforced. When a security policy rule should deny access, access will still be allowed. However, a message is sent to a log file denoting that access should've been denied.</p>
<p>SELinux permissive mode is useful for testing and troubleshooting. </p>
<h4 id="heading-using-enforcing-mode">Using Enforcing Mode</h4>
<p>In <em>enforcing</em> mode SELinux is turned on and all of the security policy rules are enforced.</p>
<h3 id="heading-understanding-selinux-security-contexts">Understanding SELinux Security Contexts</h3>
<p>An SELinux security context is the method used to classify objects (such as files) and subjects (such as users or programs). A security context consists of four attributes: <code>user</code>, <code>role</code>, <code>type</code> and <code>level</code>.</p>
<ul>
<li><p><strong>User</strong> - The <code>user</code> attribute is a mapping of a Linux username to an SELinux name. This is not the same as a user's login name, and it's referred to specifically as the SELinux user. The SELinux username ends with a <code>_u</code>, making it easy to identify in the output. Regular unconfined users have an <code>unconfined_u</code> user attribute in the default targeted policy.</p>
</li>
<li><p><strong>Role</strong> - The <code>role</code> attribute is assigned to subjects and objects. Each role is granted access to other subjects and objects based on the role's security clearance and the object's classification level. Users are assigned a role and that role is authorized for particular types of domains (or process context). The SELinux role has <code>_r</code> at the end. Processes run by <code>root</code> have a <code>system_r</code> role, while regular users run processes under the <code>unconfined_r</code> role.</p>
</li>
<li><p><strong>Type</strong> - The <code>type</code> attribute defines a domain type for processes, a user type for users, and a file type for files. This attribute is also called the security type. Most policy rules are concerned with the security type of a process and what files, ports, devices and other resources that process has access to based on their security types. The SELinux type name ends with a <code>_t</code>.</p>
</li>
<li><p><strong>Level</strong> - The <code>level</code> is an attribute of Multi-Level Security (MLS) and is optional in Type Enforcement. </p>
</li>
</ul>
<h4 id="heading-users-files-and-processes-have-security-contexts">Users, Files, and Processes Have Security Contexts</h4>
<p>To see your SELinux user context, enter the <code>id</code> command at the shell prompt:</p>
<pre><code>[student@localhost ~]$ id
uid=<span class="hljs-number">1000</span>(student) gid=<span class="hljs-number">1000</span>(student) groups=<span class="hljs-number">1000</span>(student) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[student@localhost ~]$
</code></pre><p>Use the <code>-Z</code> option on the <code>ls</code> command to see an individual file's context:</p>
<pre><code>[student@localhost my_stuff]$ ls -lZ
total <span class="hljs-number">0</span>
-rw-rw-r--. <span class="hljs-number">1</span> student student unconfined_u:object_r:user_home_t:s0 <span class="hljs-number">0</span> Jan <span class="hljs-number">19</span> <span class="hljs-number">06</span>:<span class="hljs-number">25</span> test001
</code></pre><p>Use the <code>-Z</code> option on the <code>ps</code> command to see a process's security context:</p>
<pre><code>[student@localhost my_stuff]$ ps -eZ | grep bash
<span class="hljs-attr">unconfined_u</span>:unconfined_r:unconfined_t:s0-s0:c0.c1023 <span class="hljs-number">2872</span> pts/<span class="hljs-number">0</span> <span class="hljs-number">00</span>:<span class="hljs-number">00</span>:<span class="hljs-number">00</span> bash

[student@localhost my_stuff]$ ps -eZ | grep systemd
<span class="hljs-attr">system_u</span>:system_r:init_t:s0           <span class="hljs-number">1</span> ?        <span class="hljs-number">00</span>:<span class="hljs-number">00</span>:<span class="hljs-number">01</span> systemd
<span class="hljs-attr">system_u</span>:system_r:syslogd_t:s0      <span class="hljs-number">638</span> ?        <span class="hljs-number">00</span>:<span class="hljs-number">00</span>:<span class="hljs-number">00</span> systemd-journal
</code></pre><h3 id="heading-understanding-selinux-policy-types">Understanding SELinux Policy Types</h3>
<p>The policy type directly determines what sets of policy rules are used to dictate what an object can access. The policy type also determines what specific security context attributes are needed.</p>
<p>SELinux has different policies:</p>
<ul>
<li>Targeted (default)</li>
<li>MLS</li>
<li>Minimum</li>
</ul>
<p>The <em>Targeted policy's</em> primary purpose is to restrict "targeted" daemons, but it can also restrict other processes and users. Targeted daemons are sandboxed: they run in an environment where their access to other objects is tightly controlled, so that malicious attacks launched through those daemons cannot affect other services or the Linux system as a whole. </p>
<p>All subjects and objects not targeted run in the <code>unconfined_t</code> domain. This domain has no SELinux policy restrictions and thus relies only on traditional Linux security.</p>
<h3 id="heading-selinux-policy-rule-packages">SELinux Policy Rule Packages</h3>
<p>Policy rules are installed with SELinux and are grouped into packages, also called modules.</p>
<p>There is user documentation on these various policy modules in the form of HTML files. To view this documentation on RHEL, open your browser and enter the following url:
<code>file:///usr/share/doc/selinux-policy/html/index.html</code></p>
<p>If you don't have the policy documentation you can install it:
<code>yum install selinux-policy-doc</code></p>
<p>This documentation allows you to review how policy rules are created and packaged.</p>
<h1 id="heading-configuring-selinux">Configuring SELinux</h1>
<p>SELinux comes preconfigured, so you can use the SELinux features without any additional configuration.
The configuration can only be set and modified by <code>root</code>. The primary configuration file is <code>/etc/sysconfig/selinux</code> which is a symlink to <code>/etc/selinux/config</code>:</p>
<pre><code>[root@localhost ~]# ls -lh /etc/sysconfig/selinux 
lrwxrwxrwx. <span class="hljs-number">1</span> root root <span class="hljs-number">17</span> Sep <span class="hljs-number">26</span> <span class="hljs-number">09</span>:<span class="hljs-number">44</span> /etc/sysconfig/selinux -&gt; ../selinux/config
[root@localhost ~]# cat /etc/sysconfig/selinux 

# This file controls the state <span class="hljs-keyword">of</span> SELinux on the system.
# SELINUX= can take one <span class="hljs-keyword">of</span> these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead <span class="hljs-keyword">of</span> enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one <span class="hljs-keyword">of</span> these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification <span class="hljs-keyword">of</span> targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
</code></pre><p>This file allows you to set the mode and policy type.</p>
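<p>Because the values in this file are plain <code>KEY=value</code> pairs, a script can read them without any SELinux tooling. The sketch below writes a stand-in copy of the file to a temporary path and parses it; on a real system you would point it at <code>/etc/selinux/config</code> instead.</p>

```shell
# Parse SELINUX= and SELINUXTYPE= from a file in /etc/selinux/config
# format. A temporary stand-in file is used here so the sketch does
# not depend on a real SELinux system.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Match only lines starting with the exact key, then take the value.
mode=$(grep '^SELINUX=' "$cfg" | cut -d= -f2)
policy=$(grep '^SELINUXTYPE=' "$cfg" | cut -d= -f2)
echo "mode=$mode policy=$policy"
# → mode=enforcing policy=targeted
rm -f "$cfg"
```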
<h2 id="heading-setting-the-selinux-mode-and-policy-type">Setting the SELinux Mode and Policy Type</h2>
<p>We can use the <code>getenforce</code> command to see the <em>current</em> SELinux mode. To see both the current mode and the mode set in the configuration file, use the <code>sestatus</code> command:</p>
<pre><code>[root@localhost ~]# getenforce
Enforcing

[root@localhost ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                <span class="hljs-regexp">/sys/</span>fs/selinux
SELinux root directory:         <span class="hljs-regexp">/etc/</span>selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode <span class="hljs-keyword">from</span> config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      <span class="hljs-number">31</span>
</code></pre><p>To change the mode setting, use the <code>setenforce</code> command with either <code>0</code> or <code>permissive</code>, or <code>1</code> or <code>enforcing</code>, as its argument.
This changes the SELinux mode at runtime and leaves the setting in the primary configuration file untouched; rebooting the system reapplies the mode set in the primary configuration file.</p>
<blockquote>
<p>You cannot use <code>setenforce</code> to change SELinux to disabled mode.</p>
</blockquote>
<p>Switching from <code>disabled</code> to <code>enforcing</code> should be done through the primary configuration file followed by a reboot; using the <code>setenforce</code> command instead may hang your system due to incorrect file labels. The reboot after leaving <code>disabled</code> mode may take a while because the file system is relabeled.  </p>
<p>During relabeling, SELinux corrects the security context of any file whose current context would cause problems in the new mode, and assigns a context to any file that has none. This process can take a long time since every file's context is checked.</p>
<p>The policy type you choose determines whether SELinux enforces TE, MLS or Minimum. The default policy type is <code>targeted</code>.
When setting the policy type to MLS or Minimum you need to make sure you have the policy package installed:
<code>yum list selinux-policy-mls selinux-policy-minimum</code></p>
<h2 id="heading-managing-selinux-security-contexts">Managing SELinux Security Contexts</h2>
<p>Current SELinux file and process security contexts can be viewed using the <code>secon</code> command:</p>
<ul>
<li><code>-u</code> Shows the user of the security context.</li>
<li><code>-r</code> Shows the role of the security context.</li>
<li><code>-t</code> Shows the type of the security context.</li>
</ul>
<p>Without any arguments, the command shows you the current process's security context:</p>
<pre><code>[student@localhost ~]$ secon -urt
<span class="hljs-attr">user</span>: unconfined_u
<span class="hljs-attr">role</span>: unconfined_r
<span class="hljs-attr">type</span>: unconfined_t
</code></pre><p>To view another process's security context, use the <code>-p</code> option followed by the process id.
e.g. <code>systemd</code>:</p>
<pre><code>[student@localhost ~]$ secon -urt -p <span class="hljs-number">1</span>
<span class="hljs-attr">user</span>: system_u
<span class="hljs-attr">role</span>: system_r
<span class="hljs-attr">type</span>: init_t
</code></pre><p>To view a file's security context, use the <code>-f</code> option:</p>
<pre><code>[student@localhost ~]$ secon -urt -f /etc/passwd
<span class="hljs-attr">user</span>: system_u
<span class="hljs-attr">role</span>: object_r
<span class="hljs-attr">type</span>: passwd_file_t
</code></pre><blockquote>
<p>The <code>secon</code> command does not show the security context for the current user; use the <code>id</code> command instead.</p>
</blockquote>
<h3 id="heading-setting-security-context-types">Setting Security Context Types</h3>
<blockquote>
<p>Since the RHCSA exam focuses only on context types, I will not be covering the user and role contexts.</p>
</blockquote>
<p>To set a context type we can use the <code>semanage</code> command.
<code>semanage</code> writes the new context to the SELinux policy from where it can be applied to the file system.</p>
<p>The <code>semanage</code> command may not be available by default. You can find the RPM containing <code>semanage</code> using <code>yum whatprovides */semanage</code>:</p>
<pre><code>[root@localhost ~]# yum whatprovides */semanage
policycoreutils-python-utils<span class="hljs-number">-2.9</span><span class="hljs-number">-9.</span>el8.noarch : SELinux policy core python utilities
<span class="hljs-attr">Repo</span>        : BaseOS
Matched <span class="hljs-keyword">from</span>:
Filename    : <span class="hljs-regexp">/usr/</span>sbin/semanage
<span class="hljs-attr">Filename</span>    : <span class="hljs-regexp">/usr/</span>share/bash-completion/completions/semanage
</code></pre><p>The <code>policycoreutils-python-utils</code> package has to be installed in order to use <code>semanage</code>.</p>
<p>To set context using <code>semanage</code> we need to know the appropriate context type. An easy way to find the appropriate context is by looking at the default context settings on already-existing items:</p>
<pre><code>[root@localhost ~]# ls -lZ /<span class="hljs-keyword">var</span>/www
total <span class="hljs-number">0</span>
drwxr-xr-x. <span class="hljs-number">2</span> root root system_u:object_r:httpd_sys_script_exec_t:s0  <span class="hljs-number">6</span> Jun  <span class="hljs-number">8</span>  <span class="hljs-number">2020</span> cgi-bin
drwxr-xr-x. <span class="hljs-number">4</span> root root system_u:object_r:httpd_sys_content_t:s0     <span class="hljs-number">61</span> Jan  <span class="hljs-number">6</span> <span class="hljs-number">12</span>:<span class="hljs-number">19</span> html
</code></pre><p><code>/var/www/html</code> is the default location for the Apache HTTP Service. If we want to add a new folder to <code>/var/www</code> to serve content with Apache, we now know we need the <code>httpd_sys_content_t</code> context type. </p>
<p>For demonstration purposes, let's create the <code>my_dir</code> directory in our home folder and then move it to <code>/var/www</code>. We take this detour because a directory created directly in <code>/var/www</code> would inherit the correct context type from its parent directory.</p>
<pre><code>[root@localhost ~]# ls -lZ /<span class="hljs-keyword">var</span>/www
total <span class="hljs-number">0</span>
drwxr-xr-x. <span class="hljs-number">2</span> root root system_u:object_r:httpd_sys_script_exec_t:s0  <span class="hljs-number">6</span> Jun  <span class="hljs-number">8</span>  <span class="hljs-number">2020</span> cgi-bin
drwxr-xr-x. <span class="hljs-number">4</span> root root system_u:object_r:httpd_sys_content_t:s0     <span class="hljs-number">61</span> Jan  <span class="hljs-number">6</span> <span class="hljs-number">12</span>:<span class="hljs-number">19</span> html
drwxr-xr-x. <span class="hljs-number">2</span> root root unconfined_u:object_r:admin_home_t:s0         <span class="hljs-number">6</span> Jan <span class="hljs-number">20</span> <span class="hljs-number">11</span>:<span class="hljs-number">44</span> my_dir
</code></pre><p>The <code>mv</code> command kept the <code>admin_home_t</code> context type on our directory.
We can change the context type as follows:</p>
<pre><code>[root@localhost ~]# semanage fcontext -a -t httpd_sys_content_t <span class="hljs-string">"/var/www/my_dir(/.*)?"</span>
[root@localhost ~]# ls -lZd /<span class="hljs-keyword">var</span>/www/my_dir
drwxr-xr-x. <span class="hljs-number">2</span> root root unconfined_u:object_r:admin_home_t:s0 <span class="hljs-number">6</span> Jan <span class="hljs-number">20</span> <span class="hljs-number">11</span>:<span class="hljs-number">44</span> /<span class="hljs-keyword">var</span>/www/my_dir
</code></pre><p>The <code>-a</code> option is used to add a context type, then we use <code>-t</code> to specify the context type. The last part of the command indicates the folder we apply the changes to and contains a regular expression, <code>(/.*)?</code>, to refer to the directory <code>my_dir</code> and anything that exists below that directory.</p>
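<p>The <code>(/.*)?</code> part of that expression is worth testing in isolation. The sketch below uses <code>grep -E</code> to show which paths the pattern matches; it only exercises the regular expression itself, not <code>semanage</code>, and adds anchors because the whole path must match.</p>

```shell
# Show which paths match the file-context pattern used above.
# Anchors are added because the pattern must cover the whole path.
pat='^/var/www/my_dir(/.*)?$'

for p in /var/www/my_dir /var/www/my_dir/index.html /var/www/my_dir2; do
  if printf '%s\n' "$p" | grep -Eq "$pat"; then
    echo "$p: match"
  else
    echo "$p: no match"
  fi
done
# → /var/www/my_dir: match
# → /var/www/my_dir/index.html: match
# → /var/www/my_dir2: no match
```

<p>Note how <code>/var/www/my_dir2</code> does not match: the optional group must either be absent or start with a <code>/</code>, so the pattern covers the directory and its contents but nothing with a merely similar name.</p>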
<p>Notice how the <code>semanage</code> command didn't provide any output, and our <code>ls -lZd</code> command still shows the original context type.
This is because <code>semanage</code> only applied the context type to the SELinux policy, not to the file system. We apply the change to the file system using <code>restorecon</code>:</p>
<pre><code>[root@localhost ~]# restorecon -R -v /<span class="hljs-keyword">var</span>/www/my_dir
Relabeled /<span class="hljs-keyword">var</span>/www/my_dir <span class="hljs-keyword">from</span> unconfined_u:object_r:admin_home_t:s0 to unconfined_u:object_r:httpd_sys_content_t:s0
[root@localhost ~]# ls -lZd /<span class="hljs-keyword">var</span>/www/my_dir
drwxr-xr-x. <span class="hljs-number">2</span> root root unconfined_u:object_r:httpd_sys_content_t:s0 <span class="hljs-number">6</span> Jan <span class="hljs-number">20</span> <span class="hljs-number">11</span>:<span class="hljs-number">44</span> /<span class="hljs-keyword">var</span>/www/my_dir
</code></pre><p>The following example changes the SELinux context type on a network port, assuming you want to make the <code>ssh</code> service available on port 2222.</p>
<pre><code>[root@localhost ~]# semanage port -l | grep ssh
ssh_port_t                     tcp      <span class="hljs-number">22</span>
[root@localhost ~]# semanage port -a -t ssh_port_t -p tcp <span class="hljs-number">2222</span>
[root@localhost ~]# semanage port -l | grep ssh
ssh_port_t                     tcp      <span class="hljs-number">2222</span>, <span class="hljs-number">22</span>
</code></pre><h3 id="heading-finding-the-context-type-you-need">Finding the Context Type You Need</h3>
<p>There are three approaches to finding the context type you need:</p>
<ul>
<li>Look at the default environment;</li>
<li>Read the configuration files;</li>
<li>Use <code>man -k _selinux</code> to find the SELinux-specific man pages for your service.</li>
</ul>
<p>The man pages are not installed by default; to get them you need to install the <code>policycoreutils-devel</code> package. Once it is installed, issue the <code>sepolicy manpage -a -p /usr/share/man/man8</code> command to generate the SELinux man pages, then run <code>mandb</code> to update the man page database:</p>
<pre><code>[root@localhost ~]# yum whatprovides */sepolicy
policycoreutils-devel<span class="hljs-number">-2.9</span><span class="hljs-number">-9.</span>el8.i686 : SELinux policy core policy devel utilities
<span class="hljs-attr">Repo</span>        : BaseOS
Matched <span class="hljs-keyword">from</span>:
Filename    : <span class="hljs-regexp">/usr/</span>bin/sepolicy
<span class="hljs-attr">Filename</span>    : <span class="hljs-regexp">/usr/</span>share/bash-completion/completions/sepolicy

[root@localhost ~]# yum install -y policycoreutils-devel
...

[root@localhost ~]# sepolicy manpage -a -p /usr/share/man/man8
...

[root@localhost ~]# mandb
...

[root@localhost ~]# man -k _selinux | grep http
apache_selinux (<span class="hljs-number">8</span>)   - Security Enhanced Linux Policy <span class="hljs-keyword">for</span> the httpd processes
httpd_helper_selinux (<span class="hljs-number">8</span>) - Security Enhanced Linux Policy <span class="hljs-keyword">for</span> the httpd_helper processes
httpd_passwd_selinux (<span class="hljs-number">8</span>) - Security Enhanced Linux Policy <span class="hljs-keyword">for</span> the httpd_passwd processes
...

[root@localhost ~]# man apache_selinux
</code></pre><h3 id="heading-restoring-default-file-contexts">Restoring Default File Contexts</h3>
<p>Previously, we applied the context type from the policy to the file system using the <code>restorecon</code> command. The policy contains the default settings for most files and directories, so if ever a wrong context setting is applied we can use <code>restorecon</code> to reapply the default from the policy to the file system.</p>
<p>Using <code>restorecon</code> this way can be useful to fix problems on new files. There's a specific way context settings are applied:</p>
<ul>
<li>If a new file or directory is created, it inherits the context type of the parent directory.</li>
<li>If a file or directory is copied, this is considered a new file or directory.</li>
<li>If a file is moved, or copied using <code>cp -a</code> and thus keeping properties, the original context type is applied.</li>
</ul>
<p>The last of these three cases can be fixed using <code>restorecon</code>. It's also possible to relabel the entire file system using <code>restorecon -Rv /</code> or by creating the file <code>/.autorelabel</code> in the root <code>/</code>. On the next reboot, the system discovers the <code>/.autorelabel</code> file and relabels the entire file system.</p>
<h3 id="heading-managing-selinux-via-booleans">Managing SELinux via Booleans</h3>
<p>SELinux Booleans are provided to easily change the behaviour of a rule. A Boolean is a switch that toggles a setting on or off and it allows you to change parts of a SELinux policy rule without any knowledge of policy writing. These changes are applied during runtime and do not require a reboot. </p>
<p>You can get a list of Booleans using the <code>getsebool -a</code> command and filtering that down using <code>grep</code>:</p>
<pre><code>[root@localhost ~]# getsebool -a | grep httpd
httpd_anon_write --&gt; off
httpd_builtin_scripting --&gt; on
httpd_can_check_spam --&gt; off
httpd_can_connect_ftp --&gt; off
httpd_can_connect_ldap --&gt; off
httpd_can_connect_mythtv --&gt; off
httpd_can_connect_zabbix --&gt; off
...
</code></pre><p>The <code>semanage boolean -l</code> command provides more detail: it shows both the current setting and the default one.</p>
<pre><code>[root@localhost ~]# semanage boolean -l | head
SELinux boolean                State  Default Description

abrt_anon_write                (off  ,  off)  Allow ABRT to modify public files used <span class="hljs-keyword">for</span> public file transfer services.
abrt_handle_event              (off  ,  off)  Determine whether ABRT can run <span class="hljs-keyword">in</span> the abrt_handle_event_t domain to handle ABRT event scripts.
</code></pre><p>To set a Boolean we use <code>setsebool</code> and to apply the change permanently we add the <code>-P</code> option:</p>
<pre><code>[root@localhost ~]# getsebool -a | grep ftpd
ftpd_anon_write --&gt; off
...

[root@localhost ~]# setsebool ftpd_anon_write on
[root@localhost ~]# semanage boolean -l | grep ftpd_anon
ftpd_anon_write                (on   ,  off)  Determine whether ftpd can modify public files used <span class="hljs-keyword">for</span> public file transfer services. Directories/Files must be labeled public_content_rw_t.

[root@localhost ~]# setsebool -P ftpd_anon_write on
[root@localhost ~]# semanage boolean -l | grep ftpd_anon
ftpd_anon_write                (on   ,   on)  Determine whether ftpd can modify public files used <span class="hljs-keyword">for</span> public file transfer services. Directories/Files must be labeled public_content_rw_t.
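<p>The <code>name --&gt; state</code> output format of <code>getsebool</code> is easy to post-process in scripts. The sketch below filters sample lines, standing in for real <code>getsebool -a</code> output, down to the Booleans that are switched on.</p>

```shell
# Filter getsebool-style "name --> state" lines to those that are on.
# The sample text stands in for real `getsebool -a` output.
sample='httpd_anon_write --> off
httpd_builtin_scripting --> on
ftpd_anon_write --> off'

# Field 1 is the Boolean name, field 3 its state.
printf '%s\n' "$sample" | awk '$3 == "on" { print $1 }'
# → httpd_builtin_scripting
```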
</code></pre><h2 id="heading-troubleshooting-selinux-policy-violations">Troubleshooting SELinux Policy Violations</h2>
<p>SELinux logs everything it does. The primary source of logging information is the audit log in <code>/var/log/audit/audit.log</code>. Messages are logged with <code>type=AVC</code>, which stands for <em>Access Vector Cache</em>. </p>
<pre><code>[root@localhost ~]# grep AVC /<span class="hljs-keyword">var</span>/log/audit/audit.log | tail <span class="hljs-number">-1</span>
type=AVC msg=audit(<span class="hljs-number">1611246770.937</span>:<span class="hljs-number">136</span>): avc:  denied  { getattr } <span class="hljs-keyword">for</span>  pid=<span class="hljs-number">4178</span> comm=<span class="hljs-string">"httpd"</span> path=<span class="hljs-string">"/test/index.html"</span> dev=<span class="hljs-string">"dm-0"</span> ino=<span class="hljs-number">35157701</span> scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=file permissive=<span class="hljs-number">0</span>
</code></pre><p>The first relevant part of the output is the text <code>avc:  denied  { getattr }</code>. This means some process tried to read the attributes of a file and was denied access.
Further down we see <code>comm="httpd"</code>, meaning the command that issued the getattr request was <code>httpd</code>. Next, we see <code>path="/test/index.html"</code>, which is the file this process tried to access.</p>
<p>In the last part we see information about the source context and the target context:
<code>scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:default_t:s0</code></p>
<blockquote>
<p><code>default_t</code> is used for files that do not match any pattern in the SELinux policy. I created <code>/test/index.html</code> in the root and SELinux doesn't know what security context to give to this file, so it assigned <code>default_t</code>.</p>
</blockquote>
<p>We also see that Permissive mode is disabled:
<code>permissive=0</code></p>
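<p>When scripting over the audit log, fields like <code>scontext=</code> and <code>tcontext=</code> can be pulled out with <code>grep -o</code>. The sketch below runs against a shortened copy of the AVC record quoted above rather than a live <code>audit.log</code>.</p>

```shell
# Extract the source and target contexts from an AVC denial record.
# The record is a shortened stand-in for a real audit.log line.
avc='type=AVC msg=audit(1611246770.937:136): avc:  denied  { getattr } for  pid=4178 comm="httpd" path="/test/index.html" scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=file permissive=0'

# -o prints only the matching part; [^ ]* runs to the next space.
printf '%s\n' "$avc" | grep -o 'scontext=[^ ]*'
# → scontext=system_u:system_r:httpd_t:s0
printf '%s\n' "$avc" | grep -o 'tcontext=[^ ]*'
# → tcontext=unconfined_u:object_r:default_t:s0
```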
<p>The issue here is that the SELinux policy denies access from the <code>httpd_t</code> security context to the <code>default_t</code> security context.
We can solve this issue by setting the correct target security context:</p>
<pre><code>[root@localhost ~]# semanage fcontext -a -t httpd_sys_content_t <span class="hljs-string">"/test(/.*)?"</span>
[root@localhost ~]# restorecon -Rv /test
Relabeled /test <span class="hljs-keyword">from</span> unconfined_u:object_r:default_t:s0 to unconfined_u:object_r:httpd_sys_content_t:s0
Relabeled /test/index.html <span class="hljs-keyword">from</span> unconfined_u:object_r:default_t:s0 to unconfined_u:object_r:httpd_sys_content_t:s0
</code></pre><h3 id="heading-analyzing-selinux-with-sealert">Analyzing SELinux with Sealert</h3>
<p>We can use <code>sealert</code> to more easily understand the SELinux messages in <code>/var/log/audit/audit.log</code>.
First, you need to install <code>sealert</code>: <code>yum install setroubleshoot-server</code></p>
<p>Once this is installed, issue the <code>journalctl | grep sealert</code> command:</p>
<pre><code>Jan <span class="hljs-number">21</span> <span class="hljs-number">11</span>:<span class="hljs-number">32</span>:<span class="hljs-number">57</span> localhost.localdomain setroubleshoot[<span class="hljs-number">4395</span>]: SELinux is preventing httpd <span class="hljs-keyword">from</span> getattr access on the file /test/index.html. For complete SELinux messages run: sealert -l e4fc58ab-c1c0<span class="hljs-number">-4525</span>-a955-eff9a5570a7c
Jan <span class="hljs-number">21</span> <span class="hljs-number">11</span>:<span class="hljs-number">33</span>:<span class="hljs-number">00</span> localhost.localdomain setroubleshoot[<span class="hljs-number">4395</span>]: SELinux is preventing httpd <span class="hljs-keyword">from</span> getattr access on the file /test/index.html. For complete SELinux messages run: sealert -l e4fc58ab-c1c0<span class="hljs-number">-4525</span>-a955-eff9a5570a7c
</code></pre><p>Follow the instructions and run <code>sealert -l UUID</code>.
<code>sealert</code> analyzes what happened and provides suggestions on what you need to do to fix the problem. Each suggestion has a confidence score; the higher the score, the more likely the suggested solution is applicable. </p>
<pre><code>[root@localhost ~]# sealert -l e4fc58ab-c1c0<span class="hljs-number">-4525</span>-a955-eff9a5570a7c
SELinux is preventing httpd <span class="hljs-keyword">from</span> getattr access on the file /test/index.html.

*****  Plugin catchall_labels (<span class="hljs-number">83.8</span> confidence) suggests   *******************

If you want to allow httpd to have getattr access on the index.html file
Then you need to change the label on /test/index.html
Do
# semanage fcontext -a -t FILE_TYPE <span class="hljs-string">'/test/index.html'</span>
...
</code></pre>]]></content:encoded></item><item><title><![CDATA[Network Services - Managing Apache HTTP]]></title><description><![CDATA[{{< image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" >}}
{{< image src="/img/apache.png" alt="Apache logo" position="center" >}}
Managing Apache HTTP Services is not part of the current RHCSA exam objectives, but we need minima...]]></description><link>https://blog.joerismissaert.dev/network-services-managing-apache-http</link><guid isPermaLink="true">https://blog.joerismissaert.dev/network-services-managing-apache-http</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Mon, 09 Nov 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" &gt;}}
{{&lt; image src="/img/apache.png" alt="Apache logo" position="center" &gt;}}</p>
<p>Managing Apache HTTP Services is not part of the current RHCSA exam objectives, but we need minimal knowledge on this topic in order to master the SELinux-related objectives later on. </p>
<p>The Apache server is provided through different software packages. The basic package is <code>httpd</code>, which contains everything for an operational but basic website. For a complete overview of all the packages, use <code>yum search httpd</code>. </p>
<h1 id="heading-understanding-the-httpd-package">Understanding the httpd Package</h1>
<p>Let's examine the <code>httpd</code> package by downloading it using <code>yumdownloader</code> and running a few rpm commands on it:</p>
<pre><code>[root@localhost ~]# yumdownloader httpd
Last metadata expiration check: <span class="hljs-number">0</span>:<span class="hljs-number">00</span>:<span class="hljs-number">31</span> ago on Tue <span class="hljs-number">06</span> Oct <span class="hljs-number">2020</span> <span class="hljs-number">11</span>:<span class="hljs-number">05</span>:<span class="hljs-number">52</span> AM EST.
[root@localhost ~]# ls
anaconda-ks.cfg  httpd<span class="hljs-number">-2.4</span><span class="hljs-number">.37</span><span class="hljs-number">-21.</span>module_el8<span class="hljs-number">.2</span><span class="hljs-number">.0</span>+<span class="hljs-number">382</span>+<span class="hljs-number">15</span>b0afa8.x86_64.rpm  initial-setup-ks.cfg
[root@localhost ~]# rpm -qpi httpd<span class="hljs-number">-2.4</span><span class="hljs-number">.37</span><span class="hljs-number">-21.</span>module_el8<span class="hljs-number">.2</span><span class="hljs-number">.0</span>+<span class="hljs-number">382</span>+<span class="hljs-number">15</span>b0afa8.x86_64.rpm 
<span class="hljs-attr">Name</span>        : httpd
<span class="hljs-attr">Version</span>     : <span class="hljs-number">2.4</span><span class="hljs-number">.37</span>
<span class="hljs-attr">Release</span>     : <span class="hljs-number">21.</span>module_el8<span class="hljs-number">.2</span><span class="hljs-number">.0</span>+<span class="hljs-number">382</span>+<span class="hljs-number">15</span>b0afa8
<span class="hljs-attr">Architecture</span>: x86_64
Install <span class="hljs-built_in">Date</span>: (not installed)
<span class="hljs-attr">Group</span>       : System Environment/Daemons
<span class="hljs-attr">Size</span>        : <span class="hljs-number">5105105</span>
<span class="hljs-attr">License</span>     : ASL <span class="hljs-number">2.0</span>
<span class="hljs-attr">Signature</span>   : RSA/SHA256, Mon <span class="hljs-number">08</span> Jun <span class="hljs-number">2020</span> <span class="hljs-number">05</span>:<span class="hljs-number">08</span>:<span class="hljs-number">58</span> PM EDT, Key ID <span class="hljs-number">05</span>b555b38483c65d
Source RPM  : httpd<span class="hljs-number">-2.4</span><span class="hljs-number">.37</span><span class="hljs-number">-21.</span>module_el8<span class="hljs-number">.2</span><span class="hljs-number">.0</span>+<span class="hljs-number">382</span>+<span class="hljs-number">15</span>b0afa8.src.rpm
Build <span class="hljs-built_in">Date</span>  : Mon <span class="hljs-number">08</span> Jun <span class="hljs-number">2020</span> <span class="hljs-number">04</span>:<span class="hljs-number">15</span>:<span class="hljs-number">29</span> PM EDT
Build Host  : x86<span class="hljs-number">-02.</span>mbox.centos.org
<span class="hljs-attr">Relocations</span> : (not relocatable)
<span class="hljs-attr">Packager</span>    : CentOS Buildsys &lt;bugs@centos.org&gt;
Vendor      : CentOS
<span class="hljs-attr">URL</span>         : https:<span class="hljs-comment">//httpd.apache.org/</span>
Summary     : Apache HTTP Server
<span class="hljs-attr">Description</span> :
The Apache HTTP Server is a powerful, efficient, and extensible
web server.
[root@localhost ~]#
</code></pre><p>We can see the package was created by CentOS Buildsys and that it is indeed the Apache HTTP Server package.
Next, let's have a look at the configuration files:</p>
<pre><code>[root@localhost ~]# rpm -qpc httpd<span class="hljs-number">-2.4</span><span class="hljs-number">.37</span><span class="hljs-number">-21.</span>module_el8<span class="hljs-number">.2</span><span class="hljs-number">.0</span>+<span class="hljs-number">382</span>+<span class="hljs-number">15</span>b0afa8.x86_64.rpm 
/etc/httpd/conf.d/autoindex.conf
/etc/httpd/conf.d/userdir.conf
/etc/httpd/conf.d/welcome.conf
/etc/httpd/conf.modules.d/<span class="hljs-number">00</span>-base.conf
/etc/httpd/conf.modules.d/<span class="hljs-number">00</span>-dav.conf
/etc/httpd/conf.modules.d/<span class="hljs-number">00</span>-lua.conf
/etc/httpd/conf.modules.d/<span class="hljs-number">00</span>-mpm.conf
/etc/httpd/conf.modules.d/<span class="hljs-number">00</span>-optional.conf
/etc/httpd/conf.modules.d/<span class="hljs-number">00</span>-proxy.conf
/etc/httpd/conf.modules.d/<span class="hljs-number">00</span>-systemd.conf
/etc/httpd/conf.modules.d/<span class="hljs-number">01</span>-cgi.conf
/etc/httpd/conf/httpd.conf
/etc/httpd/conf/magic
/etc/logrotate.d/httpd
/etc/sysconfig/htcacheclean
[root@localhost ~]#
</code></pre><p>The main configuration file is <code>/etc/httpd/conf/httpd.conf</code>. The <code>welcome.conf</code> file defines the default home page for your website, until you add content. The <code>magic</code> file defines rules that the server can use to figure out a file's type when the server tries to open it. The <code>/etc/logrotate.d/httpd</code> file defines how log files produced by Apache are rotated.</p>
<p>Most Apache modules put their configuration files into the  <code>/etc/httpd/conf.d</code> directory but some may drop their configuration files into the <code>/etc/httpd/conf.modules.d/</code> directory. Any file in those directories that ends with the <code>.conf</code> extension is included in the main <code>httpd.conf</code> file and used to configure Apache.</p>
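<p>This drop-in mechanism works because the stock <code>httpd.conf</code> shipped with CentOS contains include directives along these lines (the exact wording may differ slightly between releases):</p>
<pre><code># Load module configuration first...
Include conf.modules.d/*.conf

# ...then pull in any site/module configuration dropped into conf.d
IncludeOptional conf.d/*.conf
</code></pre>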
<h1 id="heading-setting-up-a-basic-web-server">Setting Up a Basic Web Server</h1>
<p>Let's install the <code>httpd</code> package and some of the most commonly used additional packages using the <code>yum module install httpd</code> command:</p>
<pre><code>[root@localhost ~]# yum <span class="hljs-built_in">module</span> install httpd
...
...
Installed:
  apr<span class="hljs-number">-1.6</span><span class="hljs-number">.3</span><span class="hljs-number">-9.</span>el8.x86_64                                                    apr-util<span class="hljs-number">-1.6</span><span class="hljs-number">.1</span><span class="hljs-number">-6.</span>el8.x86_64                                          apr-util-bdb<span class="hljs-number">-1.6</span><span class="hljs-number">.1</span><span class="hljs-number">-6.</span>el8.x86_64                                  
  apr-util-openssl<span class="hljs-number">-1.6</span><span class="hljs-number">.1</span><span class="hljs-number">-6.</span>el8.x86_64                                       centos-logos-httpd<span class="hljs-number">-80.5</span><span class="hljs-number">-2.</span>el8.noarch                                 httpd<span class="hljs-number">-2.4</span><span class="hljs-number">.37</span><span class="hljs-number">-21.</span>module_el8<span class="hljs-number">.2</span><span class="hljs-number">.0</span>+<span class="hljs-number">382</span>+<span class="hljs-number">15</span>b0afa8.x86_64               
  httpd-filesystem<span class="hljs-number">-2.4</span><span class="hljs-number">.37</span><span class="hljs-number">-21.</span>module_el8<span class="hljs-number">.2</span><span class="hljs-number">.0</span>+<span class="hljs-number">382</span>+<span class="hljs-number">15</span>b0afa8.noarch             httpd-tools<span class="hljs-number">-2.4</span><span class="hljs-number">.37</span><span class="hljs-number">-21.</span>module_el8<span class="hljs-number">.2</span><span class="hljs-number">.0</span>+<span class="hljs-number">382</span>+<span class="hljs-number">15</span>b0afa8.x86_64             mod_http2<span class="hljs-number">-1.11</span><span class="hljs-number">.3</span><span class="hljs-number">-3.</span>module_el8<span class="hljs-number">.2</span><span class="hljs-number">.0</span>+<span class="hljs-number">307</span>+<span class="hljs-number">4</span>d18d695.x86_64            
  mod_ssl<span class="hljs-number">-1</span>:<span class="hljs-number">2.4</span><span class="hljs-number">.37</span><span class="hljs-number">-21.</span>module_el8<span class="hljs-number">.2</span><span class="hljs-number">.0</span>+<span class="hljs-number">382</span>+<span class="hljs-number">15</span>b0afa8.x86_64                   

Complete!
</code></pre><p>Open the main configuration file, <code>/etc/httpd/conf/httpd.conf</code>, and look for the <code>DocumentRoot</code> parameter.
This parameter specifies the default location where the Apache Web Server looks for content to serve. It should be set to <code>DocumentRoot "/var/www/html"</code>.
In the directory <code>/var/www/html</code>, create a file named <code>index.html</code> with the content <code>Welcome To My Webserver!</code>. Next, start and enable the <code>httpd</code> service and check that the service is up and running. </p>
<pre><code>[root@localhost ~]# echo <span class="hljs-string">"Welcome To My Webserver!"</span> &gt; <span class="hljs-regexp">/var/</span>www/html/index.html
[root@localhost ~]# 
[root@localhost ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@localhost ~]# 
[root@localhost ~]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   <span class="hljs-attr">Loaded</span>: loaded (<span class="hljs-regexp">/usr/</span>lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   <span class="hljs-attr">Active</span>: active (running) since Tue <span class="hljs-number">2020</span><span class="hljs-number">-10</span><span class="hljs-number">-06</span> <span class="hljs-number">11</span>:<span class="hljs-number">28</span>:<span class="hljs-number">24</span> EST; <span class="hljs-number">3</span>s ago
     <span class="hljs-attr">Docs</span>: man:httpd.service(<span class="hljs-number">8</span>)
 Main PID: <span class="hljs-number">33293</span> (httpd)
   <span class="hljs-attr">Status</span>: <span class="hljs-string">"Started, listening on: port 443, port 80"</span>
    <span class="hljs-attr">Tasks</span>: <span class="hljs-number">213</span> (limit: <span class="hljs-number">11323</span>)
   <span class="hljs-attr">Memory</span>: <span class="hljs-number">17.8</span>M
   <span class="hljs-attr">CGroup</span>: <span class="hljs-regexp">/system.slice/</span>httpd.service
           ├─<span class="hljs-number">33293</span> /usr/sbin/httpd -DFOREGROUND
           ├─<span class="hljs-number">33296</span> /usr/sbin/httpd -DFOREGROUND
           ├─<span class="hljs-number">33298</span> /usr/sbin/httpd -DFOREGROUND
           ├─<span class="hljs-number">33299</span> /usr/sbin/httpd -DFOREGROUND
           └─<span class="hljs-number">33301</span> /usr/sbin/httpd -DFOREGROUND

Oct <span class="hljs-number">06</span> <span class="hljs-number">11</span>:<span class="hljs-number">28</span>:<span class="hljs-number">24</span> localhost.localdomain systemd[<span class="hljs-number">1</span>]: Starting The Apache HTTP Server...
Oct <span class="hljs-number">06</span> <span class="hljs-number">11</span>:<span class="hljs-number">28</span>:<span class="hljs-number">24</span> localhost.localdomain httpd[<span class="hljs-number">33293</span>]: AH00558: httpd: Could not reliably determine the server<span class="hljs-string">'s fully qualified domain name, using localhost.localdomain. Set the '</span>ServerName<span class="hljs-string">' directive globally to &gt;
Oct 06 11:28:24 localhost.localdomain systemd[1]: Started The Apache HTTP Server.
Oct 06 11:28:24 localhost.localdomain httpd[33293]: Server configured, listening on: port 443, port 80</span>
</code></pre><p>When the <code>httpd</code> service starts, five <code>httpd</code> daemon processes are launched by default to respond to requests for the web server. You can configure more or fewer daemons to be started based on settings in the main configuration file. </p>
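<p>As a hedged sketch of that tuning (the numbers below are the common prefork defaults, which may vary per release), the process counts can be set in a drop-in file such as <code>/etc/httpd/conf.modules.d/00-mpm.conf</code> or a custom <code>.conf</code>:</p>
<pre><code>&lt;IfModule mpm_prefork_module&gt;
    # Number of daemon processes launched at startup
    StartServers       5
    # Keep between 5 and 10 idle workers available
    MinSpareServers    5
    MaxSpareServers   10
&lt;/IfModule&gt;
</code></pre>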
<p>We can verify it's working by making an http request to localhost using <code>curl</code>:</p>
<pre><code>[root@localhost ~]# curl http:<span class="hljs-comment">//localhost</span>
Welcome To My Webserver!
</code></pre><h1 id="heading-creating-apache-virtual-hosts">Creating Apache Virtual Hosts</h1>
<p>Apache supports the creation of separate websites within a single server. Individual sites are configured in what we refer to as <em>virtual hosts</em>, which are simply a way to make the content for multiple domain names available from the same Apache server. The content that is served to a web client is based on the (domain) name used to access the server.</p>
<p>For example, if a client reaches the server by requesting the name <code>www.example.org</code>, the request is directed to the virtual host container that has its <code>ServerName</code> parameter set to <code>www.example.org</code>.</p>
<p>Name-based virtual hosting is the most common solution, where virtual hosts use different names but the same IP address.
IP-based virtual hosts are less common but are required if the name of a web server must resolve to a unique IP address. This solution requires multiple IP addresses on the same machine.</p>
<p>In this section we'll be setting up name-based virtual hosts. </p>
<blockquote>
<p>If your Apache server is configured for virtual hosts, all sites it hosts should be handled by virtual hosts. If someone accesses the server via its IP address or a name that is not set in any virtual host, the first virtual host is used as the default location to serve content.<br />You can create a catch-all entry for those requests by creating a virtual host for <code>_default_:80</code>. </p>
</blockquote>
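<p>Such a catch-all entry could look like this (the content path is just an illustration):</p>
<pre><code>&lt;VirtualHost _default_:80&gt;
    # Served for any request that matches no other virtual host
    DocumentRoot /var/www/html/default
&lt;/VirtualHost&gt;
</code></pre>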
<p>Create a file named <code>example.org.conf</code> in <code>/etc/httpd/conf.d/</code> using the following template:</p>
<pre><code>&lt;VirtualHost *:<span class="hljs-number">80</span>&gt;

    ServerAdmin    webmaster@example.org
    ServerName    example.org
    ServerAlias www.example.org
    DocumentRoot /<span class="hljs-keyword">var</span>/www/html/example.org/

DirectoryIndex index.php index.html index.htm
&lt;/VirtualHost&gt;
</code></pre><p>This example includes the following settings:</p>
<ul>
<li>The <code>*:80</code> specification indicates to what address and port this virtual host applies. If your machine has multiple IP addresses, you can replace the <code>*</code> with an IP. The port is optional but should always be used to prevent interference with SSL virtual hosts (which use port 443).</li>
<li>The <code>ServerName</code> and <code>ServerAlias</code> lines tell Apache which names this virtual host should be recognized as. You can either leave out <code>ServerAlias</code> or specify more than one name on the same line, space separated. </li>
<li>The <code>DocumentRoot</code> specifies where the content for this virtual host is stored.</li>
<li>The <code>DirectoryIndex</code> directive sets the list of files to look for and serve when the web server receives a request.</li>
</ul>
<p>Create the <code>index.html</code> file inside the <code>DocumentRoot</code> with the following content: <code>Welcome To Example.org</code></p>
<pre><code>[root@localhost conf.d]# mkdir /<span class="hljs-keyword">var</span>/www/html/example.org
[root@localhost conf.d]# echo <span class="hljs-string">"Welcome To Example.org"</span> &gt; <span class="hljs-regexp">/var/</span>www/html/example.org/index.html
</code></pre><p>Create a second virtual host with different values, e.g.:</p>
<pre><code>[root@localhost conf.d]# cat foobar.com.conf 
&lt;VirtualHost *:<span class="hljs-number">80</span>&gt;

    ServerAdmin    webmaster@foobar.com
    ServerName    foobar.com
    ServerAlias    www.foobar.com
    DocumentRoot     /<span class="hljs-keyword">var</span>/www/html/foobar.com/

DirectoryIndex index.php index.html index.htm
&lt;/VirtualHost&gt;

[root@localhost conf.d]# mkdir /<span class="hljs-keyword">var</span>/www/html/foobar.com
[root@localhost conf.d]# echo <span class="hljs-string">"Welcome to Foobar.com!"</span> &gt; <span class="hljs-regexp">/var/</span>www/html/foobar.com/index.html
[root@localhost conf.d]#
</code></pre><p>Next, we want to make sure that the domains used in our virtual hosts resolve to our local machine and not to a host on the internet. Edit your hosts file and add the domains to the line that starts with the local loopback address:</p>
<pre><code>[root@localhost conf.d]# cat /etc/hosts
<span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>   localhost localhost.localdomain localhost4 localhost4.localdomain4 foobar.com www.foobar.com example.org www.example.org
</code></pre><p>After restarting the <code>httpd</code> service, we can test whether our setup works correctly:</p>
<pre><code>[root@localhost conf.d]# systemctl restart httpd
[root@localhost conf.d]# curl http:<span class="hljs-comment">//foobar.com</span>
Welcome to Foobar.com!
[root@localhost conf.d]# curl http:<span class="hljs-comment">//example.org</span>
Welcome To Example.org
</code></pre><p>This covered some Apache basics which we will need for testing advanced topics like firewall configuration and SELinux.</p>
]]></content:encoded></item><item><title><![CDATA[Network Services - Configuring SSH]]></title><description><![CDATA[{{< image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" >}}
Hardening the SSH Server
SSH is a convenient and important solution to establish remote connections to servers. If your SSH server is visible directly from the internet, ...]]></description><link>https://blog.joerismissaert.dev/network-services-configuring-ssh</link><guid isPermaLink="true">https://blog.joerismissaert.dev/network-services-configuring-ssh</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Fri, 02 Oct 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" &gt;}}</p>
<h1 id="heading-hardening-the-ssh-server">Hardening the SSH Server</h1>
<p>SSH is a convenient and important solution to establish remote connections to servers. If your SSH server is visible directly from the internet, you can be sure that sooner or later intruders will try to connect to it, intending to do harm.</p>
<p>Dictionary attacks are common against an SSH server. SSH servers usually offer their services on port 22, and every Linux server has a <code>root</code> account. Based on this information, it's easy for an attacker to try to log in as <code>root</code> by guessing the password if the password has limited complexity and no additional security measures are in place. Sooner or later the intruder will be able to connect.</p>
<p>We can protect ourselves against these kinds of attacks by:</p>
<ul>
<li>Disabling root login.</li>
<li>Disabling password login and using key-based authentication.</li>
<li>Configuring a non-default port for SSH to listen on.</li>
<li>Allowing only specific users to log in on SSH.</li>
</ul>
<h2 id="heading-limiting-root-access">Limiting Root access</h2>
<p>SSH servers have <code>root</code> login enabled by default, which is a big security concern. Disabling <code>root</code> login is easy: Modify the <code>PermitRootLogin</code> parameter in <code>/etc/ssh/sshd_config</code> and reload or restart the service:</p>
<pre><code># Authentication:

#LoginGraceTime <span class="hljs-number">2</span>m
PermitRootLogin no
#StrictModes yes
#MaxAuthTries <span class="hljs-number">6</span>
#MaxSessions <span class="hljs-number">10</span>
</code></pre><h2 id="heading-configuring-alternative-ports">Configuring Alternative Ports</h2>
<p>Security problems on Linux servers often start with a port scan issued by an attacker. There are 65,535 ports that can potentially be listening, and scanning all of them takes a lot of time, so most port scans focus on well-known ports only. Port 22 is always among these ports.</p>
<p>To protect against port scans we can configure the SSH server to listen on another port. You can choose a completely random port, as long as the port is not already in use by another service.</p>
<pre><code># If you want to change the port on a SELinux system, you have to tell
# SELinux about <span class="hljs-built_in">this</span> change.
# semanage port -a -t ssh_port_t -p tcp #PORTNUMBER
#
Port <span class="hljs-number">39860</span>
#AddressFamily any
#ListenAddress <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>
#ListenAddress ::
</code></pre><blockquote>
<p>To avoid being locked out of the server after changing the SSH listening port, it's a good idea to open two sessions. Use one session to apply the port change and test it, and use the other session to keep your current connection open. Active sessions are not disconnected after restarting the SSH server (unless the restart fails), so if something is wrong with the configuration and you're no longer able to connect, you still have the second session to fix the problem.</p>
</blockquote>
<h2 id="heading-modifying-selinux-to-allow-for-port-changes">Modifying SELinux to Allow for Port Changes</h2>
<p>After changing the SSH port you also need to configure SELinux to allow this change. Network ports are labeled with SELinux security labels to prevent services from accessing ports they shouldn't.</p>
<p>Use the <strong><em>semanage port</em></strong> command to change the label on the target port. Before doing so, it's a good idea to check if the port already has a label: <strong><em>semanage port -l</em></strong>, e.g. <code>semanage port -l | grep ssh</code></p>
<p>If the port doesn't have a label, use the <strong><em>-a</em></strong> option to add one; if it does have a label, use <strong><em>-m</em></strong> to modify the current security label.</p>
<p><code>semanage port -a -t ssh_port_t -p tcp 39860</code><br /><code>semanage port -m -t ssh_port_t -p tcp 443</code> </p>
<h2 id="heading-limiting-user-access">Limiting User Access</h2>
<p>The <code>AllowUsers</code> option takes a space-separated list of usernames that are allowed to log in through SSH. If the user <code>root</code> still needs to be able to log in, you'll have to include it in the list as well. </p>
<blockquote>
<p>This option does <em>not</em> appear anywhere in the <code>/etc/ssh/sshd_config</code> file by default.</p>
</blockquote>
<p>Another interesting option is <code>MaxAuthTries</code>. It specifies the maximum number of authentication attempts permitted per connection. <code>MaxAuthTries</code> is also useful for analyzing security events: failed login attempts are logged once the number of failures reaches half this value, and the higher the number of attempts, the more likely it is that an intruder is trying to get in.<br />SSH writes log information about failed login attempts to the AUTHPRIV syslog facility, which by default is configured to write to <code>/var/log/secure</code>.</p>
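<p>Putting these options together, a hardened snippet of <code>/etc/ssh/sshd_config</code> could look like this (the usernames are examples):</p>
<pre><code># Only these accounts may log in over SSH
AllowUsers joeri backupadmin

# Disconnect after 4 failed attempts; failures are logged from the 2nd on
MaxAuthTries 4
</code></pre>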
<h1 id="heading-other-useful-sshd-options">Other Useful sshd Options</h1>
<p>Apart from security-related options, there are some useful miscellaneous options you can use to tune performance.</p>
<h2 id="heading-session-options">Session Options</h2>
<p>On RHEL 8, the <code>GSSAPIAuthentication</code> option is set to <strong><em>yes</em></strong> by default. This option is only useful in an environment where Kerberos authentication is used; having it enabled slows down the authentication procedure.</p>
<p>The <code>UseDNS</code> option is also enabled by default and instructs the SSH server to look up the remote hostname and check with DNS that the hostname maps back to the same IP address (reverse DNS lookup). Although this option has some security benefits, it also involves a significant performance penalty. Set this to <code>no</code> if client connections are slow. </p>
<blockquote>
<p>To give an example of a reverse DNS lookup, assume you're connecting from a client with the <code>8.8.8.8</code> IP address. The SSH server will look up the PTR record for the <code>8.8.8.8.in-addr.arpa</code> domain, which resolves to <code>dns.google</code>. In turn, this result resolves back to <code>8.8.8.8</code>. The reverse DNS database of the Internet is rooted in the <code>.arpa</code> top-level domain.  </p>
</blockquote>
<pre><code>$ dig -x <span class="hljs-number">8.8</span><span class="hljs-number">.8</span><span class="hljs-number">.8</span>
;; ANSWER SECTION:
<span class="hljs-number">8.8</span><span class="hljs-number">.8</span><span class="hljs-number">.8</span>.in-addr.arpa.    <span class="hljs-number">76082</span>    IN    PTR    dns.google.

$ dig dns.google
;; ANSWER SECTION:
dns.google.        <span class="hljs-number">824</span>    IN    A    <span class="hljs-number">8.8</span><span class="hljs-number">.4</span><span class="hljs-number">.4</span>
dns.google.        <span class="hljs-number">824</span>    IN    A    <span class="hljs-number">8.8</span><span class="hljs-number">.8</span><span class="hljs-number">.8</span>
</code></pre><p>The <code>MaxSessions</code> option specifies the maximum number of sessions that can be opened from one IP address simultaneously. You might need to increase this option beyond the default value of 10.</p>
<h2 id="heading-connection-keepalive-options">Connection Keepalive Options</h2>
<p>The <code>TCPKeepAlive</code> option is used to monitor whether the client is still available.
This option is enabled by default and sends a keepalive probe packet with the ACK flag to the client after a certain amount of time. If a reply is received, the SSH server can assume that the connection is still up and running.</p>
<p>The <code>ClientAliveInterval</code> option sets an interval in seconds after which the server sends a packet to the client if no activity has been detected. The <code>ClientAliveCountMax</code> parameter specifies how many of these should be sent. So if the <code>ClientAliveInterval</code> is set to <code>30</code> and the <code>ClientAliveCountMax</code> to <code>10</code>, inactive connections are kept alive for about 5 minutes.</p>
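<p>As a sketch, the five-minute example above translates into the following <code>sshd_config</code> lines (30 seconds × 10 probes ≈ 300 seconds):</p>
<pre><code># Probe an idle client every 30 seconds; give up after 10 unanswered probes
ClientAliveInterval 30
ClientAliveCountMax 10
</code></pre>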
<blockquote>
<p>The equivalent client side options are <code>ServerAliveInterval</code> and <code>ServerAliveCountMax</code>, useful if you cannot change the configuration of the SSH server.</p>
</blockquote>
<h1 id="heading-configuring-key-based-authentication-with-passphrases">Configuring Key-Based Authentication with Passphrases</h1>
<p>By default, password authentication is allowed on RHEL 8 SSH servers. You can disable password authentication and allow public/private key-based authentication only by setting the <code>PasswordAuthentication</code> option to <code>no</code>.</p>
<p>When using key-based authentication you can set a passphrase which makes the key pair stronger. In case an intruder has access to the private key he would also need to know the passphrase before being able to use the key.</p>
<p>Without further configuration the use of passphrases would mean that users have to enter the passphrase every time before a connection can be created, which is inconvenient. To work around this we can cache the passphrase for a session:</p>
<ul>
<li>Execute the <code>ssh-agent /bin/bash</code> command to start the agent for the current (Bash) shell.</li>
<li>Execute <code>ssh-add</code> to add the passphrase for the current user's private key. The key is now cached.</li>
<li>Connect to the remote server, you'll notice you do not need to enter the passphrase.</li>
</ul>
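<p>The key pair itself is created with <code>ssh-keygen</code>. A minimal sketch, where the target directory, passphrase, and remote host are all placeholders:</p>
<pre><code># Generate a passphrase-protected ed25519 key pair in ./demo-keys
rm -f demo-keys/id_ed25519 demo-keys/id_ed25519.pub
mkdir -p demo-keys
ssh-keygen -t ed25519 -N 'my-passphrase' -f demo-keys/id_ed25519 -q

# Install the public key on the server (hypothetical host), then cache the
# passphrase for this shell session:
#   ssh-copy-id -i demo-keys/id_ed25519.pub user@server.example.com
#   eval "$(ssh-agent -s)" &amp;&amp; ssh-add demo-keys/id_ed25519
</code></pre>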
<h1 id="heading-copying-and-synchronizing-files-securely-over-ssh">Copying and synchronizing files securely over SSH</h1>
<p><code>scp</code> is a program for copying files securely between computers using the SSH protocol.<br />The basic usage is as follows:  </p>
<ul>
<li>To copy a local file to a remote host:  <code>scp localfile remote_host:remote_path</code>  </li>
<li>To copy a remote file to a local path: <code>scp remote_host:remote_file localpath</code>  </li>
<li>To copy entire directory trees, add the <code>-r</code> option: <code>scp -r remote_host:path/directory .</code></li>
</ul>
<p>Rsync, which stands for “remote sync”, is a remote and local file synchronization tool. It uses an algorithm that minimizes the amount of data copied by only moving the portions of files that have changed. The basic syntax is similar to that of <code>scp</code>: <code>rsync source destination</code>.</p>
<ul>
<li><code>rsync -anvzP --progress remote_host:/path/to/directory/ /some/local/path</code></li>
</ul>
<p>The <code>-a</code> option is a combination flag, it stands for "archive" and syncs recursively and preserves symbolic links, special and device files, modification times, group, owner, and permissions. You could use <code>-r</code> to only sync recursively instead.<br />The <code>-n</code> flag is the same as the <code>--dry-run</code> option and allows you to check results before actually running the synchronization. You need the <code>-v</code> flag (verbose) to get the appropriate output to verify.<br />The <code>-z</code> option can reduce network transfer by adding compression.<br />The <code>-P</code> flag combines the <code>--progress</code> and <code>--partial</code> options, it gives you a progress bar and allows you to resume interrupted transfers.<br />Finally, you can use the <code>-A</code> flag to preserve Access Control Lists, and the <code>-X</code> flag to preserve SELinux context labels.</p>
<blockquote>
<p>Notice the trailing slash <code>/</code> at the end of the first argument in the example command. This is necessary to include the contents of the source path. Without the trailing slash, <code>directory</code> would be created inside <code>/some/local/path</code>.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Introduction to Bash Shell Scripting]]></title><description><![CDATA[{{< image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" >}}
{{< image src="/img/bash_logo.png" alt="Bash logo" position="center" style="width:150px">}}
Core Elements
A shell script is a list of sequentially executed commands with ...]]></description><link>https://blog.joerismissaert.dev/introduction-to-bash-shell-scripting</link><guid isPermaLink="true">https://blog.joerismissaert.dev/introduction-to-bash-shell-scripting</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Fri, 25 Sep 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" &gt;}}
{{&lt; image src="/img/bash_logo.png" alt="Bash logo" position="center" style="width:150px"&gt;}}</p>
<h1 id="heading-core-elements">Core Elements</h1>
<p>A shell script is a list of sequentially executed commands, with optional scripting logic to allow code to be executed only under specific conditions. Starting a script from the parent shell opens a subshell from which the commands in the script are executed. These commands can be interpreted in different ways; to make it clear how they should be interpreted, the <em>shebang</em> is used on the first line of the script: <code>#!/bin/bash</code>, which executes the script in a Bash subshell. </p>
<p>The below script asks you for a path and stores the path in the <code>DIR</code> variable, then changes directory to the <code>DIR</code> value and prints the current working directory.</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash
# MyComment

echo Provide a path to a directory:
read DIR
cd $DIR
pwd
exit <span class="hljs-number">0</span>
</code></pre><p>When you execute this script, notice how your current working directory hasn't changed after the script has executed. This is because the script executes in a subshell of the parent shell from where you invoked the script.</p>
<p>At the end of the above script an <code>exit 0</code> statement is included. An exit statement tells the parent shell whether the script was successful: a <code>0</code> means it was successful, while anything else means a problem was encountered. </p>
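<p>The exit status of the last command is available in the special variable <code>$?</code>, which you can inspect directly on the command line:</p>
<pre><code>true          # a command that always succeeds
echo $?       # prints 0
false         # a command that always fails
echo $?       # prints 1
</code></pre>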
<p>A script needs to be executable. The most common way to make a script executable is by applying the execute permission to it. The script can also be executed as an argument to the <code>bash</code> command, e.g. <code>bash myscript.sh</code>. </p>
<p>You can store a script anywhere you like, but if it's stored outside of the <code>$PATH</code> you need to execute it with a <code>./</code> in front: <code>./myscript.sh</code>.</p>
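<p>As a quick end-to-end sketch (the filename is arbitrary), writing a script, marking it executable, and running it looks like this:</p>
<pre><code># Write a one-line script
printf '#!/bin/bash\necho Hello from a subshell\n' &gt; myscript.sh

# Make it executable and run it
chmod +x myscript.sh
./myscript.sh        # runs thanks to the execute permission
bash myscript.sh     # also works, even without the execute bit
</code></pre>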
<h1 id="heading-variables-and-input">Variables and Input</h1>
<p>Scripts typically aren't a list of sequential commands, they can work with variables and input to be more flexible.</p>
<h2 id="heading-positional-parameters">Positional Parameters</h2>
<p>When starting a script, arguments can be used. Arguments are anything you put behind the command while starting the script, e.g. <code>useradd lisa</code>, where the command is <code>useradd</code> and the argument is <code>lisa</code>. In a script, the first argument is referred to as <code>$1</code>, the second as <code>$2</code>, and so on. </p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash
# Run <span class="hljs-built_in">this</span> script <span class="hljs-keyword">with</span> a few <span class="hljs-built_in">arguments</span>

echo The first argument is $<span class="hljs-number">1</span>
echo The <span class="hljs-number">2n</span>d argument is $<span class="hljs-number">2</span>
echo The <span class="hljs-number">3</span>rd argument is $<span class="hljs-number">3</span>
</code></pre><p>Run the above script with a few arguments, and it will make sense:<br /><code>./script 1 2 3 4</code><br />You'll notice the 4th argument, <code>4</code>, isn't echoed. We can work around that by making the script more flexible, using a <code>for</code> loop instead of echoing each argument one after the other:</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash
# Run <span class="hljs-built_in">this</span> script <span class="hljs-keyword">with</span> a few <span class="hljs-built_in">arguments</span>

echo You have entered $# <span class="hljs-built_in">arguments</span>.

for i <span class="hljs-keyword">in</span> <span class="hljs-string">"$@"</span>
  <span class="hljs-keyword">do</span> echo $i
done

exit <span class="hljs-number">0</span>
</code></pre><p><code>$#</code> is a counter that shows how many arguments were used when starting the script.<br /><code>$@</code> refers to all arguments used when starting the script. 
In the above script, the condition is <code>for i in "$@"</code>, which means "for each argument in the list of arguments". I'll cover more on <code>for</code> loops later, but what this script basically does is loop through the list of arguments (<code>$@</code>) and echo each one (<code>do echo $i</code>). </p>
<h2 id="heading-variables">Variables</h2>
<p>Variables are labels that refer to a specific location in memory which contains a specific value. They can be defined statically or dynamically. A variable is defined by putting the <code>=</code> sign directly after its name (by convention uppercase), followed by the value. You should never use spaces around the <code>=</code> sign:<br /><code>MYVAR=value</code>, this would be a statically defined variable.</p>
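<p>For example (with spaces around the <code>=</code>, bash would instead try to run a command named <code>MYVAR</code>):</p>

```shell
MYVAR=value     # correct: statically defined variable, no spaces around '='
echo "$MYVAR"   # prints: value
# MYVAR = value # wrong: bash would look for a command called MYVAR
```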
<p>There are two solutions for defining variables dynamically:</p>
<ul>
<li>Using <code>read</code> in the script to ask the user for input. It pauses the script so input can be processed and stored in a variable:  <pre><code>[joeri@Ryzen7 ~]$ read NAME
joeri
[joeri@Ryzen7 ~]$ echo $NAME
joeri
</code></pre></li>
<li>Using command substitution, where you assign the result of a specific command to a variable. For example: <code>TODAY=$(date +%d-%m-%y)</code>.<br />You enclose the command whose result you want to use between parentheses and precede that with a <code>$</code> sign.  <pre><code>[joeri@Ryzen7 ~]$ TODAY=$(date +%d-%m-%y)
[joeri@Ryzen7 ~]$ echo $TODAY
<span class="hljs-number">31</span><span class="hljs-number">-10</span><span class="hljs-number">-20</span>
</code></pre></li>
</ul>
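<p>Command substitution is often used to build dynamic values such as file names. A small sketch (the <code>BACKUP</code> variable name is just an example):</p>

```shell
TODAY=$(date +%d-%m-%y)        # command substitution: capture the command's output
BACKUP="backup-$TODAY.tar.gz"  # use the result inside another variable
echo "$BACKUP"                 # e.g. backup-31-10-20.tar.gz
```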
<h1 id="heading-conditional-loops">Conditional Loops</h1>
<p>Conditional loops are executed only if a certain condition is true. I'll cover the most often used conditional loops in this section.</p>
<h2 id="heading-if-then-else">if ... then ... else</h2>
<p>This construction is commonly used to evaluate specific conditions and is often combined with the <code>test</code> command. Have a look at the man page of <code>test</code> for a complete overview of all its functionality.</p>
<p>Let's look at an example:</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash
# MyComment

<span class="hljs-keyword">if</span> [ -z $<span class="hljs-number">1</span> ]
then
  echo No value provided
fi
</code></pre><p>The  <code>-z</code> test command checks if the length of a string is zero (<code>man test</code>). If that is true, then "No value provided" will be echoed to the screen.
The above script will only provide output if you run it without any argument.</p>
<p>Below is another example using multiple test commands:</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash
# Run <span class="hljs-built_in">this</span> script <span class="hljs-keyword">with</span> one argument.
# Find out <span class="hljs-keyword">if</span> the argument is a file or a directory

<span class="hljs-keyword">if</span> [ -f $<span class="hljs-number">1</span> ]
then
  echo <span class="hljs-string">"$1 is a file"</span>
elif [ -d $<span class="hljs-number">1</span> ]
then
  echo <span class="hljs-string">"$1 is a directory"</span>
<span class="hljs-keyword">else</span>
  echo <span class="hljs-string">"Not sure what $1 is...."</span>
fi

exit <span class="hljs-number">0</span>
</code></pre><h2 id="heading-or-and-andampandamp">|| and &amp;&amp;</h2>
<p>Instead of writing full <code>if ... then</code> statements we can use logical operators. <code>||</code> is a logical OR and will execute the second part of the statement only if the first part is <em>not</em> true. <code>&amp;&amp;</code> is a logical AND, and will execute the second part of the statement only if the first part <em>is</em> true.
"True" is the state where a command exits with status <code>0</code>. </p>
<pre><code>[ -z $<span class="hljs-number">1</span> ] &amp;&amp; echo no argument provided
ping -c <span class="hljs-number">1</span> <span class="hljs-number">192.168</span><span class="hljs-number">.1</span><span class="hljs-number">.256</span> || echo node does not exist
</code></pre><h2 id="heading-for-do-done">For ... do ... done</h2>
<p>The <code>for</code> conditional loop provides a solution for processing ranges of data. It always starts with <code>for</code> followed by the condition, then <code>do</code> followed by the commands to be executed when the condition is true, and finally closed with <code>done</code>.</p>
<p>In the below example the COUNTER variable is initialized with a value of 10. As long as the value is greater than or equal to 0, we echo the value of COUNTER and then subtract 1:</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash
#

<span class="hljs-keyword">for</span> (( COUNTER=<span class="hljs-number">10</span>; COUNTER&gt;=<span class="hljs-number">0</span>; COUNTER--))
<span class="hljs-keyword">do</span>
  echo $COUNTER
done
exit <span class="hljs-number">0</span>
</code></pre><p>We can also define a range by specifying the first number followed by two dots and closing with the last number in the range:</p>
<pre><code>[joeri@Ryzen7 ~]$ <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> {<span class="hljs-number">85.</span><span class="hljs-number">.90</span>}; <span class="hljs-keyword">do</span> ping -c <span class="hljs-number">1</span> <span class="hljs-number">192.168</span><span class="hljs-number">.100</span>.$i &gt;<span class="hljs-regexp">/dev/</span><span class="hljs-literal">null</span> &amp;&amp; echo <span class="hljs-number">192.168</span><span class="hljs-number">.100</span>.$i is UP; done
<span class="hljs-number">192.168</span><span class="hljs-number">.100</span><span class="hljs-number">.88</span> is UP
</code></pre><p>With <code>for i in</code> each of the numbers in the range is assigned to the variable <code>i</code>. For each of those values the <code>ping -c 1</code> command is executed, and output is redirected to <code>/dev/null</code> since we don't need it. Based on the exit status of the <code>ping</code> command, <code>exit 0</code> or <code>true</code>, the part behind the logical operator <code>&amp;&amp;</code> is executed.</p>
<h2 id="heading-while-and-until">While and until</h2>
<p>The <code>while</code> statement is useful if you want to do something as long as a condition is true. Its counterpart is <code>until</code> which keeps the iteration open as long as the condition is false, or until the condition is true.</p>
<p>The below script initializes COUNTER with a value of 0, and <em>while</em> the value is <em>less than</em> 11 we echo the value and increase it by 1:</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash
#

COUNTER=<span class="hljs-number">0</span>

<span class="hljs-keyword">while</span> [ $COUNTER -lt <span class="hljs-number">11</span> ]; <span class="hljs-keyword">do</span>
  echo The counter is $COUNTER
  (( COUNTER=COUNTER+<span class="hljs-number">1</span> ))
done
</code></pre><p>Below we echo the value of COUNTER and increase it by 1 <em>until</em> the value is equal to 11. At that point we break out of the loop:</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash
#

COUNTER=<span class="hljs-number">0</span>

until [ $COUNTER = <span class="hljs-number">11</span> ]; <span class="hljs-keyword">do</span>
  echo The counter is $COUNTER
  (( COUNTER=COUNTER+<span class="hljs-number">1</span> ))
done
</code></pre><h2 id="heading-case">Case</h2>
<p>The <code>case</code> statement is used to evaluate a number of expected values: you define the specific arguments you expect, each followed by the command that needs to be executed if that argument was used.  </p>
<p>The generic syntax is <code>case item-to-evaluate in</code>, followed by a list of all possible values that need to be evaluated. Each item is closed with a <code>)</code>. Then follows a list of commands that are executed if the specific argument was used; the commands are closed with a double semicolon, <code>;;</code>.   </p>
<p>The evaluations in <code>case</code> are performed in order. Once the first match is made, the <code>case</code> statement will not evaluate anything else. Within the evaluation, wildcard-like patterns can be used, for example <code>*)</code>, which is a "catchall" statement.</p>
<pre><code>#!<span class="hljs-regexp">/bin/</span>bash

echo -n <span class="hljs-string">"Enter the name of a country: "</span>
read COUNTRY

echo -n <span class="hljs-string">"The official language of $COUNTRY is "</span>

<span class="hljs-keyword">case</span> $COUNTRY <span class="hljs-keyword">in</span>

  Lithuania)
    echo -n <span class="hljs-string">"Lithuanian"</span>
    ;;

  Romania | Moldova)
    echo -n <span class="hljs-string">"Romanian"</span>
    ;;

  Italy | <span class="hljs-string">"San Marino"</span> | Switzerland | <span class="hljs-string">"Vatican City"</span>)
    echo -n <span class="hljs-string">"Italian"</span>
    ;;

  *)
    echo -n <span class="hljs-string">"unknown"</span>
    ;;
esac
</code></pre><h1 id="heading-script-debugging">Script debugging</h1>
<p>If a script does not do what you expect it to do, try starting it as an argument to the <code>bash -x</code> command. This will show you line by line what the script is trying to do and will show specific errors if it does not work. </p>
<pre><code>[joeri@Ryzen7 ~]$ bash -x lang.sh 
+ echo -n <span class="hljs-string">'Enter the name of a country: '</span>
Enter the name <span class="hljs-keyword">of</span> a country: + read COUNTRY
Germany
+ echo -n <span class="hljs-string">'The official language of Germany is '</span>
The official language <span class="hljs-keyword">of</span> Germany is + <span class="hljs-keyword">case</span> $COUNTRY <span class="hljs-keyword">in</span>
+ echo -n unknown
unknown
</code></pre>]]></content:encoded></item><item><title><![CDATA[Troubleshooting Boot Issues]]></title><description><![CDATA[{{< image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" >}}
The RHEL8 Boot Procedure
In order to fix boot issues we need to be able to judge in which phase of the boot procedure the issue occurs so we can apply appropriate means t...]]></description><link>https://blog.joerismissaert.dev/troubleshooting-boot-issues</link><guid isPermaLink="true">https://blog.joerismissaert.dev/troubleshooting-boot-issues</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Fri, 18 Sep 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" &gt;}}</p>
<h2 id="heading-the-rhel8-boot-procedure">The RHEL8 Boot Procedure</h2>
<p>In order to fix boot issues we need to be able to judge in which phase of the boot procedure the issue occurs so we can apply appropriate means to fix it.
The following steps summarize the boot procedure:</p>
<ul>
<li><strong>POST</strong> - The machine is powered on, the Power-On-Self-Test executes and hardware required to start the system is initialized.</li>
<li><strong>Boot device selection</strong> - From UEFI or BIOS, a bootable device is located.</li>
<li><strong>Loading the boot loader</strong> - From the bootable device a boot loader is located.</li>
<li><strong>Loading the kernel</strong> - The kernel is loaded together with the initramfs. The initramfs contains kernel modules required to boot as well as initial scripts to proceed to the next stage of booting.</li>
<li><strong>Starting /sbin/init</strong> - The first process is loaded, <code>/sbin/init</code>, which is a symlink to Systemd. The udev daemon is loaded to take care of further hardware initialization. This all happens from initramfs.</li>
<li><strong>Process initrd.target</strong> - The Systemd process executes all units from the initrd.target, preparing a minimal operating environment from where the root file system on disk is mounted onto the <code>/sysroot</code> directory.</li>
<li><strong>Switch to root file system</strong> - The system switches to the root file system on disk and loads the Systemd process from disk.</li>
<li><strong>Running the default target</strong> - Systemd looks for the default target to execute and runs all of its units.</li>
</ul>
<p>The below table summarizes where a specific phase is configured and what you can do to troubleshoot if something goes wrong.</p>
<table>
<tr><th>Phase</th><th>Configuration</th><th>Fix</th></tr>
<tr><td><strong>POST</strong></td><td>Hardware configuration, BIOS, UEFI</td><td>Replace hardware</td></tr>
<tr><td><strong>Boot Device</strong></td><td>BIOS/UEFI configuration or boot menu</td><td>Replace hardware or use rescue system</td></tr>
<tr><td><strong>Boot Loader</strong></td><td><code>grub2-install</code> and edits to <code>/etc/default/grub</code></td><td>GRUB boot menu, edits to <code>/etc/default/grub</code> followed by <code>grub2-mkconfig</code></td></tr>
<tr><td><strong>Kernel</strong></td><td>Edits to GRUB config and <code>/etc/dracut.conf</code></td><td>GRUB boot menu, edits to <code>/etc/default/grub</code> followed by <code>grub2-mkconfig</code></td></tr>
<tr><td><strong>/sbin/init</strong></td><td>Compiled into initramfs</td><td><strong>init=</strong> kernel boot argument, <strong>rd.break</strong> kernel boot argument, recreate initramfs</td></tr>
<tr><td><strong>initrd.target</strong></td><td>Compiled into initramfs</td><td>Recreate initramfs</td></tr>
<tr><td><strong>Root file system</strong></td><td>Edits to <code>/etc/fstab</code></td><td>Edits to <code>/etc/fstab</code></td></tr>
<tr><td><strong>Default Target</strong></td><td><code>systemctl set-default</code></td><td>Start <strong>rescue.target</strong> as a kernel boot argument</td></tr>
</table>
<h2 id="heading-passing-kernel-boot-arguments">Passing Kernel Boot Arguments</h2>
<p>The GRUB boot prompt offers a way to stop the boot procedure and pass specific options to the kernel.
When you see the GRUB2 menu, type <strong>e</strong> to enter a mode where you can edit commands and scroll down to the section that begins with <code>linux ($root)/vmlinuz</code>. This line tells GRUB how to start a kernel and looks similar to this:</p>
<pre><code>linux ($root)/vmlinuz<span class="hljs-number">-4.18</span><span class="hljs-number">.0</span><span class="hljs-number">-193.19</span><span class="hljs-number">.1</span>.el8_2.x86_64 root=<span class="hljs-regexp">/dev/m</span>apper/cl-root ro crash kernel-auto resume=<span class="hljs-regexp">/dev/m</span>apper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet
</code></pre><p>Additional boot arguments need to be added to the end of this line.</p>
<p>The <strong>rhgb</strong> and <strong>quiet</strong> boot options hide boot messages; we can remove these in order to see what's happening when we boot the machine.
Once you have made the necessary changes, press <code>CTRL+X</code> to start the kernel. Note that this change is not persistent; to make it persistent we must modify the content of <code>/etc/default/grub</code> and use <strong>grub2-mkconfig -o /boot/grub2/grub.cfg</strong> to apply the change.</p>
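<p>A sketch of the persistent change, practiced here on a throwaway copy so nothing on the system is overwritten (on a real machine you would edit <code>/etc/default/grub</code> itself and then regenerate <code>/boot/grub2/grub.cfg</code>):</p>

```shell
# Work on a sample copy instead of the real /etc/default/grub
printf 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > /tmp/grub
sed -i 's/ rhgb quiet//' /tmp/grub    # drop the options that hide boot messages
grep GRUB_CMDLINE_LINUX /tmp/grub     # verify the edit

# On the real system you would then apply the change with:
# grub2-mkconfig -o /boot/grub2/grub.cfg
```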
<h2 id="heading-starting-a-troubleshooting-target">Starting a Troubleshooting Target</h2>
<p>In the GRUB boot prompt we can use several options to allow us to fix our issue:</p>
<ul>
<li><strong>rd.break</strong> - Stops the boot procedure in the initramfs phase. This option is useful if you don't have the root password.</li>
<li><strong>init=/bin/bash</strong> - A shell is started immediately after loading the kernel and initramfs, instead of the regular <code>/sbin/init</code> process.</li>
<li><strong>systemd.unit=emergency.target</strong> - Enters a mode that loads the bare minimum of required Systemd units, it requires a root password.</li>
<li><strong>systemd.unit=rescue.target</strong> - Starts more Systemd units to bring up a more complete operational mode. </li>
</ul>
<h2 id="heading-using-a-rescue-disk">Using a Rescue Disk</h2>
<p>The default rescue image for RHEL is on the installation disk. When booting from the installation disk you'll see a <code>Troubleshooting</code> menu item which presents you with the following options:</p>
<ul>
<li><strong>Install RHEL in Basic Graphics Mode</strong> - This option reinstalls the machine. You should not use it unless a normal installation does not work and you need basic graphics mode.</li>
<li><strong>Rescue a RHEL System</strong> - This option prompts you to press Enter to start the installation, but only loads a rescue system. It does not overwrite the current configuration. The rescue system will try to find an installed Linux system and mount it on <code>/mnt/sysroot</code>. If a valid installation was found and mounted you can press Enter twice to access the rescue shell. At this point we can switch to the root file system on disk to access all the tools we need to repair the system: <code>chroot /mnt/sysroot</code></li>
<li><strong>Run a Memory Test</strong> - If you encounter memory errors this tool allows you to mark bad memory chips so you can boot your machine normally.</li>
<li><strong>Boot from Local Drive</strong> - If you cannot boot from GRUB on your usual boot device try this option. It offers a boot loader that will try to load the OS from your hard disk.</li>
</ul>
<h3 id="heading-reinstalling-grub-using-a-rescue-disk">Reinstalling Grub Using a Rescue Disk</h3>
<p>One of the most common reasons to start a rescue disk is if the GRUB2 boot loader breaks. Once you have access to your machine using the rescue disk, reinstalling GRUB2 is a two step process:</p>
<ul>
<li>Make sure you switch to the root file system on disk: <code>chroot /mnt/sysroot</code></li>
<li>Use <strong>grub2-install</strong> followed by the name of the device on which you want to reinstall GRUB2, i.e. <code>grub2-install /dev/sda</code></li>
</ul>
<h3 id="heading-recreating-initramfs-using-a-rescue-disk">Recreating Initramfs Using a Rescue Disk</h3>
<p>You know there is a problem with initramfs when you never see the root file system getting mounted on the root directory and don't see any Systemd unit files being started when analyzing the boot procedure.</p>
<p>To repair the initramfs image after booting into the rescue environment you can use the <strong>dracut</strong> command. <strong>dracut --force</strong> overwrites the existing initramfs and creates a new initramfs image for the currently loaded kernel. There is also the <code>/etc/dracut.conf</code> configuration file you can use to include specific options while re-creating initramfs. The <strong>dracut</strong> configuration itself is dispersed over several locations:</p>
<ul>
<li><code>/usr/lib/dracut/dracut.conf.d/</code> - Contains the system default configuration files</li>
<li><code>/etc/dracut.conf.d/</code> - Contains custom dracut configuration files</li>
<li><code>/etc/dracut.conf</code> - The master configuration file</li>
</ul>
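<p>A custom drop-in is just a shell-style configuration fragment. A hypothetical example that forces a driver into every newly built initramfs could look like this (the file name and driver module are illustrative, not from the original post):</p>

```shell
# /etc/dracut.conf.d/mydrivers.conf  (hypothetical drop-in file)
# Always include the e1000e network driver in the initramfs
add_drivers+=" e1000e "
```

After adding a drop-in, run <strong>dracut --force</strong> so the setting is picked up in a freshly built image.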
<h2 id="heading-recovering-from-file-system-issues">Recovering from File System Issues</h2>
<p>When there is a misconfiguration in the file system mounts the boot procedure may end with the "Give root password for maintenance" message. If a device does not exist or there's an error in the UUID, for example, Systemd waits to see if the device comes back online by itself. When that doesn't happen, the "Give root password for maintenance" message appears.</p>
<p>After entering the root password, issue the <strong>journalctl -xb</strong> command to see if relevant messages providing information about what is wrong are written to the journal. If the problem is indeed file system oriented we need to make sure the root file system is mounted with read/write rights, analyze what's wrong in <code>/etc/fstab</code> and fix that: <code>mount -o remount,rw /</code></p>
<h2 id="heading-resetting-the-root-password">Resetting the Root Password</h2>
<p>When the root password is lost, the only way to reset it is to boot into minimal mode which allows you to login without using a password:</p>
<ul>
<li>Pass the <strong>rd.break</strong> boot argument to the kernel</li>
<li>Boot the system</li>
<li>The boot procedure stops after loading initramfs and before mounting the root file system.</li>
<li>Re-mount the root file system on disk to get read/write access to the system image: <code>mount -o remount,rw /sysroot</code></li>
<li>Make the contents of the <code>/sysroot</code> directory the new root directory: <code>chroot /sysroot</code></li>
<li>Use the <strong>passwd</strong>  command to set the new password.</li>
<li>Load the SELinux policy: <code>load_policy -i</code></li>
<li>Set the correct SELinux context type on <code>/etc/shadow</code>: <code>chcon -t shadow_t /etc/shadow</code></li>
<li>Reboot by issuing the <strong>exit</strong> command twice. Use the new root password at the next boot.</li>
</ul>
<blockquote>
<p>An alternative to applying the SELinux context to <code>/etc/shadow</code> is to create the <code>/.autorelabel</code> file which forces SELinux to restore labels set on the entire file system the next time the system is booted.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Managing Systemd Targets and Working with GRUB2]]></title><description><![CDATA[{{< image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" >}}
Managing Systemd Targets
A Systemd target is a group of units belonging together, some of these targets can be used to define the state a system should boot in. These tar...]]></description><link>https://blog.joerismissaert.dev/managing-systemd-targets-and-working-with-grub2</link><guid isPermaLink="true">https://blog.joerismissaert.dev/managing-systemd-targets-and-working-with-grub2</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Fri, 11 Sep 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" &gt;}}</p>
<h2 id="heading-managing-systemd-targets">Managing Systemd Targets</h2>
<p>A Systemd target is a group of units belonging together, some of these targets can be used to define the state a system should boot in. These targets can be isolated and have the <code>AllowIsolate</code> property in their <code>[Unit]</code> section.<br />Four targets can be used to boot into:</p>
<ul>
<li><strong>emergency.target</strong> : A minimal number of units are started.</li>
<li><strong>rescue.target</strong> : A fully operational Linux system without nonessential services.</li>
<li><strong>multi-user.target</strong> : The default target commonly used on servers, starts everything needed for full system functionality.</li>
<li><strong>graphical.target</strong> : Starts all units needed for full system functionality as well as a graphical interface.</li>
</ul>
<p>A target configuration consists of two parts, the target unit file and the "wants" directory that contains references to all unit files that need to be loaded when entering that specific target. They can also have other targets as dependencies, specified in the target unit file.</p>
<pre><code>[root@server1 ~]# systemctl cat multi-user.target 
# /usr/lib/systemd/system/multi-user.target
#  SPDX-License-Identifier: LGPL<span class="hljs-number">-2.1</span>+
#
#  This file is part <span class="hljs-keyword">of</span> systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms <span class="hljs-keyword">of</span> the GNU Lesser General Public License <span class="hljs-keyword">as</span> published by
#  the Free Software Foundation; either version <span class="hljs-number">2.1</span> <span class="hljs-keyword">of</span> the License, or
#  (at your option) any later version.

[Unit]
Description=Multi-User System
Documentation=man:systemd.special(<span class="hljs-number">7</span>)
Requires=basic.target
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes
</code></pre><p>The target unit doesn't contain much: it defines what it requires and which services and targets it can't coexist with. The <code>After</code> statement in the <code>[Unit]</code> section also defines load ordering. It does not contain any information about units that it "wants".</p>
<h3 id="heading-understanding-wants">Understanding Wants</h3>
<p>Wants define which units should start when booting or starting a specific target.
Wants are created when enabling units using <code>systemctl enable</code>; this happens by creating a symbolic link in the <code>/etc/systemd/system</code> directory. This directory contains a subdirectory for every target, which in turn contains "wants" as symbolic links to the specific services that should be started:</p>
<pre><code>[root@server1 ~]# ls -l /etc/systemd/system/multi-user.target.wants/
total <span class="hljs-number">0</span>
lrwxrwxrwx. <span class="hljs-number">1</span> root root <span class="hljs-number">35</span> Sep <span class="hljs-number">26</span> <span class="hljs-number">17</span>:<span class="hljs-number">46</span> atd.service -&gt; <span class="hljs-regexp">/usr/</span>lib/systemd/system/atd.service
lrwxrwxrwx. <span class="hljs-number">1</span> root root <span class="hljs-number">38</span> Sep <span class="hljs-number">26</span> <span class="hljs-number">17</span>:<span class="hljs-number">44</span> auditd.service -&gt; <span class="hljs-regexp">/usr/</span>lib/systemd/system/auditd.service
lrwxrwxrwx. <span class="hljs-number">1</span> root root <span class="hljs-number">44</span> Sep <span class="hljs-number">26</span> <span class="hljs-number">17</span>:<span class="hljs-number">46</span> avahi-daemon.service -&gt; <span class="hljs-regexp">/usr/</span>lib/systemd/system/avahi-daemon.service
lrwxrwxrwx. <span class="hljs-number">1</span> root root <span class="hljs-number">39</span> Sep <span class="hljs-number">26</span> <span class="hljs-number">17</span>:<span class="hljs-number">45</span> chronyd.service -&gt; <span class="hljs-regexp">/usr/</span>lib/systemd/system/chronyd.service
lrwxrwxrwx. <span class="hljs-number">1</span> root root <span class="hljs-number">37</span> Sep <span class="hljs-number">26</span> <span class="hljs-number">17</span>:<span class="hljs-number">44</span> crond.service -&gt; <span class="hljs-regexp">/usr/</span>lib/systemd/system/crond.service
...
</code></pre><p>The <code>[Install]</code> section in a service unit file specifies the target it is "wanted" by. Enabling the service creates a symbolic link in that target's "wants" directory, making sure it starts when that target is booted into or started.</p>
<pre><code>[root@server1 ~]# systemctl cat httpd.service 
...
[Install]
WantedBy=multi-user.target

[root@server1 ~]# systemctl enable httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
</code></pre><h3 id="heading-isolating-targets">Isolating Targets</h3>
<p>To get a list of all targets that are currently loaded, we can use the <code>systemctl --type=target</code> command. This shows all currently active targets. The <code>systemctl --type=target --all</code> command also shows inactive targets.</p>
<pre><code>[root@server1 ~]# systemctl --type=target
UNIT                   LOAD   ACTIVE SUB    DESCRIPTION                
basic.target           loaded active active Basic System               
cryptsetup.target      loaded active active Local Encrypted Volumes    
getty.target           loaded active active Login Prompts              
graphical.target       loaded active active Graphical Interface        
local-fs-pre.target    loaded active active Local File Systems (Pre)   
local-fs.target        loaded active active Local File Systems         
multi-user.target      loaded active active Multi-User System          
network-online.target  loaded active active Network is Online          
network.target         loaded active active Network                    
nfs-client.target      loaded active active NFS client services        
nss-user-lookup.target loaded active active User and Group Name Lookups
paths.target           loaded active active Paths                      
remote-fs-pre.target   loaded active active Remote File Systems (Pre)  
remote-fs.target       loaded active active Remote File Systems        
rpc_pipefs.target      loaded active active rpc_pipefs.target          
rpcbind.target         loaded active active RPC Port Mapper            
slices.target          loaded active active Slices                     
sockets.target         loaded active active Sockets                    
sound.target           loaded active active Sound Card                 
sshd-keygen.target     loaded active active sshd-keygen.target         
swap.target            loaded active active Swap                       
sysinit.target         loaded active active System Initialization      
timers.target          loaded active active Timers                     

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization <span class="hljs-keyword">of</span> SUB.
SUB    = The low-level unit activation state, values depend on unit type.

<span class="hljs-number">23</span> loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use <span class="hljs-string">'systemctl list-unit-files'</span>.
</code></pre><p>Some of these targets can be isolated: they can be started to define the state of the machine, and they are also the targets that can be set as the default target. They roughly correspond to the following System V runlevels:</p>
<table>
<tr><th>Target</th><th>Runlevel</th></tr>
<tr><td>poweroff.target</td><td>runlevel 0</td></tr>
<tr><td>rescue.target</td><td>runlevel 1</td></tr>
<tr><td>multi-user.target</td><td>runlevel 3</td></tr>
<tr><td>graphical.target</td><td>runlevel 5</td></tr>
<tr><td>reboot.target</td><td>runlevel 6</td></tr>
</table>
<p>As mentioned earlier, targets that can be isolated have the <code>AllowIsolate</code> property in their <code>[Unit]</code> section:</p>
<pre><code>[root@server1 system]# grep Isolate *.target
anaconda.target:AllowIsolate=yes
ctrl-alt-del.target:AllowIsolate=yes
<span class="hljs-keyword">default</span>.target:AllowIsolate=yes
emergency.target:AllowIsolate=yes
exit.target:AllowIsolate=yes
graphical.target:AllowIsolate=yes
halt.target:AllowIsolate=yes
initrd-<span class="hljs-keyword">switch</span>-root.target:AllowIsolate=yes
initrd.target:AllowIsolate=yes
kexec.target:AllowIsolate=yes
multi-user.target:AllowIsolate=yes
poweroff.target:AllowIsolate=yes
reboot.target:AllowIsolate=yes
rescue.target:AllowIsolate=yes
runlevel0.target:AllowIsolate=yes
runlevel1.target:AllowIsolate=yes
runlevel2.target:AllowIsolate=yes
runlevel3.target:AllowIsolate=yes
runlevel4.target:AllowIsolate=yes
runlevel5.target:AllowIsolate=yes
runlevel6.target:AllowIsolate=yes
system-update.target:AllowIsolate=yes
</code></pre><p>To switch the current state of your machine to either one of these targets, use the <code>systemctl isolate</code> command:
<code>systemctl isolate rescue.target</code>
<code>systemctl isolate reboot.target</code></p>
<p>We can set a default target using the <code>systemctl set-default</code> command, or check the current default target using the <code>systemctl get-default</code> command. Notice how the existing symlink is removed and a new one is created for <code>default.target</code>:</p>
<pre><code>[root@server1 system]# systemctl get-<span class="hljs-keyword">default</span> 
graphical.target

[root@server1 system]# systemctl set-<span class="hljs-keyword">default</span> multi-user.target 
Removed /etc/systemd/system/<span class="hljs-keyword">default</span>.target.
Created symlink /etc/systemd/system/<span class="hljs-keyword">default</span>.target → /usr/lib/systemd/system/multi-user.target.
</code></pre><h2 id="heading-working-with-grub2">Working with GRUB2</h2>
<p>The GRUB2 bootloader makes sure we can boot Linux; it's installed in the boot sector of the hard drive and loads a Linux kernel and initramfs.
The initramfs contains a mini file system, mounted during boot, from which kernel modules needed during the rest of the boot process are loaded, e.g. LVM modules.</p>
<p>We apply changes to GRUB2 by editing the <code>/etc/default/grub</code> file and we pass boot arguments to the kernel by editing the <code>GRUB_CMDLINE_LINUX</code> line:</p>
<pre><code>[root@server1 system]# cat /etc/<span class="hljs-keyword">default</span>/grub 
GRUB_TIMEOUT=<span class="hljs-number">5</span>
GRUB_DISTRIBUTOR=<span class="hljs-string">"$(sed 's, release .*$,,g' /etc/system-release)"</span>
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=<span class="hljs-literal">true</span>
GRUB_TERMINAL_OUTPUT=<span class="hljs-string">"console"</span>
GRUB_CMDLINE_LINUX=<span class="hljs-string">"crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet"</span>
GRUB_DISABLE_RECOVERY=<span class="hljs-string">"true"</span>
GRUB_ENABLE_BLSCFG=<span class="hljs-literal">true</span>
</code></pre><p>The <code>GRUB_TIMEOUT</code> parameter defines how long GRUB2 waits before proceeding with the boot procedure. During this time you can press <code>e</code> to make one-time changes to the boot entry; for persistent changes, edit the <code>/etc/default/grub</code> file. </p>
<p>Removing the <code>rhgb</code> and <code>quiet</code> boot options would allow you to see the output of the boot procedure on screen.</p>
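<p>As a sketch of what that edit looks like, the two options can be stripped with <code>sed</code>. The example below works on a sample line rather than the live file, so it touches nothing on your system; on a real machine you would apply the same expressions to <code>/etc/default/grub</code> and then regenerate <code>grub.cfg</code>:</p>

```shell
# Illustration only: strip the rhgb and quiet boot options from a
# sample GRUB_CMDLINE_LINUX line. On a real system, apply the same
# sed expressions to /etc/default/grub, then run grub2-mkconfig.
line='GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"'
echo "$line" | sed -e 's/ rhgb//' -e 's/ quiet//'
# → GRUB_CMDLINE_LINUX="crashkernel=auto"
```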
<p>After making changes to <code>/etc/default/grub</code> the relevant GRUB file on the <code>/boot</code> partition needs to be regenerated. On a BIOS system this file is located in <code>/boot/grub2/grub.cfg</code>, while on a UEFI system the file is located in <code>/boot/efi/EFI/redhat/grub.cfg</code>. To regenerate these files, we issue the <code>grub2-mkconfig</code> command and redirect its output to either one of these files:
<code>grub2-mkconfig -o /boot/grub2/grub.cfg</code>
<code>grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg</code></p>
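<p>The right path can also be chosen automatically: the <code>/sys/firmware/efi</code> directory only exists when the system was booted in UEFI mode. A small sketch (it only echoes the command instead of running it, since regenerating grub.cfg requires root):</p>

```shell
# Pick the correct grub.cfg location based on firmware type.
# /sys/firmware/efi exists only on systems booted via UEFI.
if [ -d /sys/firmware/efi ]; then
    grub_cfg=/boot/efi/EFI/redhat/grub.cfg
else
    grub_cfg=/boot/grub2/grub.cfg
fi
echo "grub2-mkconfig -o $grub_cfg"
```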
]]></content:encoded></item><item><title><![CDATA[Basic Kernel Management]]></title><description><![CDATA[{{< image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" >}}
The Role of the Linux Kernel
The Linux kernel is the layer between the user who works with Linux from a shell environment and the available hardware. It manages the I/O i...]]></description><link>https://blog.joerismissaert.dev/basic-kernel-management</link><guid isPermaLink="true">https://blog.joerismissaert.dev/basic-kernel-management</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>{{&lt; image src="/img/redhat-8-logo.png" alt="Red Hat logo" position="center" &gt;}}</p>
<h2 id="heading-the-role-of-the-linux-kernel">The Role of the Linux Kernel</h2>
<p>The Linux kernel is the layer between the user who works with Linux from a shell environment and the available hardware. It manages the I/O instructions received from software and translates them into CPU instructions. The kernel also handles essential operating system tasks, such as the scheduler that makes sure any processes started on the OS are handled by the CPU.</p>
<p>OS tasks that are handled by the kernel are implemented by using different kernel threads. You can easily identify them with a command like <em>ps aux</em>; the kernel threads are listed between square brackets:</p>
<pre><code>[root@server1 ~]# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           <span class="hljs-number">1</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.5</span> <span class="hljs-number">180072</span> <span class="hljs-number">10364</span> ?        Ss   <span class="hljs-number">09</span>:<span class="hljs-number">50</span>   <span class="hljs-number">0</span>:<span class="hljs-number">01</span> /usr/lib/syst
root           <span class="hljs-number">2</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        S    <span class="hljs-number">09</span>:<span class="hljs-number">50</span>   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [kthreadd]
root           <span class="hljs-number">3</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        I&lt;   <span class="hljs-number">09</span>:<span class="hljs-number">50</span>   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [rcu_gp]
root           <span class="hljs-number">4</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        I&lt;   <span class="hljs-number">09</span>:<span class="hljs-number">50</span>   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [rcu_par_gp]
root           <span class="hljs-number">6</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        I&lt;   <span class="hljs-number">09</span>:<span class="hljs-number">50</span>   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [kworker/<span class="hljs-number">0</span>:<span class="hljs-number">0</span>H
root           <span class="hljs-number">8</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        I&lt;   <span class="hljs-number">09</span>:<span class="hljs-number">50</span>   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [mm_percpu_wq
root           <span class="hljs-number">9</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        S    <span class="hljs-number">09</span>:<span class="hljs-number">50</span>   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [ksoftirqd/<span class="hljs-number">0</span>]
root          <span class="hljs-number">10</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        I    <span class="hljs-number">09</span>:<span class="hljs-number">50</span>   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [rcu_sched]
</code></pre><p>The kernel also handles hardware initialization, making sure hardware can be used. To do so, drivers must be loaded and since the kernel is modular these drivers are loaded as kernel modules.</p>
<p>Hardware manufacturers do not always provide open source drivers; in that case the alternative is to use closed source drivers. This is not always ideal: a badly functioning driver can crash the entire kernel. If this happens with an open source driver, the Linux community can jump in to debug and fix the problem, which is not possible with a closed source driver. A closed source or proprietary driver may, however, provide additional functionality not available in the open source equivalent. A kernel that is using closed source drivers is known as a <em>tainted kernel</em>.</p>
<h2 id="heading-analyzing-what-the-kernel-is-doing">Analyzing What the Kernel is Doing</h2>
<p>A few different tools are provided by the Linux operating system to help check what the kernel is doing:</p>
<ul>
<li><strong>dmesg</strong></li>
<li>The <code>/proc</code> pseudo file system</li>
<li>The <strong>uname</strong> and <strong>hostnamectl</strong> utility</li>
</ul>
<p>When you require detailed information about kernel activity, you can use the <strong>dmesg</strong> command. This prints the content of the kernel ring buffer, an area of memory where the kernel keeps the recent log messages. Each entry in the output starts with a time indicator that shows the specific second the event was logged, relative to the start of the kernel.</p>
<p>An alternative to <strong>dmesg</strong> is <strong>journalctl --dmesg</strong> or <strong>journalctl -k</strong>. These commands show a wall-clock time indicator instead.</p>
<pre><code>[root@server1 ~]# dmesg | head
[    <span class="hljs-number">0.000000</span>] Linux version <span class="hljs-number">4.18</span><span class="hljs-number">.0</span><span class="hljs-number">-193.19</span><span class="hljs-number">.1</span>.el8_2.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version <span class="hljs-number">8.3</span><span class="hljs-number">.1</span> <span class="hljs-number">20191121</span> (Red Hat <span class="hljs-number">8.3</span><span class="hljs-number">.1</span><span class="hljs-number">-5</span>) (GCC)) #<span class="hljs-number">1</span> SMP Mon Sep <span class="hljs-number">14</span> <span class="hljs-number">14</span>:<span class="hljs-number">37</span>:<span class="hljs-number">00</span> UTC <span class="hljs-number">2020</span>
[    <span class="hljs-number">0.000000</span>] Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz<span class="hljs-number">-4.18</span><span class="hljs-number">.0</span><span class="hljs-number">-193.19</span><span class="hljs-number">.1</span>.el8_2.x86_64 root=<span class="hljs-regexp">/dev/m</span>apper/cl-root ro crashkernel=auto resume=<span class="hljs-regexp">/dev/m</span>apper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet
[    <span class="hljs-number">0.000000</span>] x86/fpu: x87 FPU will use FXSAVE
[    <span class="hljs-number">0.000000</span>] BIOS-provided physical RAM map:
[    <span class="hljs-number">0.000000</span>] BIOS-e820: [mem <span class="hljs-number">0x0000000000000000</span><span class="hljs-number">-0x000000000009fbff</span>] usable
[    <span class="hljs-number">0.000000</span>] BIOS-e820: [mem <span class="hljs-number">0x000000000009fc00</span><span class="hljs-number">-0x000000000009ffff</span>] reserved
[    <span class="hljs-number">0.000000</span>] BIOS-e820: [mem <span class="hljs-number">0x00000000000f0000</span><span class="hljs-number">-0x00000000000fffff</span>] reserved
[    <span class="hljs-number">0.000000</span>] BIOS-e820: [mem <span class="hljs-number">0x0000000000100000</span><span class="hljs-number">-0x000000007ffdcfff</span>] usable
[    <span class="hljs-number">0.000000</span>] BIOS-e820: [mem <span class="hljs-number">0x000000007ffdd000</span><span class="hljs-number">-0x000000007fffffff</span>] reserved
[    <span class="hljs-number">0.000000</span>] BIOS-e820: [mem <span class="hljs-number">0x00000000b0000000</span><span class="hljs-number">-0x00000000bfffffff</span>] reserved


[root@server1 ~]# journalctl -k | head
-- Logs begin at Fri <span class="hljs-number">2020</span><span class="hljs-number">-10</span><span class="hljs-number">-02</span> <span class="hljs-number">09</span>:<span class="hljs-number">50</span>:<span class="hljs-number">11</span> +<span class="hljs-number">04</span>, end at Fri <span class="hljs-number">2020</span><span class="hljs-number">-10</span><span class="hljs-number">-02</span> <span class="hljs-number">10</span>:<span class="hljs-number">53</span>:<span class="hljs-number">20</span> +<span class="hljs-number">04.</span> --
Oct <span class="hljs-number">02</span> <span class="hljs-number">09</span>:<span class="hljs-number">50</span>:<span class="hljs-number">11</span> server1.example.local kernel: Linux version <span class="hljs-number">4.18</span><span class="hljs-number">.0</span><span class="hljs-number">-193.19</span><span class="hljs-number">.1</span>.el8_2.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version <span class="hljs-number">8.3</span><span class="hljs-number">.1</span> <span class="hljs-number">20191121</span> (Red Hat <span class="hljs-number">8.3</span><span class="hljs-number">.1</span><span class="hljs-number">-5</span>) (GCC)) #<span class="hljs-number">1</span> SMP Mon Sep <span class="hljs-number">14</span> <span class="hljs-number">14</span>:<span class="hljs-number">37</span>:<span class="hljs-number">00</span> UTC <span class="hljs-number">2020</span>
Oct <span class="hljs-number">02</span> <span class="hljs-number">09</span>:<span class="hljs-number">50</span>:<span class="hljs-number">11</span> server1.example.local kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz<span class="hljs-number">-4.18</span><span class="hljs-number">.0</span><span class="hljs-number">-193.19</span><span class="hljs-number">.1</span>.el8_2.x86_64 root=<span class="hljs-regexp">/dev/m</span>apper/cl-root ro crashkernel=auto resume=<span class="hljs-regexp">/dev/m</span>apper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet
Oct <span class="hljs-number">02</span> <span class="hljs-number">09</span>:<span class="hljs-number">50</span>:<span class="hljs-number">11</span> server1.example.local kernel: x86/fpu: x87 FPU will use FXSAVE
Oct <span class="hljs-number">02</span> <span class="hljs-number">09</span>:<span class="hljs-number">50</span>:<span class="hljs-number">11</span> server1.example.local kernel: BIOS-provided physical RAM map:
Oct <span class="hljs-number">02</span> <span class="hljs-number">09</span>:<span class="hljs-number">50</span>:<span class="hljs-number">11</span> server1.example.local kernel: BIOS-e820: [mem <span class="hljs-number">0x0000000000000000</span><span class="hljs-number">-0x000000000009fbff</span>] usable
Oct <span class="hljs-number">02</span> <span class="hljs-number">09</span>:<span class="hljs-number">50</span>:<span class="hljs-number">11</span> server1.example.local kernel: BIOS-e820: [mem <span class="hljs-number">0x000000000009fc00</span><span class="hljs-number">-0x000000000009ffff</span>] reserved
Oct <span class="hljs-number">02</span> <span class="hljs-number">09</span>:<span class="hljs-number">50</span>:<span class="hljs-number">11</span> server1.example.local kernel: BIOS-e820: [mem <span class="hljs-number">0x00000000000f0000</span><span class="hljs-number">-0x00000000000fffff</span>] reserved
Oct <span class="hljs-number">02</span> <span class="hljs-number">09</span>:<span class="hljs-number">50</span>:<span class="hljs-number">11</span> server1.example.local kernel: BIOS-e820: [mem <span class="hljs-number">0x0000000000100000</span><span class="hljs-number">-0x000000007ffdcfff</span>] usable
Oct <span class="hljs-number">02</span> <span class="hljs-number">09</span>:<span class="hljs-number">50</span>:<span class="hljs-number">11</span> server1.example.local kernel: BIOS-e820: [mem <span class="hljs-number">0x000000007ffdd000</span><span class="hljs-number">-0x000000007fffffff</span>] reserved
</code></pre><p>Many of the performance-related commands and tools we use grab their information from the <code>/proc</code> file system. It contains detailed status information about what is happening on the machine. The <code>/proc</code> directory contains Process ID subdirectories with information about each particular process, as well as status files such as <code>/proc/partitions</code> and <code>/proc/meminfo</code>:</p>
<pre><code>[root@server1 ~]# cat /proc/<span class="hljs-number">1146</span>/status
<span class="hljs-attr">Name</span>:    kvdo0:cpuQ0
<span class="hljs-attr">Umask</span>:    <span class="hljs-number">0000</span>
<span class="hljs-attr">State</span>:    S (sleeping)
<span class="hljs-attr">Tgid</span>:    <span class="hljs-number">1146</span>
<span class="hljs-attr">Ngid</span>:    <span class="hljs-number">0</span>
<span class="hljs-attr">Pid</span>:    <span class="hljs-number">1146</span>
<span class="hljs-attr">PPid</span>:    <span class="hljs-number">2</span>
...

[root@server1 ~]# cat /proc/partitions 
major minor  #blocks  name

  <span class="hljs-number">11</span>        <span class="hljs-number">0</span>    <span class="hljs-number">8038400</span> sr0
   <span class="hljs-number">8</span>       <span class="hljs-number">64</span>    <span class="hljs-number">5242880</span> sde
   <span class="hljs-number">8</span>       <span class="hljs-number">48</span>    <span class="hljs-number">5242880</span> sdd
   <span class="hljs-number">8</span>        <span class="hljs-number">0</span>   <span class="hljs-number">26214400</span> sda
   <span class="hljs-number">8</span>        <span class="hljs-number">1</span>    <span class="hljs-number">1048576</span> sda1
   <span class="hljs-number">8</span>        <span class="hljs-number">2</span>   <span class="hljs-number">25164800</span> sda2
   <span class="hljs-number">8</span>       <span class="hljs-number">32</span>    <span class="hljs-number">5242880</span> sdc
   <span class="hljs-number">8</span>       <span class="hljs-number">16</span>    <span class="hljs-number">5242880</span> sdb
   <span class="hljs-number">8</span>       <span class="hljs-number">17</span>     <span class="hljs-number">102400</span> sdb1
   <span class="hljs-number">8</span>       <span class="hljs-number">18</span>     <span class="hljs-number">921600</span> sdb2
   <span class="hljs-number">8</span>       <span class="hljs-number">19</span>    <span class="hljs-number">2097152</span> sdb3
   <span class="hljs-number">8</span>       <span class="hljs-number">20</span>    <span class="hljs-number">2120687</span> sdb4
...



[root@server1 ~]# cat /proc/meminfo 
<span class="hljs-attr">MemTotal</span>:        <span class="hljs-number">1870616</span> kB
<span class="hljs-attr">MemFree</span>:           <span class="hljs-number">87464</span> kB
<span class="hljs-attr">MemAvailable</span>:     <span class="hljs-number">184724</span> kB
...
</code></pre><p>We can change kernel performance parameters at runtime by writing values to the <code>/proc/sys</code> pseudo file system. You can apply the changes permanently by writing the parameters to <code>/etc/sysctl.conf</code>. To see which parameters are currently in use, issue the <strong>sysctl -a</strong> command.</p>
<pre><code>[root@server1 ~]# sysctl -a | grep ip_forward
net.ipv4.ip_forward = <span class="hljs-number">0</span>

[root@server1 ~]# echo <span class="hljs-string">"1"</span> &gt; <span class="hljs-regexp">/proc/</span>sys/net/ipv4/ip_forward
[root@server1 ~]# sysctl -a | grep ip_forward
net.ipv4.ip_forward = <span class="hljs-number">1</span>


[root@server1 ~]# echo <span class="hljs-string">"net.ipv4.ip_forward = 1"</span> &gt;&gt; <span class="hljs-regexp">/etc/</span>sysctl.conf
[root@server1 ~]# reboot


[root@server1 ~]# sysctl -a | grep ip_forward
net.ipv4.ip_forward = <span class="hljs-number">1</span>
</code></pre><p>Other useful commands are <strong>uname</strong> and <strong>hostnamectl</strong>; they give different kinds of information about the OS:</p>
<pre><code>[root@server1 ~]# uname -a
Linux server1.example.local <span class="hljs-number">4.18</span><span class="hljs-number">.0</span><span class="hljs-number">-193.19</span><span class="hljs-number">.1</span>.el8_2.x86_64 #<span class="hljs-number">1</span> SMP Mon Sep <span class="hljs-number">14</span> <span class="hljs-number">14</span>:<span class="hljs-number">37</span>:<span class="hljs-number">00</span> UTC <span class="hljs-number">2020</span> x86_64 x86_64 x86_64 GNU/Linux

[root@server1 ~]# uname -r
<span class="hljs-number">4.18</span><span class="hljs-number">.0</span><span class="hljs-number">-193.19</span><span class="hljs-number">.1</span>.el8_2.x86_64

[root@server1 ~]# hostnamectl status
   Static hostname: server1.example.local
         Icon name: computer-vm
           <span class="hljs-attr">Chassis</span>: vm
        Machine ID: e40db12b26ec4e9bb6a6f295f6d4d83e
           Boot ID: <span class="hljs-number">5441210211394</span>c5098724e9b89426cb2
    <span class="hljs-attr">Virtualization</span>: kvm
  Operating System: CentOS Linux <span class="hljs-number">8</span> (Core)
       CPE OS Name: cpe:/o:centos:centos:<span class="hljs-number">8</span>
            <span class="hljs-attr">Kernel</span>: Linux <span class="hljs-number">4.18</span><span class="hljs-number">.0</span><span class="hljs-number">-193.19</span><span class="hljs-number">.1</span>.el8_2.x86_64
      <span class="hljs-attr">Architecture</span>: x86<span class="hljs-number">-64</span>
</code></pre><p>Lastly, you can <em>cat</em> the distribution release version:</p>
<pre><code>[root@server1 ~]# cat /etc/redhat-release 
CentOS Linux release <span class="hljs-number">8.2</span><span class="hljs-number">.2004</span> (Core)
</code></pre><h2 id="heading-working-with-kernel-modules">Working with Kernel Modules</h2>
<p>Since the release of Linux kernel 2.0, kernels no longer need to be recompiled to add functionality: they are modular. A modular kernel consists of a relatively small kernel core and provides driver support through modules that are loaded when they are required. Modules implement specific kernel functionality and are not limited to hardware drivers alone; file system support, for example, is also loaded as kernel modules.</p>
<h3 id="heading-understanding-hardware-initialization">Understanding Hardware Initialization</h3>
<p>The loading of drivers is an automated process:</p>
<ul>
<li>The kernel probes available hardware during boot.</li>
<li>When a hardware component is detected, the <strong>systemd-udevd</strong> process loads the appropriate driver and makes the device available.</li>
<li><strong>systemd-udevd</strong> reads the rules in <code>/usr/lib/udev/rules.d/</code>. These are system-provided rules that should not be modified.</li>
<li><strong>systemd-udevd</strong> reads custom rules from the <code>/etc/udev/rules.d</code> directory, if available.</li>
<li>Once the required kernel modules have been loaded, the status of the associated hardware is written to the sysfs file system on <code>/sys</code>. This pseudo file system tracks hardware-related settings.</li>
</ul>
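<p>A custom rule in <code>/etc/udev/rules.d</code> could look like the following sketch, which gives a specific USB disk a stable <code>/dev/backup</code> symlink. The vendor and model strings below are made-up placeholders, not real hardware; query the real values for your device with <code>udevadm info --query=property --name=/dev/sdb</code>:</p>

```
# Hypothetical /etc/udev/rules.d/99-backup-disk.rules
# ID_VENDOR/ID_MODEL values are placeholders -- replace with the
# properties reported by udevadm info for your own device.
SUBSYSTEM=="block", ENV{ID_VENDOR}=="ExampleVendor", ENV{ID_MODEL}=="ExampleDisk", SYMLINK+="backup"
```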
<p>The <strong>systemd-udevd</strong> process continuously monitors the plugging and unplugging of hardware devices. You can see this in action by plugging or unplugging a USB or other block device while the <strong>udevadm monitor</strong> command is running:</p>
<pre><code>[root@server1 ~]# udevadm monitor
monitor will print the received events <span class="hljs-keyword">for</span>:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent

KERNEL[<span class="hljs-number">7080.543250</span>] change   /devices/pci0000:<span class="hljs-number">00</span>/<span class="hljs-number">0000</span>:<span class="hljs-number">00</span>:<span class="hljs-number">1</span>f<span class="hljs-number">.2</span>/ata1/host0/target0:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>/<span class="hljs-number">0</span>:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>/block/sr0 (block)
UDEV  [<span class="hljs-number">7080.558849</span>] change   /devices/pci0000:<span class="hljs-number">00</span>/<span class="hljs-number">0000</span>:<span class="hljs-number">00</span>:<span class="hljs-number">1</span>f<span class="hljs-number">.2</span>/ata1/host0/target0:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>/<span class="hljs-number">0</span>:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>/block/sr0 (block)
KERNEL[<span class="hljs-number">7080.578292</span>] change   /devices/pci0000:<span class="hljs-number">00</span>/<span class="hljs-number">0000</span>:<span class="hljs-number">00</span>:<span class="hljs-number">1</span>f<span class="hljs-number">.2</span>/ata1/host0/target0:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>/<span class="hljs-number">0</span>:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>/block/sr0 (block)
UDEV  [<span class="hljs-number">7080.746283</span>] change   /devices/pci0000:<span class="hljs-number">00</span>/<span class="hljs-number">0000</span>:<span class="hljs-number">00</span>:<span class="hljs-number">1</span>f<span class="hljs-number">.2</span>/ata1/host0/target0:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>/<span class="hljs-number">0</span>:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>:<span class="hljs-number">0</span>/block/sr0 (block)
</code></pre><h3 id="heading-managing-kernel-modules">Managing Kernel Modules</h3>
<p>Although loading of drivers happens automatically when they are required, there might be occasions where you need to manually load the appropriate kernel module.</p>
<p>To list all currently used kernel modules we use the <strong>lsmod</strong> command:</p>
<pre><code>[root@server1 ~]# lsmod | head
Module                  Size  Used by
binfmt_misc            <span class="hljs-number">20480</span>  <span class="hljs-number">1</span>
nls_utf8               <span class="hljs-number">16384</span>  <span class="hljs-number">1</span>
isofs                  <span class="hljs-number">45056</span>  <span class="hljs-number">1</span>
fuse                  <span class="hljs-number">131072</span>  <span class="hljs-number">3</span>
uinput                 <span class="hljs-number">20480</span>  <span class="hljs-number">1</span>
xt_CHECKSUM            <span class="hljs-number">16384</span>  <span class="hljs-number">1</span>
ipt_MASQUERADE         <span class="hljs-number">16384</span>  <span class="hljs-number">3</span>
xt_conntrack           <span class="hljs-number">16384</span>  <span class="hljs-number">1</span>
ipt_REJECT             <span class="hljs-number">16384</span>  <span class="hljs-number">2</span>
</code></pre><p><strong>modinfo</strong> provides more information about a specific kernel module, including two interesting sections: alias and parm.
An alias is an alternative name that can be used to address the module, and the parm lines list parameters that can be set while loading the module.</p>
<pre><code>[root@server1 ~]# modinfo e1000
<span class="hljs-attr">filename</span>:       <span class="hljs-regexp">/lib/m</span>odules/<span class="hljs-number">4.18</span><span class="hljs-number">.0</span><span class="hljs-number">-193.19</span><span class="hljs-number">.1</span>.el8_2.x86_64/kernel/drivers/net/ethernet/intel/e1000/e1000.ko.xz
<span class="hljs-attr">version</span>:        <span class="hljs-number">7.3</span><span class="hljs-number">.21</span>-k8-NAPI
<span class="hljs-attr">license</span>:        GPL
<span class="hljs-attr">description</span>:    Intel(R) PRO/<span class="hljs-number">1000</span> Network Driver
<span class="hljs-attr">author</span>:         Intel Corporation, <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">linux.nics@intel.com</span>&gt;</span>
rhelversion:    8.2
srcversion:     9DFB28D9833DABBB7757EDD
alias:          pci:v00008086d00002E6Esv*sd*bc*sc*i*
...
depends:        
intree:         Y
name:           e1000
vermagic:       4.18.0-193.19.1.el8_2.x86_64 SMP mod_unload modversions 
sig_id:         PKCS#7
signer:         CentOS Linux kernel signing key
sig_key:        4C:02:86:8D:9E:A5:E0:4D:A9:C5:DF:8B:D7:28:EA:05:AF:C6:2A:6D
sig_hashalgo:   sha256
signature:      65:B3:87:34:C5:6F:E5:26:A7:41:90:2C:BB:20:04:54:6E:93:44:2A:
        86:73:D7:FF:FD:12:D3:17:74:EB:4B:9B:9C:FB:19:3F:D8:6A:16:10:
        0D:72:69:CA:63:B2:2E:63:A9:B4:84:94:0D:4B:C4:94:FC:E6:48:CC:
        95:DB:99:65:BC:6F:57:1C:F2:C5:CF:F0:BE:F2:8B:63:11:8F:43:C1:
        8C:1C:D3:03:6B:BC:76:0E:18:06:76:F1:C1:CF:72:84:04:92:07:A7:
        C4:59:4B:7B:72:86:CD:EB:A8:C5:EF:D9:39:FD:B0:38:1A:E3:49:18:
        04:88:39:8D:B9:98:D3:5E:EA:0C:CA:B7:44:51:64:F8:7F:CA:01:75:
        9A:48:DD:E9:2E:E1:38:60:C6:33:37:1A:81:79:B1:22:63:16:5B:42:
        DF:E2:08:9B:B4:47:47:9E:9A:69:5D:62:E9:9E:72:A3:7D:D0:E0:B0:
        51:24:EA:AD:B1:0B:08:67:63:89:17:19:9A:DF:13:82:FB:C2:DA:32:
        97:AA:07:C4:75:A5:6A:A1:E4:AF:D3:64:04:45:24:3F:40:81:21:12:
        99:11:54:2C:04:0C:86:98:56:79:C9:34:EC:B9:96:4F:52:BE:A4:CC:
        0A:3D:0F:78:5B:0E:1A:E3:7A:57:45:FA:B3:80:EF:B0:2E:75:8F:8B:
        FE:71:A1:74:63:DC:B2:7E:29:AD:87:4B:6E:AF:66:F7:81:34:1E:0B:
        7D:02:71:93:20:01:A7:9B:08:5F:AD:8C:EA:F5:E4:1E:4A:D1:AF:90:
        CE:23:9A:65:5B:F7:DE:94:3C:DF:6F:5C:15:51:62:D1:64:05:B3:8A:
        9A:F4:83:3C:C4:31:E4:EE:A5:6C:0D:56:96:DC:F1:00:53:91:78:BD:
        D4:20:03:A1:59:07:58:16:B0:8D:7B:19:E6:6A:A3:31:81:7E:31:ED:
        77:66:58:B0:F5:68:4E:A0:FA:5C:8B:56:40:4A:BB:77:E3:E3:13:62:
        1B:E5:5C:13
parm:           TxDescriptors:Number of transmit descriptors (array of int)
parm:           RxDescriptors:Number of receive descriptors (array of int)
parm:           Speed:Speed setting (array of int)
parm:           Duplex:Duplex setting (array of int)
parm:           AutoNeg:Advertised auto-negotiation setting (array of int)
parm:           FlowControl:Flow Control setting (array of int)
parm:           XsumRX:Disable or enable Receive Checksum offload (array of int)
parm:           TxIntDelay:Transmit Interrupt Delay (array of int)
parm:           TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int)
parm:           RxIntDelay:Receive Interrupt Delay (array of int)
parm:           RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int)
parm:           InterruptThrottleRate:Interrupt Throttling Rate (array of int)
parm:           SmartPowerDownEnable:Enable PHY smart power down (array of int)
parm:           copybreak:Maximum size of packet that is copied to a new buffer on receive (uint)
parm:           debug:Debug level (0=none,...,16=all) (int)</span>
</code></pre><p>To manually load and unload modules we use the <strong>modprobe</strong> and <strong>modprobe -r</strong> commands.
The <strong>modprobe</strong> command automatically loads any dependencies. </p>
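<p>A minimal sketch of that workflow, wrapped in two shell functions (the <code>vfat</code> module named in the usage comment is just an example; actually loading and unloading modules requires root privileges):</p>

```shell
# Sketch: load a module and verify it shows up in lsmod, then
# unload it again. modprobe resolves and loads any dependencies
# automatically; modprobe -r refuses if the module is still in use.
load_module() {
    modprobe "$1" && lsmod | grep -q "^$1"
}
unload_module() {
    modprobe -r "$1"
}
# Usage (as root): load_module vfat; unload_module vfat
```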
<h3 id="heading-checking-driver-availability-for-hardware-devices">Checking Driver Availability for Hardware Devices</h3>
<p>To check whether a particular device is supported and thus has a module loaded, you can use the <strong>lspci -k</strong> command.
If there are any devices for which no kernel module was loaded, you're likely dealing with an unsupported device.</p>
<pre><code>[root@server1 ~]# lspci -k
<span class="hljs-number">00</span>:<span class="hljs-number">00.0</span> Host bridge: Intel Corporation <span class="hljs-number">82</span>G33/G31/P35/P31 Express DRAM Controller
    <span class="hljs-attr">Subsystem</span>: Red Hat, Inc. QEMU Virtual Machine
<span class="hljs-number">00</span>:<span class="hljs-number">01.0</span> VGA compatible controller: Red Hat, Inc. Virtio GPU (rev <span class="hljs-number">01</span>)
    <span class="hljs-attr">Subsystem</span>: Red Hat, Inc. Device <span class="hljs-number">1100</span>
    Kernel driver <span class="hljs-keyword">in</span> use: virtio-pci
...
</code></pre><h3 id="heading-managing-kernel-module-parameters">Managing Kernel Module Parameters</h3>
<p>You may want to load kernel modules with specific parameters you've discovered using the <strong>modinfo</strong> command. To do so, specify the name of the parameter and its value in the <strong>modprobe</strong> command:</p>
<pre><code>[root@server1 ~]# modprobe cdrom debug=<span class="hljs-number">1</span>
[root@server1 ~]#
</code></pre><p>To make this persistent, you can add an entry to <code>/etc/modprobe.conf</code> or create a file in the <code>/etc/modprobe.d/</code> directory where the name of the file matches the module name and the content specifies the parameters you want to set:</p>
<pre><code>[root@server1 modprobe.d]# pwd
/etc/modprobe.d
[root@server1 modprobe.d]# cat cdrom.conf 
options cdrom debug=<span class="hljs-number">1</span>
</code></pre><h2 id="heading-upgrading-the-linux-kernel">Upgrading the Linux Kernel</h2>
<p>When upgrading the Linux kernel, a new version of the kernel is installed next to the current version and will be used by default.
The kernel files for the last four kernels installed will be kept in <code>/boot</code>. The GRUB2 boot loader automatically picks up all kernels found in this directory, allowing you to select an older kernel at boot time in case the newly installed kernel doesn't boot correctly.</p>
<p>To install a new version of the kernel, issue the <strong>yum upgrade kernel</strong> or <strong>yum install kernel</strong> command.</p>
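<p>The number of kernels that is retained is controlled by the <code>installonly_limit</code> option. As a sketch (the exact default value may differ per release), the relevant fragment of <code>/etc/yum.conf</code> looks like this:</p>

```ini
# /etc/yum.conf -- on RHEL 8 this is a symlink to /etc/dnf/dnf.conf
[main]
installonly_limit=4
```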
]]></content:encoded></item><item><title><![CDATA[Advanced Storage: Virtual Data Optimizer]]></title><description><![CDATA[
Virtual Data Optimizer is a storage solution developed to reduce disk space usage on block devices by applying deduplication features. VDO creates volumes on top of any e...]]></description><link>https://blog.joerismissaert.dev/advanced-storage-virtual-data-optimizer</link><guid isPermaLink="true">https://blog.joerismissaert.dev/advanced-storage-virtual-data-optimizer</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Thu, 20 Aug 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[
<p>Virtual Data Optimizer is a storage solution developed to reduce disk space usage on block devices by applying deduplication features. VDO creates volumes on top of any existing block device, on which you can either create an XFS file system or use the volume as a Physical Volume in an LVM setup.</p>
<p>VDO uses three common technologies:</p>
<ul>
<li><strong>Zero-block elimination</strong> to filter out data blocks that contain only zeros.</li>
<li><strong>Deduplication</strong> of redundant data blocks.</li>
<li><strong>Compression</strong> of data blocks, performed by the kvdo kernel module.</li>
</ul>
<p>Typical use cases for VDO are host platforms for containers and virtual machines, and cloud block storage. 
For these types of environments, a logical size of up to 10 times the physical size is commonly used. </p>
<h2 id="heading-setting-up-vdo">Setting up VDO</h2>
<p>To use VDO the underlying block devices must have a minimal size of 4GiB and the <strong>vdo</strong> and <strong>kmod-kvdo</strong> packages must be installed. </p>
<p>We create the VDO device using the <code>vdo create</code> command, specify a name using the <code>--name=</code> option and the backing block device using the <code>--device=</code> option, and we <em>can</em> specify the logical size using the <code>--vdoLogicalSize=</code> option. e.g. <code>vdo create --name=myvdo1 --device=/dev/sdb --vdoLogicalSize=1T</code></p>
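<p>The rule of thumb of a logical size up to 10 times the physical size can be turned into a quick sizing calculation. A minimal sketch in shell, assuming a hypothetical 5 GiB <code>/dev/sdb</code> backing device and the name <code>myvdo1</code>:</p>

```shell
# Hypothetical sizing helper: apply the "logical size up to 10x physical"
# rule of thumb used for VM/container workloads.
physical_gib=5                       # physical size of the backing device in GiB
logical_gib=$((physical_gib * 10))   # suggested thin-provisioned logical size

# Print the vdo create command we would issue for this sizing.
echo "vdo create --name=myvdo1 --device=/dev/sdb --vdoLogicalSize=${logical_gib}G"
```

Running the sketch only prints the command; the device and name are placeholders to adjust for your setup.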
<p>Once the device is created, we can put an XFS file system on top of it:
<code>mkfs.xfs -K /dev/mapper/myvdo1</code>
The <code>-K</code> option prevents unused blocks from being discarded immediately, making the command much faster.</p>
<p>At this point we issue the <code>udevadm settle</code> command to ensure device nodes have been created successfully.</p>
<p>To persistently mount the VDO file system using the <code>/etc/fstab</code> file we must include the following mount options:
<code>x-systemd.requires=vdo.service,discard</code>
The first option makes sure the <strong>vdo</strong> service is started before systemd tries to mount the file system, while <code>discard</code> lets VDO reclaim blocks that are freed in the file system. </p>
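<p>Assuming a VDO device named <code>myvdo1</code> mounted on a hypothetical <code>/mnt/myvdo1</code> directory, the resulting <code>/etc/fstab</code> entry could look like this sketch:</p>

```
/dev/mapper/myvdo1  /mnt/myvdo1  xfs  defaults,x-systemd.requires=vdo.service,discard  0 0
```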
<p>An alternative method to persistently mount the VDO file system is to use the example systemd mount unit found in <code>/usr/share/doc/vdo/examples/systemd</code>.
Copy it to <code>/etc/systemd/system/mountpointname.mount</code> and edit the following lines:</p>
<pre><code>name = 
What = 
Where =
</code></pre><p>The unit file name must correspond to the <em>name</em>, <em>What</em> and <em>Where</em> values.
Make sure to enable and start the mount at boot: <code>systemctl enable --now mountpointname.mount</code></p>
<h3 id="heading-example">Example</h3>
<pre><code>[root@server1 ~]# vdo create --name=vdo1 --device=<span class="hljs-regexp">/dev/</span>sdb --vdoLogicalSize=<span class="hljs-number">1</span>T
Creating VDO vdo1
      The VDO volume can address <span class="hljs-number">2</span> GB <span class="hljs-keyword">in</span> <span class="hljs-number">1</span> data slab.
      It can grow to address at most <span class="hljs-number">16</span> TB <span class="hljs-keyword">of</span> physical storage <span class="hljs-keyword">in</span> <span class="hljs-number">8192</span> slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO vdo1
Starting compression on VDO vdo1
VDO instance <span class="hljs-number">0</span> volume is ready at /dev/mapper/vdo1


[root@server1 ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           <span class="hljs-number">8</span>:<span class="hljs-number">0</span>    <span class="hljs-number">0</span>   <span class="hljs-number">25</span>G  <span class="hljs-number">0</span> disk 
├─sda1        <span class="hljs-number">8</span>:<span class="hljs-number">1</span>    <span class="hljs-number">0</span>    <span class="hljs-number">1</span>G  <span class="hljs-number">0</span> part /boot
└─sda2        <span class="hljs-number">8</span>:<span class="hljs-number">2</span>    <span class="hljs-number">0</span>   <span class="hljs-number">24</span>G  <span class="hljs-number">0</span> part 
  ├─cl-root <span class="hljs-number">253</span>:<span class="hljs-number">0</span>    <span class="hljs-number">0</span>   <span class="hljs-number">22</span>G  <span class="hljs-number">0</span> lvm  /
  └─cl-swap <span class="hljs-number">253</span>:<span class="hljs-number">1</span>    <span class="hljs-number">0</span>  <span class="hljs-number">2.1</span>G  <span class="hljs-number">0</span> lvm  [SWAP]
sdb           <span class="hljs-number">8</span>:<span class="hljs-number">16</span>   <span class="hljs-number">0</span>    <span class="hljs-number">5</span>G  <span class="hljs-number">0</span> disk 
└─vdo1      <span class="hljs-number">253</span>:<span class="hljs-number">2</span>    <span class="hljs-number">0</span>    <span class="hljs-number">1</span>T  <span class="hljs-number">0</span> vdo  
sdc           <span class="hljs-number">8</span>:<span class="hljs-number">32</span>   <span class="hljs-number">0</span>    <span class="hljs-number">5</span>G  <span class="hljs-number">0</span> disk 

[root@server1 ~]# mkfs.xfs -K /dev/mapper/vdo1 
meta-data=<span class="hljs-regexp">/dev/m</span>apper/vdo1       isize=<span class="hljs-number">512</span>    agcount=<span class="hljs-number">4</span>, agsize=<span class="hljs-number">67108864</span> blks
         =                       sectsz=<span class="hljs-number">4096</span>  attr=<span class="hljs-number">2</span>, projid32bit=<span class="hljs-number">1</span>
         =                       crc=<span class="hljs-number">1</span>        finobt=<span class="hljs-number">1</span>, sparse=<span class="hljs-number">1</span>, rmapbt=<span class="hljs-number">0</span>
         =                       reflink=<span class="hljs-number">1</span>
data     =                       bsize=<span class="hljs-number">4096</span>   blocks=<span class="hljs-number">268435456</span>, imaxpct=<span class="hljs-number">5</span>
         =                       sunit=<span class="hljs-number">0</span>      swidth=<span class="hljs-number">0</span> blks
naming   =version <span class="hljs-number">2</span>              bsize=<span class="hljs-number">4096</span>   ascii-ci=<span class="hljs-number">0</span>, ftype=<span class="hljs-number">1</span>
log      =internal log           bsize=<span class="hljs-number">4096</span>   blocks=<span class="hljs-number">131072</span>, version=<span class="hljs-number">2</span>
         =                       sectsz=<span class="hljs-number">4096</span>  sunit=<span class="hljs-number">1</span> blks, lazy-count=<span class="hljs-number">1</span>
realtime =none                   extsz=<span class="hljs-number">4096</span>   blocks=<span class="hljs-number">0</span>, rtextents=<span class="hljs-number">0</span>

[root@server1 ~]# udevadm settle

[root@server1 ~]# cp /usr/share/doc/vdo/examples/systemd/VDO.mount.example /etc/systemd/system/vdo1.mount
[root@server1 ~]# vim /etc/systemd/system/vdo1.mount 
....

[root@server1 ~]# cat /etc/systemd/system/vdo1.mount 
[Unit]
Description = Mount filesystem that lives on VDO
name = vdo1.mount
Requires = vdo.service systemd-remount-fs.service
After = multi-user.target
Conflicts = umount.target

[Mount]
What = <span class="hljs-regexp">/dev/m</span>apper/vdo1
Where = /vdo1
Type = xfs
Options = discard

[Install]
WantedBy = multi-user.target


[root@server1 ~]# systemctl enable --now vdo1.mount

[root@server1 ~]# vdostats --human-readable 
Device                    Size      Used Available Use% Space saving%
<span class="hljs-regexp">/dev/m</span>apper/vdo1          <span class="hljs-number">5.0</span>G      <span class="hljs-number">3.0</span>G      <span class="hljs-number">2.0</span>G  <span class="hljs-number">60</span>%           <span class="hljs-number">99</span>%

[root@server1 ~]# df -h /vdo1/
Filesystem        Size  Used Avail Use% Mounted on
/dev/mapper/vdo1  <span class="hljs-number">1.0</span>T  <span class="hljs-number">7.2</span>G <span class="hljs-number">1017</span>G   <span class="hljs-number">1</span>% /vdo1

[root@server1 ~]# reboot
</code></pre>]]></content:encoded></item><item><title><![CDATA[Advanced Storage: Configuring Stratis]]></title><description><![CDATA[
Stratis, created as an answer to Btrfs and ZFS by Red Hat, is a volume-managing file system that introduces advanced storage features like:

Thin-provisioning: The file s...]]></description><link>https://blog.joerismissaert.dev/advanced-storage-configuring-stratis</link><guid isPermaLink="true">https://blog.joerismissaert.dev/advanced-storage-configuring-stratis</guid><dc:creator><![CDATA[Joeri JM Smissaert]]></dc:creator><pubDate>Mon, 17 Aug 2020 00:00:00 GMT</pubDate><content:encoded><![CDATA[
<p>Stratis, created as an answer to Btrfs and ZFS by Red Hat, is a volume-managing file system that introduces advanced storage features like:</p>
<ul>
<li><strong>Thin-provisioning</strong>: The file system presents itself to users as much bigger than it really is. Useful in virtualized environments.</li>
<li><strong>Snapshots</strong>: Allows users to back up the current state of the file system and makes it easy to revert to a previous state.</li>
<li><strong>Cache tier</strong>: Allows a faster block device, such as an SSD, to be used as a cache for slower devices, a concept borrowed from Ceph storage, where it keeps data physically closer to the client for faster access.</li>
<li><strong>Programmatic API</strong>: Storage can be configured and modified through API access, particularly useful in cloud environments.</li>
<li><strong>Monitoring and repair</strong>: Stratis has built-in features to monitor and repair the file system, compared to traditional file systems which would rely on tools like <strong>fsck</strong>. </li>
</ul>
<h2 id="heading-stratis-architecture">Stratis Architecture</h2>
<p>The lowest layer in the Stratis architecture is the pool, which is comparable to an LVM volume group. The pool represents all available storage and consists of one or more storage devices (referred to as <em>blockdev</em>). These block devices can be of any type, including LVM devices but <em>not</em> partitions, and cannot be thin-provisioned themselves, as Stratis creates volumes that are thin-provisioned. Stratis creates a <code>/stratis/poolname</code> directory for each pool; this directory contains links to devices that represent the file systems in the pool. </p>
<p>File systems are created from the Stratis pool and live in a volume on top of it. A pool can contain one or more file systems. Stratis only works with XFS file systems, and these are integrated within the Stratis volume: you should not reformat or reconfigure XFS file systems that are managed by Stratis. </p>
<p>The file systems are thin-provisioned: they don't have a fixed size and grow automatically as more data is added to the file system.</p>
<h2 id="heading-creating-and-mounting-stratis-storage">Creating and Mounting Stratis Storage</h2>
<p>To create Stratis storage, we need to create a pool from a block device and add a file system on top of the pool. Block devices need to be 1GiB at a minimum. Note that a Stratis file system occupies a minimum of 527MiB even if no data has been added. </p>
<p>Let's make sure we have the <code>stratis-cli</code> and <code>stratisd</code> packages installed, then start and enable the <code>stratisd</code> daemon:</p>
<pre><code>[root@server1 ~]# yum install stratis-cli stratisd
...
[root@server1 ~]# systemctl enable --now stratisd
[root@server1 ~]# systemctl status stratisd
● stratisd.service - A daemon that manages a pool <span class="hljs-keyword">of</span> block devices to create flexible file systems
   <span class="hljs-attr">Loaded</span>: loaded (<span class="hljs-regexp">/usr/</span>lib/systemd/system/stratisd.service; enabled; vendor preset: enabled)
   <span class="hljs-attr">Active</span>: active (running) since Wed <span class="hljs-number">2020</span><span class="hljs-number">-08</span><span class="hljs-number">-19</span> <span class="hljs-number">12</span>:<span class="hljs-number">25</span>:<span class="hljs-number">50</span> EDT; <span class="hljs-number">4</span>s ago
</code></pre><p>We create a pool from one of the available block devices. Make sure the block device does not contain a file system (check with <code>blkid -p /dev/sdx</code>) or partition table; if it does, wipe it with the <strong>wipefs</strong> command, e.g. <code>wipefs --all /dev/sdx</code>.</p>
<pre><code>[root@server1 ~]# stratis pool create mypool1 /dev/sda
[root@server1 ~]# stratis pool list
Name     Total Physical Size  Total Physical Used
mypool1               <span class="hljs-number">10</span> GiB               <span class="hljs-number">52</span> MiB
</code></pre><p>Next, we create the <code>myfs1</code> file system on top of the <code>mypool1</code> pool:</p>
<pre><code>[root@server1 ~]# stratis fs create mypool1 myfs1
[root@server1 ~]# stratis fs list
Pool Name  Name   Used     Created            Device                  UUID                            
mypool1    myfs1  <span class="hljs-number">545</span> MiB  Aug <span class="hljs-number">19</span> <span class="hljs-number">2020</span> <span class="hljs-number">12</span>:<span class="hljs-number">28</span>  /stratis/mypool1/myfs1  ffdbb3a131f6421c990f69aa8d87c6aa
[root@server1 ~]#
</code></pre><p>To persistently mount a Stratis file system, the UUID must be used in the <code>/etc/fstab</code> file and the mount option <code>x-systemd.requires=stratisd.service</code> must be specified to ensure that systemd waits to activate this device until the stratisd service is loaded:</p>
<pre><code>[root@server1 ~]# blkid -p /stratis/mypool1/myfs1
/stratis/mypool1/myfs1: UUID=<span class="hljs-string">"ffdbb3a1-31f6-421c-990f-69aa8d87c6aa"</span> TYPE=<span class="hljs-string">"xfs"</span> USAGE=<span class="hljs-string">"filesystem"</span>

[root@server1 ~]# mkdir /mnt/myfs1
[root@server1 ~]# vim /etc/fstab
...
UUID=ffdbb3a1<span class="hljs-number">-31</span>f6<span class="hljs-number">-421</span>c<span class="hljs-number">-990</span>f<span class="hljs-number">-69</span>aa8d87c6aa       /mnt/myfs1      xfs     defaults,x-systemd.requires=stratisd.service    <span class="hljs-number">0</span> <span class="hljs-number">0</span>
...
[root@server1 ~]# 
[root@server1 ~]# mount -a
[root@server1 ~]# reboot
</code></pre><h2 id="heading-managing-stratis">Managing Stratis</h2>
<p>Traditional Linux tools cannot handle thin-provisioned volumes, we need to use the Stratis specific tools:</p>
<ul>
<li><strong>stratis blockdev</strong>: Shows information about all block devices.</li>
<li><strong>stratis pool</strong>: Shows information about Stratis pools.</li>
<li><strong>stratis fs</strong>: Shows information about file systems.</li>
</ul>
<p>You can use tab-completion on the above commands to reveal specific options. </p>
<h3 id="heading-expanding-and-renaming-a-pool-and-file-system">Expanding and Renaming a Pool and File System</h3>
<p>We can add a block device to a pool to expand the storage capacity of the pool using the <strong>stratis pool add-data poolname blockdevice</strong> command:</p>
<pre><code>[root@server1 ~]# stratis pool list
Name     Total Physical Size  Total Physical Used
mypool1               <span class="hljs-number">10</span> GiB              <span class="hljs-number">597</span> MiB

[root@server1 ~]# stratis pool add-data mypool1 /dev/sdb

[root@server1 ~]# stratis pool list
Name     Total Physical Size  Total Physical Used
mypool1               <span class="hljs-number">15</span> GiB              <span class="hljs-number">601</span> MiB
</code></pre><h3 id="heading-destroying-a-pool-and-file-system">Destroying a Pool and File System</h3>
<p>To destroy a pool and file system, we need to unmount the file system first. Then use the <strong>stratis fs destroy poolname fsname</strong> command, followed by the <strong>stratis pool destroy poolname</strong> command:</p>
<pre><code>[root@server1 ~]# stratis fs list
Pool Name  Name   Used     Created            Device                  UUID                            
mypool1    myfs1  <span class="hljs-number">545</span> MiB  Aug <span class="hljs-number">19</span> <span class="hljs-number">2020</span> <span class="hljs-number">12</span>:<span class="hljs-number">28</span>  /stratis/mypool1/myfs1  ffdbb3a131f6421c990f69aa8d87c6aa

[root@server1 ~]# umount /stratis/mypool1/myfs1
[root@server1 ~]# stratis fs destroy mypool1 myfs1

[root@server1 ~]# stratis fs list
Pool Name  Name  Used  Created  Device  UUID

[root@server1 ~]# stratis pool list
Name     Total Physical Size  Total Physical Used
mypool1               <span class="hljs-number">15</span> GiB               <span class="hljs-number">56</span> MiB

[root@server1 ~]# stratis pool destroy mypool1 

[root@server1 ~]# stratis pool list
Name  Total Physical Size  Total Physical Used
[root@server1 ~]#
</code></pre><h3 id="heading-creating-and-accessing-a-stratis-snapshot">Creating and Accessing a Stratis Snapshot</h3>
<p>In Stratis, a snapshot is a regular Stratis file system created as a copy of another Stratis file system. The snapshot initially contains the same file content as the original file system, but can change as the snapshot is modified. Whatever changes you make to the snapshot will not be reflected in the original file system. </p>
<p>To create a Stratis snapshot, use <strong>stratis fs snapshot poolname fsname snapshotname</strong>.
To access the snapshot, mount it as a regular file system from the <code>/stratis/poolname/</code> directory: <strong>mount /stratis/poolname/snapshotname mount-point</strong></p>
<pre><code>[root@server1 ~]# stratis pool create mypool1 /dev/sda
[root@server1 ~]# stratis pool add-data mypool1 /dev/sdb
[root@server1 ~]# stratis pool list
Name     Total Physical Size  Total Physical Used
mypool1               <span class="hljs-number">15</span> GiB               <span class="hljs-number">56</span> MiB

[root@server1 ~]# stratis fs create mypool1 myfs1
[root@server1 ~]# stratis fs list
Pool Name  Name   Used     Created            Device                  UUID                            
mypool1    myfs1  <span class="hljs-number">545</span> MiB  Aug <span class="hljs-number">19</span> <span class="hljs-number">2020</span> <span class="hljs-number">13</span>:<span class="hljs-number">16</span>  /stratis/mypool1/myfs1  d9e0c47f26e44e0b8990a6aa7546d0f7

[root@server1 ~]# stratis fs snapshot mypool1 myfs1 myfs1snapshot
[root@server1 ~]# stratis fs list
Pool Name  Name           Used     Created            Device                          UUID                            
mypool1    myfs1          <span class="hljs-number">545</span> MiB  Aug <span class="hljs-number">19</span> <span class="hljs-number">2020</span> <span class="hljs-number">13</span>:<span class="hljs-number">16</span>  /stratis/mypool1/myfs1          d9e0c47f26e44e0b8990a6aa7546d0f7
mypool1    myfs1snapshot  <span class="hljs-number">545</span> MiB  Aug <span class="hljs-number">19</span> <span class="hljs-number">2020</span> <span class="hljs-number">13</span>:<span class="hljs-number">17</span>  /stratis/mypool1/myfs1snapshot  b2fb662124a4424c9d21429012fcfdc4

[root@server1 ~]# mkdir -p /mnt/myfs1snapshot
[root@server1 ~]# mount /stratis/mypool1/myfs1snapshot /mnt/myfs1snapshot/
[root@server1 ~]# umount /mnt/myfs1snapshot
[root@server1 ~]# mount /stratis/mypool1/myfs1 /mnt/myfs1
[root@server1 ~]#
</code></pre><h3 id="heading-reverting-a-stratis-file-system-to-a-previous-snapshot">Reverting a Stratis File System to a Previous Snapshot</h3>
<p>It's a good idea to back up the current file system before reverting to a previous snapshot:</p>
<pre><code>[root@server1 ~]# stratis fs snapshot mypool1 myfs1 myfs1snapshot2
[root@server1 ~]#
</code></pre><p>Next, we unmount and remove the original file system:</p>
<pre><code>[root@server1 ~]# umount /mnt/myfs1
[root@server1 ~]# stratis fs destroy mypool1 myfs1
[root@server1 ~]#
</code></pre><p>We create a copy of a previous snapshot which we wish to restore, under the name of the original file system:</p>
<pre><code>[root@server1 ~]# stratis fs list
Pool Name  Name            Used     Created            Device                           UUID                            
mypool1    myfs1snapshot2  <span class="hljs-number">545</span> MiB  Aug <span class="hljs-number">19</span> <span class="hljs-number">2020</span> <span class="hljs-number">13</span>:<span class="hljs-number">23</span>  /stratis/mypool1/myfs1snapshot2  f54d88f686d64acd94c3a7d73dac92f5
mypool1    myfs1snapshot   <span class="hljs-number">545</span> MiB  Aug <span class="hljs-number">19</span> <span class="hljs-number">2020</span> <span class="hljs-number">13</span>:<span class="hljs-number">17</span>  /stratis/mypool1/myfs1snapshot   b2fb662124a4424c9d21429012fcfdc4

[root@server1 ~]# stratis fs snapshot mypool1 myfs1snapshot myfs1
[root@server1 ~]# stratis fs list
Pool Name  Name            Used     Created            Device                           UUID                            
mypool1    myfs1snapshot2  <span class="hljs-number">545</span> MiB  Aug <span class="hljs-number">19</span> <span class="hljs-number">2020</span> <span class="hljs-number">13</span>:<span class="hljs-number">23</span>  /stratis/mypool1/myfs1snapshot2  f54d88f686d64acd94c3a7d73dac92f5
mypool1    myfs1           <span class="hljs-number">545</span> MiB  Aug <span class="hljs-number">19</span> <span class="hljs-number">2020</span> <span class="hljs-number">13</span>:<span class="hljs-number">31</span>  /stratis/mypool1/myfs1           <span class="hljs-number">82</span>f75da64c744079b1c2ae51792812a0
mypool1    myfs1snapshot   <span class="hljs-number">545</span> MiB  Aug <span class="hljs-number">19</span> <span class="hljs-number">2020</span> <span class="hljs-number">13</span>:<span class="hljs-number">17</span>  /stratis/mypool1/myfs1snapshot   b2fb662124a4424c9d21429012fcfdc4
</code></pre><p>We mount the snapshot, now accessible with the same name as the original file system:</p>
<pre><code>[root@server1 ~]# mount /stratis/mypool1/myfs1 /mnt/myfs1
[root@server1 ~]#
</code></pre><h3 id="heading-removing-a-stratis-snapshot">Removing a Stratis Snapshot</h3>
<p>We remove a Stratis snapshot by unmounting it first if required, then using the <strong>stratis fs destroy poolname snapshotname</strong> command.</p>
<pre><code>[root@server1 ~]# stratis fs destroy mypool1 myfs1snapshot2
[root@server1 ~]#
</code></pre>]]></content:encoded></item></channel></rss>